
Facebook sparks outrage over photo use in AI training


The controversy

Meta has triggered widespread backlash after revealing it uses Facebook and Instagram photos to train its AI models. Many users were unaware that their images could be swept into training data without an explicit opt-in.

The issue raises serious questions about consent, privacy, and the boundaries of user trust in large platforms. The next slides unpack how Meta uses those images, why people are upset, and what you can do about it.


Photo-training in Meta’s AI models

Meta’s Emu model was pre-trained on 1.1 billion image-text pairs, and Meta and reporters have said that publicly visible Facebook and Instagram content contributed to the datasets used to develop its image generation features.

Reporting and creator complaints say the datasets included many publicly shared photos that some users expected to be seen only by followers or friends, not ingested into AI training pipelines. The scale of data collection has alarmed privacy advocates and photographers alike.


Public posts used for training

In September 2024, Meta said it would resume using public British Facebook and Instagram posts (photos, captions and comments) from users aged 18+ to train its generative AI models.

The company said private messages and under-18 accounts would be excluded, but the use of publicly shared content still sparked outrage. Many users felt their content was being repurposed without fully informed consent.


Private photos and “cloud processing”

Meta introduced opt-in features in the U.S. and Canada that, if enabled, upload unpublished camera-roll photos to Meta’s cloud so its AI can group images, suggest collages, or offer editing ideas. Users can toggle the setting off and delete uploaded data.

Even though Meta says these images won’t immediately be used for training unless shared or edited, the fact that unpublished personal media is uploaded to its cloud at all raises new trust issues.


Mislabeling and transparency problems

Meta faced backlash for mislabeling real photographs as “Made with AI” and for confusing users about how much of their data is used for training.

This lowered credibility with photographers and regulators, who said the company’s labeling approach undermined trust and violated transparency norms.


Consent and opt-out issues

While Meta claims users can object, critics point out that many users never realize their photos are being used for training.

Regulatory responses differ: Brazil’s data authority ordered Meta to stop harvesting local data in 2024, while EU/UK regulators pushed for clearer objection/opt-out processes, and in some cases, Meta provided objection forms for users.

But the process and deadlines varied by country, and some users report it’s still unclear how to opt out of processing for older posts.


Photographer and creator outrage

Photographers and creators have protested that their images were used without licensing, attribution, or compensation. Legal challenges and prior cases over embedding and reuse show the issue is complex and not yet settled in courts.

The argument that these photos were “shared publicly anyway” doesn’t satisfy many creators, who believe explicit permission was required. The lack of compensation or clarity about usage has become a flashpoint.


Deep-fake and likeness risks

With AI trained on massive image sets, the ability to replicate likenesses or styles increases, raising the potential for deep-fakes, impersonation, or image-based misinformation.

Users expressed fear that their faces or photos could be used in ways they never intended. The lack of robust safeguards or audit trails intensifies the risk.


Legal and regulatory scrutiny

Regulators in the UK, EU, and beyond are examining how Meta uses user content for AI training. The UK’s Information Commissioner’s Office (ICO) has questioned whether Meta’s consent mechanisms and legal basis for training on user data are adequate.

Meta’s approach may come under further investigation for compliance with data-protection laws.


Impact on user trust and platform reputation

The revelations have shaken user trust in Meta: if a platform you post on can repurpose your photo for AI training, you may rethink what you share.

This has reputational costs and may lead to user migration or broader backlash. Platforms dependent on user-generated content may face higher friction as privacy expectations rise.


What users can do right now

Users concerned about photo use should set posts to “friends only” or private, review camera-roll upload settings, turn off opt-in AI features, and periodically audit which apps have access to their photos.

Knowing where your data flows is a step toward control. It also helps prepare for broader policy changes.


Reviewing Meta’s privacy settings

Go into your Meta (Facebook/Instagram) account: check “Your Activity”, “Privacy Center”, and “AI data usage” settings. Opt out of data use if available in your region.

Consider deleting or anonymizing older publicly shared photos if you’re uncomfortable with reuse. Keep an eye on updates to terms of service and image-training opt-in dialogs.


Choosing alternative platforms or tools

If you’re particularly privacy-sensitive and don’t want your photos used for AI training, consider limiting use of Meta’s AI features or using alternative platforms that explicitly exclude user data from model training.

Being aware of platform policies is increasingly critical for creators and individuals who value control over their media.


Creator rights and licensing future

The issue raises broader questions about image licensing in the AI era: should platforms pay creators when using their works to train models?

Should users see clearer “photo use for AI” disclosures? Policy and industry standards may evolve to require explicit permission or compensation for large-scale image use in AI systems.


What’s changing next

We can expect new policies, regulation of AI-training datasets, better labeling of AI-generated content, and opt-out mechanisms to become standard.

Meta and other platforms may tighten consent flows, improve transparency, and pay closer attention to creator rights. The public outcry may force faster change than regulators alone.



Call to action

Meta’s use of photos, public posts, and unpublished camera-roll images for AI training has triggered legitimate privacy and consent controversies.

While some users feel convenience outweighs risk, many believe tighter control, clearer opt-ins, and better creator rights are overdue. Review your settings today, advocate for stronger protections, and keep informed as policies evolve.


Which photo-data practice worries you most: training on public posts, scanning of private galleries, or the lack of creator compensation? And what would you like to see done about it? Tell us in the comments.


This slideshow was made with AI assistance and human editing.

