
Meta and Google confront first legal test over child safety claims


Meet the young woman taking on two tech giants

A young woman identified in court as K.G.M. became the focal point of two landmark suits against major platforms. She remembers being six years old, watching YouTube. By nine, she had Instagram. By middle school, she was trapped in a cycle of comparison, depression, and body image struggles she couldn’t escape.

Her lawyer stood before the jury and called Meta's and YouTube's platforms addiction machines. He said they were deliberately engineered to hook developing brains. K.G.M.'s case is the first of more than 2,400 similar lawsuits to reach a jury.


Prosecutors went undercover as kids

New Mexico’s attorney general took an unusual approach. Instead of waiting for victims to come forward, his team created fake accounts posing as 13- and 14-year-olds on Facebook and Instagram. Then they waited.

Adult accounts flooded their inboxes with sexual solicitations. No encouragement. No baiting. Just children’s profiles, with Meta’s recommendation tools doing the rest. The state argues this proves Meta’s design choices actively connect predators with minors.


The internal report Meta hoped would stay buried

Court orders unsealed documents that companies typically spend millions trying to keep confidential. In this case, those papers contained a number that made the courtroom go quiet.

Internal documents shown in court estimate up to roughly 500,000 daily incidents under a broad measurement approach. Meta publicly disputes the figure, saying it rests on wide criteria.

Court filings show staff asked whether parents could disable certain AI chat features for minors, and that the question was escalated as a so-called Mark-level decision, meaning it required senior leadership approval.

Internal notes also warned these AI characters could be used in sexually explicit exchanges with minors. Meta has said it is reviewing safety controls for AI features.


An addiction expert calls out the slot machine tricks

Dr. Anna Lembke has spent her career studying what happens to brains hooked on reward loops. When she took the stand, she explained how infinite scroll, autoplay, and unpredictable notifications tap into the same neural pathways as gambling.

But her most damaging testimony involved language. Internal Meta documents showed employees deliberately used the phrase “problematic internet use” instead of the word “addiction.” Why? Because admitting addiction meant admitting liability.


Meta’s defense emphasizes prior hardship

Meta’s attorneys told jurors that the plaintiff experienced domestic violence and early mental health struggles and argued that those long-standing issues, not Instagram alone, explain much of her harm.

The defense read aloud from her medical records. Domestic violence in her home. Verbal abuse from her mother. Therapy appointments starting at age three.

The question to jurors was simple: if you took Instagram away and everything else stayed the same, would her life be completely different? The argument doesn’t deny that social media can harm teens. It asks the jury to decide who bears responsibility when harm and hardship already exist.


One hundred thousand children. Every single day.

Here’s another number the New Mexico attorney general wants jurors to remember. Meta’s own internal estimates show roughly 100,000 minors experience online sexual harassment daily across its platforms.

The state’s argument isn’t that Meta fails to catch every bad actor. It’s that Meta’s architecture makes those bad actors easy to find. Algorithms suggest connections. Design features encourage interaction.


The digital babysitting strategy that backfired

YouTube faces its own allegations in the Los Angeles trial. Plaintiffs argue the platform knew exactly what it was doing when it steered young children away from YouTube Kids and toward the main platform. Ad rates there are higher. Much higher.

Mark Lanier, the plaintiffs’ attorney, accused YouTube executives of exploiting exhausted parents looking for thirty minutes of quiet. The term digital babysitting service landed hard in the courtroom. Google’s spokesperson maintains the allegations are simply not true and that child safety has always been central to YouTube’s mission.


The legal shield that doesn’t cover this

Section 230 has long sheltered platforms from liability for third-party content. But these trials sidestep that shield: neither case focuses on third-party posts.

New Mexico sued under its consumer protection law, arguing that Meta misled the public about safety features. The California plaintiffs allege negligent product design, not harmful content.


Employees warned them. Executives chose growth.

Whistleblowers such as Frances Haugen and former employees like Arturo Béjar have testified, or been cited in reporting, that researchers flagged harms and that growth metrics often took priority. The trial will examine how executives responded to those internal warnings.

The legal question before jurors isn’t whether Meta knew. It’s what they chose to do with that knowledge.


Other countries aren’t waiting on Congress

Australia banned social media for under-16s in December 2025. France’s National Assembly passed an under-15 ban in January by a vote of 130 to 21. Spain announced similar restrictions this month. At least fifteen European governments are now considering their own age limits.

The United States? The last major federal child online safety law was passed in 1998, when most Americans still used dial-up internet. The Kids Online Safety Act cleared the Senate 91-3 in mid-2024, then stalled in the House. It was reintroduced last May. It has not received a floor vote.


Teen accounts arrived late. Attorneys say too late.

Instagram finally introduced teen accounts with built-in safeguards like content filtering, stranger limits, and time management tools. Meta presented these as voluntary improvements driven by concern for young users.

Eighteen state attorneys general see it differently. In a Monday court filing, they called the changes “a public relations measure offering minimal real protections.”

The states point to internal data indicating the company worried that some safety defaults would reduce teen engagement, and they cite specific projected declines in usage that influenced its decisions.


Eighteen states want a judge to redesign Instagram

The same coalition of attorneys general isn’t just asking for money. They’re asking a federal judge to force Meta to make specific, permanent changes to its platforms.

Their proposal includes removing all known accounts belonging to users under 13, deleting the data Meta collected from those minors, disabling infinite scroll and autoplay for teen accounts, and prohibiting young users from accessing the apps during school hours and overnight.



What changes when a jury decides

We have watched tech CEOs apologize in congressional hearings. We have watched them promise to do better, hire more moderators, and adjust their algorithms. We have not, until this week, watched twelve ordinary citizens decide whether those promises were broken in ways the law cares about.

The Los Angeles jury will determine if K.G.M.’s addiction was foreseeable and preventable. The New Mexico jury will decide if Meta’s design choices created an environment where children are routinely endangered. Neither verdict will fix everything overnight.



This slideshow was made with AI assistance and human editing.
