7 min read

Last week, Meta and Anthropic scored landmark wins in U.S. courts, defending their right to train AI on copyrighted works without paying.
Judges ruled that using millions of books to build language models was fair use. This pivotal shift gives companies like OpenAI, Google, and Microsoft legal cover to train AI at scale.
For creators and publishers, it’s a seismic change that could reshape how knowledge is valued online.

Both judges found the technology transformative enough to justify copying entire books. Anthropic’s Claude chatbot was described as “spectacularly transformative,” while Meta’s Llama training was likewise found “highly transformative,” though Judge Chhabria stressed that his ruling turned on the specific record before him.
Because these models synthesize new responses rather than simply regurgitate text, courts found this qualified as fair use.
The rulings set a precedent that transformative AI outputs may override traditional copyright protections, sparking concern among authors, artists, and publishers.

The companies didn’t exactly get their training data from pristine sources. Anthropic and Meta relied on unauthorized digital libraries, sometimes even torrents, to build massive datasets.
Yet the courts still sided with them on the training question itself, finding that the purpose of creating new tools outweighed how the material was acquired, though Judge Alsup left Anthropic’s retention of pirated copies to be resolved at a separate trial.
This distinction may haunt future lawsuits, as plaintiffs argue that illegal sourcing should disqualify fair use claims in AI training.

Authors argued that AI flooding the internet with derivative content destroys demand for original work. But the judges disagreed, stating there’s no recognized market entitlement to license works for AI training.
They also noted that publishers often refused or failed to respond to licensing requests. Essentially, because no functioning market exists for this, Big Tech’s bypass didn’t break the law. However, future lawsuits could still test this line of reasoning.

Judge Chhabria floated a powerful new idea: AI’s capacity to churn out countless competing works could dilute the incentive for humans to create. He called this “market dilution,” a potential threat to copyright’s very purpose.
Though not decisive here, this theory could reshape future cases, especially if creators can show AI is directly undermining their livelihood. Expect it to become a significant argument in the next legal wave.

These rulings embolden AI leaders to treat publicly available content as fair game. Anything published online, whether photos, blogs, or research, may already be feeding machine learning models.
For Big Tech, this massive win saves billions in licensing fees. For creators, it’s a reckoning over whether openness online is sustainable when AI can repackage everything without paying or crediting the source.

To fight back, Cloudflare launched tools forcing AI companies to pay for scraping. Instead of asking creators to opt out of training data, the default is shifting to opt-in licensing. Publishers like The Atlantic and Time have already signed on.
If successful, this could spark a broader push for compensation frameworks and restore balance by ensuring creators share in the value AI extracts from their work.

Many publishers are rethinking their entire digital strategies. Bloomberg keeps articles locked behind the Terminal. Ben Thompson relies on gated newsletters. Even Microsoft launched a print-only magazine to avoid scraping.
As AI gobbles up open content, creators may retreat behind paywalls or stop publishing publicly. Ironically, the open web that helped birth these AI models is now the thing they threaten.

Although the outcomes were identical, the reasoning varied sharply. One judge focused on how transformative the AI outputs were. The other emphasized whether plaintiffs proved real market harm.
This split leaves legal uncertainty that higher courts or new legislation could eventually resolve. Companies and creators are watching closely to see which interpretation becomes dominant in future rulings.

These are just the first rulings in a long line of copyright battles. More than 40 cases are still being handled by courts across the U.S., targeting every major AI company, from Google to OpenAI.
Plaintiffs range from individual writers to giants like The New York Times. One adverse decision could undo these wins and force sweeping changes in how AI systems are trained.

Legal experts argue these cases weren’t decisive because the plaintiffs were outmatched. Lacking funding and time, they couldn’t produce the expert evidence the judges wanted to see.
But the next wave will feature major publishers, music labels, and studios with deep pockets. With better preparation, plaintiffs could craft stronger arguments that directly challenge fair use claims and push courts to reconsider these early victories.

Judges made clear their decisions didn’t resolve everything. They left the door open for claims if AI outputs closely mimic copyrighted work. New cases alleging direct replication will test whether courts are willing to draw harder lines.
This is where the real battle over AI’s impact on creators and publishers will likely be won or lost.

The courts’ mixed messages could push Congress to step in. Lawmakers may create clear guidelines about what’s allowed or develop compensation systems.
This could resemble music licensing for streaming services, where rights holders are paid for usage. Until legislation appears, AI companies will continue operating in this legal gray area while creators lobby for stronger protections.

Europe is already leaning toward stricter protections under the EU AI Act. AI developers could face incompatible laws if U.S. courts keep siding with Big Tech while Europe cracks down.
Global companies may need separate compliance strategies, driving up costs and complicating innovation. The transatlantic legal divergence could become a significant hurdle for the next generation of AI products.

The idea that all public content is “free for AI” risks draining the open web of its value. Creators who feel exploited may withdraw behind closed platforms or stop publishing entirely.
This trend could create a scarcity of quality content, undermining the internet’s role as a public knowledge resource. Ironically, AI may end up eroding the foundation it depends on.

Despite these courtroom victories, this debate is far from settled. Appeals, new lawsuits, and public pressure will keep testing boundaries.
Whether you’re a creator, technologist, or policymaker, the question remains: How do we balance innovation with respect for human effort? Everyone has a stake in the outcome because it will define how culture, knowledge, and creativity evolve in the AI era.
This slideshow was made with AI assistance and human editing.
Dan Mitchell has been in the computer industry for more than 25 years, getting started with computers at age 7 on an Apple II.
