7 min read

George R. R. Martin has been open about his surprise and frustration after learning ChatGPT could generate detailed story outlines set in the world he created.
When prompted, ChatGPT generated an alternative sequel outline to A Clash of Kings titled ‘A Dance With Shadows,’ complete with new lore, characters, and a dragon-magic element; plaintiffs submitted the outline as an exhibit in the case.
Martin and other plaintiffs argue that the similarity between the ChatGPT output and their books supports the allegation that their works were used during model training without authorization.

The outline amounted to an entirely new installment in the A Song of Ice and Fire universe, introducing a new Targaryen relative, an ancient dragon magic system, and a rogue sect of the Children of the Forest.
Both plaintiffs’ exhibits and the court’s opinion cited it as an example of an output that a reasonable jury could consider substantially similar to protected elements of Martin’s works.

What began as a surprising AI experiment quickly escalated. Plaintiffs argued that such outputs support an inference that copyrighted works were used to train the model without authorization, while defendants counter that model training involves lawful uses and may be protected by the fair use doctrine.
Judge Sidney Stein agreed that a reasonable jury could find the AI’s outputs substantially similar to Martin’s work. The ruling didn’t end the case, but it cleared the way for a full trial, marking one of the most consequential copyright battles of the AI era.

Martin’s case isn’t isolated. It sits alongside a growing number of lawsuits filed by authors who argue OpenAI and Microsoft used copyrighted books without permission.
Writers such as Sarah Silverman, Ta-Nehisi Coates, and Michael Chabon, among others, say that AI systems generate summaries and spin-offs that closely mirror their works.
In 2023, this movement accelerated when creators began noticing that chatbots could produce detailed plot outlines, character arcs, and prose that appeared rooted in their copyrighted books.

Lawyers pursuing the case now have three separate avenues to win. The first argues that using copyrighted books during training itself constitutes infringement.
The second claim is that AI companies scraped pirated shadow libraries, which would add a new layer of liability.
The third centers on the outputs themselves, arguing that the content generated by ChatGPT is too similar to the original works. Any one of these arguments could be enough to secure damages that reach into the hundreds of millions of dollars.

A key moment in the broader debate emerged when courts began focusing less on the intentions behind AI training and more on whether the technology can produce outputs that resemble copyrighted material too closely.
This shift reflects growing legal attention on how generative models use and reproduce creative works. It highlights the uncertainty surrounding how copyright rules will be applied to rapidly advancing AI systems.

Fair use traditionally permits the use of small portions of copyrighted works for commentary, research, or educational purposes. But applying fair use to AI training is far more complicated.
AI systems ingest enormous datasets and compress patterns into their neural layers, making it unclear how much of a book is “copied.”
Courts now face the challenge of determining whether this type of ingestion constitutes transformation or unauthorized replication, and Martin’s case may produce a landmark ruling that shapes future AI regulation.

One of the more explosive claims in the suit is that OpenAI and other firms used pirated book repositories, such as LibGen and Bibliotik, to build their training datasets.
Internal conversations obtained in discovery reportedly include messages discussing whether to delete problematic datasets.
If proven true, this could shift the case from a technical copyright dispute to a question of intentional infringement, dramatically increasing potential penalties under federal law.

Recent high-profile copyright disputes involving AI companies have heightened expectations around how courts may handle cases brought by authors like George R. R. Martin.
These legal battles signal that regulators and judges are increasingly attentive to whether AI developers properly license creative works used in training datasets.
As more cases progress, many observers anticipate that financial penalties and licensing requirements could become a significant part of future rulings.

Plaintiffs have advanced several legal theories and could prevail if they prove liability on any one of them at trial, but each theory presents different legal standards and defenses that must be resolved through litigation.
Proving that pirated works were knowingly ingested, or that the outputs are substantially similar to the originals, would each independently support a finding of liability. That flexibility makes the case one of the most serious legal threats OpenAI has faced so far.

The ChatGPT outline has been cited in the litigation and by commentators as an emblematic example of plaintiffs’ concern that generative AI can produce derivative content that echoes the structure and themes of copyrighted works.
The example was so striking that it became central evidence in the case, providing the judge with a concrete way to examine the similarities rather than relying solely on hypotheticals. It was a rare moment where AI output directly shaped a legal argument.

For readers who have waited over a decade for The Winds of Winter, the fact that an AI could produce a sequel outline has sparked everything from jokes to concern.
Some fans argue the lawsuit is justified, while others see it as a sign of how dramatically storytelling is about to change.
The situation highlights a strange cultural moment where technology intersects with one of the most followed fantasy universes of all time.

As the lawsuit moves toward trial, it is becoming increasingly clear that Martin’s stance against AI will be part of his lasting legacy.
He built one of the most influential fantasy worlds of the last century, and now he is helping define how future technologies treat creative ownership. Whether he wins or loses, this case will echo through publishing, entertainment, and AI regulation for years.
It also reinforces the notion that even unfinished stories can have real legal implications in the age of artificial intelligence.
What do you think about George R. R. Martin taking legal action over ChatGPT-generated sequels set in his world? Share your thoughts in the comments.
Dan Mitchell has been in the computer industry for more than 25 years, getting started with computers at age 7 on an Apple II.