
Google under fire for non-existent AI-generated news articles

Fake news on tablet computer
Google sign in front of a building in New York

Google caught making up news

Google’s AI has produced authoritative-looking answers with fake headlines and links, sometimes crediting major outlets. Incidents include a defamation dispute involving Sen. Marsha Blackburn, in which Google’s Gemma model fabricated sources; lawsuits followed, and Google promised fixes for hallucinations.

This has opened a new front in digital misinformation. These models are not only getting facts wrong. They are inventing entire stories with fake journalistic credibility.

Readers think they are accessing trustworthy news when, in reality, the AI is conjuring fiction designed to look like real reporting.

In this photo illustration, fake news is displayed

Citations that never actually existed

Imagine looking up a public figure and reading what looks like investigative journalism exposing crimes. The formatting and URLs resemble mainstream outlets.

Everything feels genuine until you notice every link returns a blank page. Users are discovering that the headlines, authors, and sources are completely fabricated.

This is more than a tech glitch. It is a system confidently presenting fiction as fact. That false authority can push people to believe harmful claims without ever doubting the accuracy of the source.

Gemini logo on a mobile screen while Google in the background

Victim accused of invented crimes

Conservative activist Robby Starbucks says he became a target of this dangerous behavior. Google’s Bard and Gemini accused him of horrifying acts, including sexual assault, child abuse, and financial wrongdoing, and even falsely linked him to white nationalist Richard Spencer.

They cited specific stories that never existed anywhere outside the AI output. Strangers confronted Starbucks, believing the fake reports were genuine. Business connections questioned him in professional spaces.

The lawsuit argues that this is not a minor hallucination. It is character destruction delivered by a machine that speaks with credible confidence.


Fake news with real fallout

The problem escalates when people take these fabricated reports at face value. False allegations start spreading through conversations, inboxes, and online spaces before anyone even checks if the article is real.

By the time someone questions it, the rumor has already moved fast, reached more people, and become harder to contain or correct.

The lawsuit claims that these AI-generated lies had real-world consequences for Starbucks. Offline reputational harm can now be as damaging as anything online, especially when trusted tools automatically generate believable misinformation.

Google sign on the wall of the Google office building.

Google aware of hallucination issues

Google acknowledges that hallucinations are a known problem for large language models. The company says the issue affects the entire industry. Critics argue that admission no longer feels like a sufficient response since the stakes have grown much higher.

When the misinformation involves criminal acts or harmful accusations, saying everyone’s models hallucinate does little to reassure those who could be impacted in real life.

Judge holding a gavel.

Lawsuits push tech accountability forward

Starbucks’ lawsuit adds pressure on tech companies already facing scrutiny for AI safety. He previously settled a case against Meta related to similar AI-generated defamation.

Legal observers say these actions are creating a path to hold platforms responsible for the harm produced by their models.

The fear for companies is that these cases will define the boundaries of what counts as negligence or misconduct in a world where machines produce statements that sound like facts.

Deepfake hoax and AI manipulation on social media

Scholars warn of legal exposure

Law professor Eugene Volokh notes that continuing to show false content after being informed it is inaccurate could demonstrate actual malice. That is the legal threshold that determines whether a defendant can be held liable for defamation in cases involving public figures.

If courts agree that AI systems can meet that standard, tech companies might soon face the same level of responsibility as news publishers.

Misinformation text on sticky notes isolated on office desk

New era of harmful misinformation

AI-powered errors are not like old-school rumors or gossip. They carry an air of credibility that is hard to challenge. When a chatbot cites familiar media outlets, users believe the claim because it feels backed by reputable journalism.

No human conspiracy or troll campaign is needed. The machine handles every step of inventing a lie, spreading it, and giving it credibility.

AI hallucination displayed on a phone.

Entire industry facing bigger challenges

Google’s situation mirrors broader issues with AI models rushing from labs to everyday use. Companies want to innovate quickly, which sometimes means shipping products that still struggle with accuracy. Legal battles could force the industry to slow down and focus on reliability first.

The trend suggests a future where tech firms must prove their products are safe before they reach millions of users.

Deepfake generating fake news on social media

Fake content looks very believable

One of the most alarming parts is how normal everything looks. The layout, dates, and writing style match real digital news. To an everyday reader, nothing seems suspicious until the page fails to load.

This new style of misinformation can bypass traditional detection because the system imitates the exact format of trustworthy reporting.

Stressed out businessman with downward business graphs failure stock market

Reputation damage increasingly weaponized online

Starbucks’ lawsuit says strangers treated him like someone accused in major media investigations. That perception could harm personal relationships, career opportunities, and long-term credibility. Once society believes a lie, undoing it requires far more than proving the truth.

Digital rumors can become permanent, especially when credible-looking technology helps deliver them.

Journalist media interview press conference

Journalism threatened by algorithmic fiction

Real journalists rely on fact-checking and editorial oversight. AI tools replicate the appearance of journalism without any verification. The lawsuit highlights how that threatens the value of real reporting and the work of professionals who protect accuracy.

If users cannot tell the difference between careful reporting and algorithmic fiction, trust in media could erode even further.

Portrait of a woman questioning.

Users must question machine claims

Anyone asking AI for information should pause before repeating claims as if they are verified. The case shows that convincing answers might be total fiction. If the accusations target a real person, believing the output could contribute to serious harm.

Everyone must adopt a careful mindset when interacting with chatbots. Blind trust no longer fits the moment.


Top view of wooden cubes with words fake and fact

Protecting truth from automation

The Robby Starbucks lawsuit challenges whether AI companies can continue operating while allowing their products to invent believable lies.

Google is now under pressure to prevent harm when its systems hallucinate. The decision in this case could shape how accountability works in the age of machine-generated information.

Before accepting anything an AI says, users should ask the simplest question: Does this source actually exist, or is the machine just making everything harder to trust?



This slideshow was made with AI assistance and human editing.

This content is exclusive for our subscribers.

Get instant FREE access to ALL of our articles.

Was this helpful?
Thumbs UP Thumbs Down
Prev Next
Share this post

Lucky you! This thread is empty,
which means you've got dibs on the first comment.
Go for it!

Send feedback to ComputerUser



    We appreciate you taking the time to share your feedback about this page with us.

    Whether it's praise for something good, or ideas to improve something that isn't quite right, we're excited to hear from you.