
Meta AI leaked chatbot chats to users who weren’t supposed to see them


What exactly did Meta AI leak?

Meta confirmed that its AI assistant mistakenly exposed private chatbot conversations to unrelated users. These included prompts and AI-generated responses, but no personal user data, like names or contact info, was compromised.

The leaked content involved messages typed to Meta AI via Facebook, Messenger, or Instagram. Users who had nothing to do with the original chat saw random conversations in their AI history tab. Meta acknowledged the issue quickly and began a cleanup operation within hours of detection.


When did the leak happen?

The security flaw was discovered and reported in late December 2024, and Meta deployed a fix on January 24, 2025. The issue was not widely reported until mid-July 2025, after media outlets described how the flaw had exposed private prompts and responses.

Meta responded within a day, temporarily turning off the “recent chats” feature for its Meta AI tool while engineers investigated the source of the glitch. Meta’s engineering team stated the incident was isolated and did not persist beyond a short window.


Which platforms were affected?

The glitch affected Meta AI on three of the company’s most popular platforms: Facebook, Messenger, and Instagram. All three host the Meta AI assistant, which users interact with through search bars or chat.

Those who had previously interacted with the assistant on any of these platforms may have seen unfamiliar AI responses or prompts in their chat history. WhatsApp, which belongs to Meta, remained unaffected by the issue, as it uses different systems for chatbot interactions.


Was private user data exposed?

According to Meta, no personally identifiable information, such as names, phone numbers, or private account details, was exposed during the incident.

The leak involved only the chatbot conversations themselves, meaning the text exchanges with the AI, both prompts and generated replies, were visible to other users. While this is still a privacy breach, Meta clarified that the scope was limited to AI interaction logs and did not extend to deeper personal data stored on user accounts or servers.


How did users discover the issue?

Users discovered the problem through the AI history tab, where they noticed chatbot interactions they hadn’t initiated. Many shared screenshots showing odd or out-of-context queries and responses attributed to Meta AI.

These included questions about personal experiences or interests not belonging to the viewing user. The visibility of these logs raised serious concerns about how Meta AI stores and labels conversations, prompting immediate complaints across Reddit and X (formerly Twitter).


Meta’s immediate response to the breach

Upon discovering the issue, Meta turned off the “recent chats” feature within the Meta AI system to prevent further exposure. A Meta spokesperson confirmed that the company’s engineering team was actively investigating the glitch and had contained the problem within hours.

The company issued a brief statement acknowledging the visibility error, apologizing for the confusion, and reassuring users that no sensitive personal information had been included in the mistakenly shared chat logs.


What caused the chatbot history bug?

Meta described the issue as a “bug” in the system’s memory feature that incorrectly linked chat histories. Specifically, the glitch appears to have been caused by a misalignment in how the assistant indexed past conversations.

Instead of linking a history entry to the correct user, the system cross-referenced unrelated chats. Meta has not disclosed whether the bug was introduced in a recent update or had existed undetected for longer, but it has confirmed that the faulty logic has since been corrected.
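Meta has not published the actual code behind the bug, so the following is a purely illustrative sketch of the failure mode described above: history entries indexed by the wrong identifier, so that a lookup can surface another user's conversation. All names (`save_chat`, `get_history`, the session IDs) are hypothetical.

```python
# Purely illustrative sketch -- not Meta's code. It models the reported
# failure mode: history entries keyed without regard to the user, so a
# lookup can return a conversation that belongs to someone else.

chat_store = {}  # maps a key -> list of (prompt, reply) pairs

def save_chat(user_id: str, session_id: str, prompt: str, reply: str) -> None:
    # Buggy behavior: index by the session alone instead of the user.
    # If session IDs are recycled or collide across users, two users'
    # chats end up stored under the same key.
    chat_store.setdefault(session_id, []).append((prompt, reply))

def get_history(user_id: str, session_id: str):
    # The lookup never checks that the entry belongs to user_id, so
    # whoever shares the key sees everything stored under it.
    return chat_store.get(session_id, [])

# Alice and Bob happen to hold the same (recycled) session ID:
save_chat("alice", "sess-42", "Plan my trip to Lisbon", "Sure! ...")
print(get_history("bob", "sess-42"))  # Bob sees Alice's chat
```

The fix Meta describes amounts to making the ownership check part of the lookup, so a mismatched user simply finds nothing.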


Who was most impacted by the error?

The issue primarily impacted users in regions where Meta AI is fully deployed, including the United States. Most reports came from individuals who had recently engaged with Meta AI on Instagram or Messenger.

Since the feature is still being rolled out gradually in some areas, the breach affected a limited but significant number of people. Casual users who rarely interact with Meta AI were less likely to notice or be affected during the window the glitch was live.


Could users delete leaked chats?

After discovering the leak, Meta temporarily disabled access to recent Meta AI chats. Once the issue was resolved, users could view their correct history again. However, Meta has not publicly stated whether users could manually delete the mistakenly displayed conversations before the fix.

It’s assumed that the system removed incorrect data during its internal cleanup. Still, users were not given specific tools or prompts to report or delete mismatched AI chats during the incident.


Why this raises new privacy concerns

This glitch sparked concerns about how AI systems store and categorize user interactions. Even though no direct user data was leaked, the fact that private conversations, even with an AI, were misdirected to other users raised serious questions.

Users began wondering how securely Meta handles prompt storage and whether AI conversations can be considered truly private. This incident could influence ongoing conversations about AI transparency and the need for tighter internal safeguards on chat-based platforms.


How Meta AI chat history works

Meta AI’s chat history lets users revisit past interactions with the assistant across different apps. When functioning correctly, the system ties each chat to the user who initiated it, using backend session identifiers.

These histories are meant to be visible only to the user, but the bug caused misrouting. This highlights a potential vulnerability in how conversation metadata is handled. Meta has not disclosed how long AI chats are retained or whether users can opt out of storing them.
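The intended behavior, as described above, is that each history entry is bound to the user who initiated it via a backend session identifier. A minimal hypothetical sketch of that design, assuming a compound user-plus-session key (the function and variable names are illustrative, not Meta's):

```python
# Hypothetical sketch of the intended design: each history entry is
# bound to the user who created it, and the read path verifies
# ownership before returning anything.
from collections import defaultdict

histories = defaultdict(list)  # (user_id, session_id) -> entries

def save_chat(user_id: str, session_id: str, prompt: str, reply: str) -> None:
    # Compound key: a session ID alone is never enough to read a chat.
    histories[(user_id, session_id)].append((prompt, reply))

def get_history(user_id: str, session_id: str):
    # A mismatched user_id simply finds no entries.
    return histories.get((user_id, session_id), [])

save_chat("alice", "sess-42", "Plan my trip", "Sure!")
assert get_history("bob", "sess-42") == []  # no cross-user leakage
assert get_history("alice", "sess-42") != []
```

Under this scheme, even a recycled or colliding session identifier cannot route one user's history to another, because the user ID is part of the key.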


Regulatory pressure could increase

Following this breach, Meta could face renewed pressure from U.S. and European regulators to tighten AI safety measures. Authorities have been watching closely as tech giants expand AI capabilities, and any lapse, intentional or not, adds weight to calls for formal oversight.

Lawmakers may now push for clearer data retention policies, user transparency mechanisms, and stricter testing of AI memory features. This small leak could play into much larger debates about responsible AI deployment and user protection.


What Meta promises going forward

Meta has pledged to improve internal testing and review processes related to AI features. The company says it’s revisiting how conversations are stored, indexed, and tied to accounts. While it hasn’t released a full audit, Meta is reassuring users that further safeguards are being implemented.

They’ve also committed to notifying users more promptly when AI-related issues occur, including creating clearer communication channels to inform people if something like this happens again.


How to protect your AI chat privacy

Users concerned about privacy when using AI chat tools can take some precautions. Avoid sharing personal or sensitive information in conversations with any AI assistant, including Meta’s. Regularly reviewing and clearing chat histories when possible adds another layer of protection.
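One practical way to follow the first precaution is to strip obvious personal details from a prompt before sending it to any assistant. A minimal sketch, using simple regular expressions for emails and phone numbers (the `redact` helper and its patterns are illustrative, not a Meta feature, and will not catch every kind of personal data):

```python
import re

# Hypothetical helper: mask obvious personal details (email addresses,
# phone numbers) in a prompt before it is sent to an AI assistant.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[email]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[phone]"),
]

def redact(prompt: str) -> str:
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Email me at jane@example.com or call +1 555-123-4567"))
# -> "Email me at [email] or call [phone]"
```

Regex-based redaction is a coarse first line of defense; the safest habit remains simply not typing sensitive details into a chatbot at all.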

While Meta does not yet offer custom privacy controls for Meta AI chats, user demand after this incident may push the company to add options for deleting or managing AI interaction records more easily.



What this means for future AI tools

This incident shows that even large-scale AI systems are not immune to technical missteps. As more companies roll out chatbot features, the way these tools manage memory and user data will come under greater scrutiny.

The Meta AI leak is a reminder that convenience should never come at the cost of user trust. In the future, developers may need to build privacy-first systems by default and involve third-party testing to avoid unexpected exposures.



This slideshow was made with AI assistance and human editing.

