ChatGPT therapy chats could be used in court, says Sam Altman

AI chats may not be as private as you think

If you’re pouring your heart out to ChatGPT, think twice. OpenAI CEO Sam Altman has warned that AI therapy-style conversations aren’t legally protected like doctor-patient or attorney-client chats.

That means, in a legal case, your sensitive conversations with ChatGPT could be used as evidence. It’s a jarring reminder that what feels like a private exchange with a chatbot is anything but, especially in the eyes of the law.

OpenAI’s CEO admits the privacy gap exists

During a podcast with Theo Von, Sam Altman openly acknowledged that current laws don't protect AI chats the way they protect conversations with a therapist.

He called it “very screwed up” that ChatGPT users could see their emotional and personal disclosures compelled into court as evidence.

It’s rare for a CEO to call out their product’s legal vulnerabilities, but Altman believes this is an urgent issue policymakers must confront.

Legal privilege does not apply to AI chats

When you talk to a real therapist, doctor, or lawyer, your privacy is safeguarded by legal privilege. No such protection exists for AI: there is no therapist-client confidentiality with ChatGPT, even when the interaction feels just as personal.

That legal distinction could be the difference between a safe space and a subpoena waiting to happen. For now, AI lacks the legal framework to promise true confidentiality.

Lawsuits could force your chats into court

Altman was blunt: if a lawsuit happens, OpenAI could be legally compelled to produce ChatGPT conversations even if they’re deeply personal. Current U.S. laws don’t recognize AI chat logs as protected communications.

This risk isn’t hypothetical, either. OpenAI is already facing legal demands to retain and turn over user conversations in active lawsuits, like the ongoing New York Times copyright case.

Deleted chats may still be retrievable

Think deleting your chats makes you safe? Not entirely. OpenAI says deleted conversations are purged within 30 days, but it reserves the right to keep them longer for legal or security purposes.

Courts could order OpenAI to preserve or recover those logs in active legal proceedings. In one case, plaintiffs have requested that all ChatGPT user logs, including deleted ones, be retained. So deletion isn’t a guarantee of disappearance.

More users are treating ChatGPT like therapy

Altman revealed that an increasing number of people, especially young users, are turning to ChatGPT for mental health support, life coaching, and relationship advice.

It’s convenient, nonjudgmental, and always available. But this growing behavior brings ethical questions: Should AI offer emotional guidance without legal protections or clinical expertise? And what happens when personal stories become part of a training dataset?

AI chats can be accessed by OpenAI staff

Unlike end-to-end encrypted services like Signal or WhatsApp, ChatGPT conversations are visible to OpenAI staff under certain conditions. These logs may be reviewed for misuse detection or training improvements.

That means what you type might be seen not only by AI but also by humans. If that makes you uncomfortable, you’re not alone; transparency around who can see what remains murky.

There’s no federal AI privacy law yet

The U.S. currently lacks comprehensive federal laws protecting AI chat privacy. Policies differ state by state, creating a patchwork of unclear guidelines. This regulatory vacuum leaves companies to self-police, and users exposed.

While laws about health data and legal confidentiality exist, none apply to generative AI models yet. Altman’s appeal to lawmakers is a plea for help before things spiral.

Lawmakers agree this needs urgent attention

Altman said the policymakers he’s spoken to understand the seriousness of the issue. He urged immediate legislation to define privacy standards for AI tools.

The stakes are high not just for users seeking therapy-like help, but for the broader trust in AI systems.

If users can’t be sure their conversations are protected, they’ll stop using AI for anything personal. That’s a future neither OpenAI nor society wants.

ChatGPT can regurgitate past user info

AI models have a known issue: sometimes, they echo past user content. This means that what you said in a supposedly private conversation could resurface in another user’s session, especially if training data isn’t entirely scrubbed.

Researchers have documented instances of AI regurgitating identifiable information. It’s one of the creepiest and most pressing technical problems in the AI space today.

AI advice can reflect harmful mental biases

A Stanford study found that AI therapy bots, including ChatGPT, often mishandle sensitive mental health queries.

The research showed that bots reinforce stigma, especially around conditions like schizophrenia or substance use.

Unlike trained therapists, AI lacks empathy, nuance, and the ability to safely guide someone through a crisis. That makes its growing use as a substitute for therapy problematic and sometimes dangerous.

AI models may treat conditions unequally

Stanford’s study highlighted that chatbots treated certain conditions more favorably than others. Depression, for example, was handled with more care than conditions like bipolar disorder or PTSD.

That kind of inconsistency doesn’t align with medical ethics. In therapy, patients expect equal and nonjudgmental treatment.

But AI isn’t bound by clinical codes, at least not yet, and its responses can reflect broader societal biases.

Altman believes in privacy for AI talks

Despite the legal and technical hurdles, Sam Altman supports creating new privacy protections for AI-based conversations.

“We should have the same concept of privacy for your conversations with AI that we do with a therapist,” he said.

That’s a powerful statement and a blueprint for what responsible AI policy could look like. Whether lawmakers agree or act remains to be seen.

Even young children use ChatGPT emotionally

AI isn’t just an adult phenomenon. Millions of children and teens also turn to ChatGPT for companionship and emotional advice. This raises massive concerns around data privacy and emotional safety.

Kids may not understand the permanence or vulnerability of what they share. And without parental controls or stronger laws, their conversations are just as exposed as adults’.

This debate will only grow louder

These legal and ethical questions will only intensify as generative AI becomes more embedded in everyday life. From court cases to congressional hearings, AI privacy will be a defining issue of our time.

Sam Altman’s warning is just the beginning of a bigger reckoning, one that asks: how much of ourselves are we willing to share with AI?

Until laws catch up, stay cautious

Until new regulations arrive, the best advice is simple: treat ChatGPT like a public forum, not a private diary. Share only what you’d be comfortable having read aloud in court or seen by a stranger.

Use ChatGPT for learning, brainstorming, or harmless fun. But when it comes to your deepest emotions or legal situations, your best confidant is still human.

What do you think about OpenAI’s CEO warning you not to share every personal detail with ChatGPT or to treat it as your therapist? Share your thoughts in the comments.

This slideshow was made with AI assistance and human editing.
