If you’re pouring your heart out to ChatGPT, think twice. OpenAI CEO Sam Altman has warned that AI therapy-style conversations aren’t legally protected like doctor-patient or attorney-client chats.
That means, in a legal case, your sensitive conversations with ChatGPT could be used as evidence. It’s a jarring reminder that what feels like a private exchange with a chatbot is anything but, especially in the eyes of the law.

During a podcast with Theo Von, Sam Altman openly acknowledged that current laws don’t protect AI chats like they do with therapists.
He called it “very screwed up” that ChatGPT users’ emotional and personal disclosures could be compelled as evidence in court.
It’s rare for a CEO to call out their product’s legal vulnerabilities, but Altman believes this is an urgent issue policymakers must confront.

When you talk to a real therapist, doctor, or lawyer, your privacy is safeguarded by legal privileges. Those protections don’t exist for AI. There is no therapist-client confidentiality for ChatGPT, even if the interaction feels just as personal.
That legal distinction could mean the difference between a safe space and a subpoena waiting to happen. For now, AI lacks the legal framework to promise true confidentiality.

Altman was blunt: if a lawsuit happens, OpenAI could be legally compelled to produce ChatGPT conversations even if they’re deeply personal. Current U.S. laws don’t recognize AI chat logs as protected communications.
This risk isn’t hypothetical, either. OpenAI is already facing legal demands to retain and turn over user conversations in active lawsuits, like the ongoing New York Times copyright case.

Think deleting your chats makes you safe? Not entirely. OpenAI says deleted conversations are purged from its systems within 30 days, but it reserves the right to keep them longer for legal or security purposes.
Courts could order OpenAI to preserve or recover those logs in active legal proceedings. In one case, plaintiffs have requested that all ChatGPT user logs, including deleted ones, be retained. So deletion isn’t a guarantee of disappearance.

Altman revealed that an increasing number of people, especially young users, are turning to ChatGPT for mental health support, life coaching, and relationship advice.
It’s convenient, nonjudgmental, and always available. But the trend raises ethical questions: Should AI offer emotional guidance without legal protections or clinical expertise? And what happens when personal stories become part of a training dataset?

Unlike messages on end-to-end encrypted services such as Signal or WhatsApp, ChatGPT conversations can be viewed by OpenAI staff under certain conditions. These logs may be reviewed for misuse detection or training improvements.
That means what you type might be seen not only by AI but also by humans. If that makes you uncomfortable, you’re not alone; transparency around who can see what remains murky.

The U.S. currently lacks comprehensive federal laws protecting AI chat privacy. Policies differ state by state, creating a patchwork of unclear guidelines. This regulatory vacuum leaves companies to self-police, and users exposed.
While laws about health data and legal confidentiality exist, none apply to generative AI models yet. Altman’s appeal to lawmakers is a plea for help before things spiral.

Altman said the policymakers he’s spoken to understand the seriousness of the issue. He urged immediate legislation to define privacy standards for AI tools.
The stakes are high not just for users seeking therapy-like help, but for broader trust in AI systems.
If users can’t be sure their conversations are protected, they may stop using AI for anything personal. That’s a future neither OpenAI nor society wants.

AI models have a known issue: sometimes, they echo past user content. This means that what you said in a supposedly private conversation could resurface in another user’s session, especially if training data isn’t entirely scrubbed.
Researchers have documented instances of AI regurgitating identifiable information. It’s one of the creepiest and most pressing technical problems in the AI space today.

A Stanford study found that AI therapy bots, including ChatGPT, often mishandle sensitive mental health queries.
The research showed that bots reinforce stigma, especially around conditions like schizophrenia or substance use.
Unlike trained therapists, AI lacks empathy, nuance, and the ability to safely guide someone through a crisis. That makes its growing use as a substitute for therapy problematic and sometimes dangerous.

Stanford’s study highlighted that chatbots treated certain conditions more favorably than others. Depression, for example, was handled with more care than conditions like bipolar disorder or PTSD.
That kind of inconsistency doesn’t align with medical ethics. In therapy, patients expect equal and nonjudgmental treatment.
But AI isn’t bound by clinical codes, at least not yet, and its responses can reflect broader societal biases.

Despite the legal and technical hurdles, Sam Altman supports creating new privacy protections for AI-based conversations.
“We should have the same concept of privacy for your conversations with AI that we do with a therapist,” he said.
That’s a powerful statement and a blueprint for what responsible AI policy could look like. Whether lawmakers agree or act remains to be seen.

AI isn’t just an adult phenomenon. Millions of children and teens also turn to ChatGPT for companionship and emotional advice. This raises massive concerns around data privacy and emotional safety.
Kids may not understand the permanence or vulnerability of what they share. And without parental controls or stronger laws, their conversations are just as exposed as adults’.

These legal and ethical questions will only intensify as generative AI becomes more embedded in everyday life. From court cases to congressional hearings, AI privacy will be a defining issue of our time.
Sam Altman’s warning is just the beginning of a bigger reckoning that’s coming, one that asks: how much of ourselves are we willing to share with AI?

Until new regulations arrive, the best advice is simple: treat ChatGPT like a public forum, not a private diary. Share only what you’d be comfortable reading in court or by a stranger.
Use ChatGPT for learning, brainstorming, or harmless fun. But when it comes to your deepest emotions or legal situations, your best confidant is still human.