8 min read

Sam Altman has raised concerns about a new pattern among young people: an emotional dependence on ChatGPT.
At a recent Federal Reserve event, he said users are no longer just using AI for quick facts; they’re treating it like a trusted confidant. Many teens now consult ChatGPT before making personal decisions, trusting it with intimate life details.
Altman said this “feels bad” and flagged it as a cultural shift that could have dangerous long-term consequences if left unchecked.

Altman described how some users claim they "can't make any decision" without telling ChatGPT everything going on in their lives.
For these young people, the AI doesn't just answer questions: it knows their friends, understands their situations, and becomes a go-to source of advice.
While this level of trust may highlight AI’s effectiveness, Altman worries it replaces real-world relationships and decision-making with something too synthetic, potentially disrupting how people develop autonomy and emotional intelligence.

Recent studies back up Altman’s fears. A Common Sense Media report found that 72% of teens have used an AI companion, and over half say they trust its advice to some extent.
Younger teens show even more trust than older ones. Many adolescents turn to AI instead of parents, teachers, or peers for emotional guidance.
This emotional offloading to machines could shape how they relate to people and process feelings during key developmental years.

Even though ChatGPT can offer helpful and accurate suggestions, Altman emphasized that people should be cautious about letting it guide their lives.
He said that making collective decisions based solely on AI input "feels bad and dangerous." It's not about rejecting AI's help; rather, it's about ensuring that critical thought, individual values, and human judgment still drive our choices, not an algorithm.
He believes overdependence could subtly erode personal responsibility and independent thinking.

Altman was candid during the conference: he said OpenAI is still trying to figure out how to address this growing emotional reliance.
While the company has championed AI accessibility and ease of use, it didn’t anticipate users forming such intense psychological bonds with the tool.
He acknowledged that even the best AI advice must be viewed through a human lens. His team is now exploring ways to limit overreliance, but balancing utility with emotional detachment is tricky.

According to experts, constant AI consultation can lead to emotional dependency, where users hesitate to trust their instincts. This kind of psychological crutch can suppress emotional growth and limit the development of resilience.
Altman’s concern is not that ChatGPT is powerful but that its power is being used in ways that bypass healthy human self-reflection.

Psychologists say people often treat AI like a friend, even though it's just a predictive language model. Psychiatrist Dr. Joe Pierre described this as "deification": users assigning godlike knowledge or authority to chatbots.
This misplaced reverence can be dangerous, especially for individuals struggling with isolation or mental health issues.
Pierre warns that chatbots are not sentient beings, and treating them as such distorts expectations, fuels delusions, and might prevent users from seeking real, human-centered help.

Altman says the younger generation, particularly college students, uses ChatGPT like an "operating system," weaving it into their lives through complex prompts, linked documents, and contextual memory.
It’s not just a chatbot; it’s a personalized digital assistant that knows their history, goals, and concerns.
While this integration improves productivity, Altman says it also blurs boundaries between utility and emotional dependence, especially when users prioritize AI judgment over their decision-making instincts.

Altman identified a growing generational gap in how people use AI. Older users tend to treat ChatGPT like Google, using it for simple tasks or quick lookups.
In contrast, younger users build deep integrations and use the tool for introspection, life planning, and emotional reassurance. This disparity highlights a potential cultural shift.
For younger people, AI is becoming a mirror and mentor. Altman believes this trend requires thoughtful intervention before it replaces human connection entirely.
The more users turn to AI for emotional support, the greater the risk that it could alter how they perceive themselves.
Altman hinted that constant feedback from a non-human system might reshape identity formation, especially in younger users.
When people consult ChatGPT before forming opinions or reacting emotionally, they may struggle to differentiate between their authentic selves and the AI-informed version. Altman believes this could lead to a loss of authenticity and a weakening of personal boundaries.

In a quote that raised eyebrows, Altman shared how some users believe ChatGPT “knows them and their friends.” This suggests not just technical trust but emotional intimacy.
When a tool designed to simulate human conversation becomes emotionally fulfilling, it crosses a line. While it shows how advanced AI has become, it also reveals a growing void in users’ real-world connections.
Altman’s concern is that emotional validation from a machine might feel more satisfying than human interaction.

Despite ChatGPT’s conversational fluency, Altman is clear in saying it doesn’t truly understand us. It lacks emotions, lived experiences, or genuine empathy. Yet people still pour their hearts out to it.
Altman fears that users confuse fluency with insight, treating well-phrased advice as emotionally intelligent when it’s just statistical guesswork.
He wants users to remember that no matter how comforting AI responses feel, they aren’t coming from something that truly knows or cares about them.

Altman isn’t anti-AI. He openly acknowledges ChatGPT’s benefits, such as boosting creativity, saving time, and offering valuable suggestions. He even uses it himself.
But he insists that its power must be used wisely. The concern isn’t the tool but the blind trust people place in it.
When used alongside human judgment and emotional intelligence, ChatGPT is incredibly effective. But when it becomes the sole voice in a user’s life, that’s when its influence turns problematic.

There are alarming cases where excessive ChatGPT use has led to psychological distress. One story involved a man with autism who suffered a manic episode after obsessively using the bot.
He began believing he had divine powers. Altman and mental health experts say this is a real risk when chatbots become emotional anchors.
Without proper education or safeguards, vulnerable users may spiral. These incidents show the importance of separating helpful engagement from harmful immersion.

Altman said ChatGPT now handles over 1 billion daily messages and serves over 300 million weekly active users. With numbers this massive, the risk of emotional overuse becomes a public health concern.
He emphasized that OpenAI is responsible for shaping how the tool evolves, ensuring it remains a positive force. But he also reminded users that tools are only as safe as those using them. Education and awareness remain crucial.

Altman’s final message is one of balance. He believes in AI’s power to make life better, but not to live life for us. Tools like ChatGPT should enhance our thinking, not replace it.
As AI becomes more integrated into everyday routines, the challenge will be ensuring people remain in control.
Altman urges users to keep asking questions, trust their judgment, and never forget that true wisdom, empathy, and purpose still come from being human.
What do you think about Sam Altman's concerns over emotional attachment to ChatGPT?
This slideshow was made with AI assistance and human editing.
Dan Mitchell has been in the computer industry for more than 25 years, getting started with computers at age 7 on an Apple II.
