
Sam Altman claims ChatGPT has ‘mitigated’ mental health risks


ChatGPT’s personality gets a new twist

OpenAI has been experimenting with ChatGPT’s personality since GPT-4 launched. The AI has swung between friendly and restrictive, sometimes too playful, sometimes too careful. Now, Sam Altman says there’s a major shift coming that could change how AI interacts with users.

This change focuses on balancing usefulness and safety. Users who felt limited by previous mental health safeguards might find ChatGPT more flexible. Altman emphasizes that the AI will still avoid serious mental health risks while offering a more human-like experience.


Mental health safeguards get smarter

Previously, OpenAI set strict limits to protect users’ mental health. ChatGPT would suggest breaks during long chats and avoid direct advice on complex personal issues. These rules were meant to prevent the AI from mishandling sensitive situations.

Altman now claims OpenAI has mitigated major mental health risks. This opens the door to loosening restrictions for general users. The goal is a ChatGPT that is safer but also more engaging, responding naturally without unnecessary limits.


Users gain more control

In the coming update, ChatGPT will let users shape its personality. Want a friendlier tone, more emojis, or a sassy vibe? Choose your style. The AI won’t force a style, but it will adapt to user preferences while keeping safety checks in place.

This move gives users the freedom to interact in ways they enjoy. It also addresses complaints that previous versions were too bland or restricted. OpenAI hopes this balance will make ChatGPT both safe and fun for everyone.


Past criticisms shaped changes

OpenAI has faced scrutiny over ChatGPT’s handling of mental health. Critics pointed out that the AI sometimes dismissed serious concerns or failed to detect delusions. These flaws led to stricter mental health safeguards in earlier versions.

By learning from these issues, OpenAI is aiming for a smarter AI. The new safeguards should reduce missteps, letting ChatGPT offer conversational support without overstepping boundaries. This step shows the company is taking responsibility seriously.


ChatGPT learns to step back

The AI now knows when to hold back. If a conversation hints at mental health struggles, ChatGPT is designed to slow down, provide cautious guidance, or encourage a break. This approach avoids giving risky advice while still staying helpful.

For users, this means safer interactions without feeling overly restricted. Altman calls it a “mitigation” of mental health risks, showing the company has thought carefully about striking the right balance between support and safety.


Personality updates teased

Altman hinted that future ChatGPT versions will resemble what users liked in earlier releases. The AI may become more expressive, friendly, or even playful, depending on user choice. These updates aim to bring the AI closer to human-like conversation.

This tweak could make ChatGPT feel more natural and enjoyable. By letting users customize the experience, OpenAI hopes to improve engagement while keeping safeguards intact, avoiding past mistakes.


Restrictions aren’t gone entirely

Even with a relaxed approach, OpenAI keeps some limits in place. Sensitive topics and mental health issues remain guarded. The AI won’t offer advice on complicated personal problems unless it’s safe to do so.

This ensures users still benefit from safety protections. The relaxed rules mainly apply to general conversations, so ChatGPT becomes more interactive and expressive without risking harm in critical situations.


Sora 2 model shows success

OpenAI’s Sora 2 model lets users generate short videos and creative visuals under strict content policies. This feature has boosted OpenAI’s popularity by offering creative, engaging options not widely available among competitors.

The positive response suggests that giving users more freedom can work well. It also sets a precedent for the new personality flexibility, showing that careful innovation can attract attention without compromising safety.


Age-restricted content update gains attention

OpenAI plans to roll out more adult content through verified age-gated accounts. This move is part of treating adult users responsibly and letting them explore content safely.

While controversial, it could make ChatGPT more appealing to adults. The company hopes this controlled freedom will help distinguish it from competitors that are more restrictive in these areas.


Balancing risk and reward

Relaxed rules carry potential risks: allowing expressive behavior and adult content might spark controversy or misuse. At the same time, the looser approach could increase engagement and popularity for ChatGPT.

OpenAI is betting that careful monitoring and advanced safeguards will make the benefits outweigh the risks. The company seems confident that users will enjoy the flexibility and safety.


Learning from past mistakes

The AI’s previous strict limits were lessons in restraint. OpenAI recognized that overly restrictive behavior frustrated many users while still not fully preventing mental health risks.

By iterating based on feedback, OpenAI aims to make ChatGPT smarter and more adaptable. Each update improves the balance between caution and enjoyment, reducing missteps from prior releases.


Customizable chat styles

ChatGPT will let users pick from various personality styles. Whether casual, playful, or professional, the AI adapts accordingly, using emojis and tone to match the preference.

This personalization makes the experience feel more human. Users can interact naturally without being boxed in by previous rigid restrictions. OpenAI hopes this feature keeps people engaged longer.


Age gating ensures safety

The upcoming features rely on age verification. Adults can access broader content, while younger users remain protected. This system maintains a safe environment while still offering flexibility.

Age-gating helps OpenAI manage risk responsibly, calibrating features by maturity level. It shows the company’s commitment to protecting vulnerable users without stifling adult engagement or creative freedom.


AI gets more expressive

Altman emphasizes that ChatGPT can now express itself like a real conversation partner. From jokes to emojis, the AI can match a user’s style for a more enjoyable chat.

This makes interactions feel personal and lively. It also encourages experimentation with tone, letting users safely explore different conversational moods.


OpenAI’s cautious optimism

While excited about updates, OpenAI remains aware of potential pitfalls. The company is taking a careful, step-by-step approach to expand ChatGPT’s capabilities without causing harm.

This measured strategy highlights responsibility and innovation together. Users get a more engaging AI while OpenAI ensures mental health safeguards remain effective.



ChatGPT evolves with users

ChatGPT’s personality updates reflect OpenAI’s responsiveness to user feedback. Users get more control, flexibility, and fun interactions while mental health risks stay mitigated.

The updates mark a balance between safety and enjoyment, demonstrating how AI can adapt responsibly.



This slideshow was made with AI assistance and human editing.
