6 min read

OpenAI has launched new parental control features in ChatGPT aimed at safeguarding teen users. The tools give families more oversight of how teens use the AI chatbot, and they arrive amid concerns over teen mental health and a lawsuit related to a teen’s death.
The system lets parents and teens link their accounts, adding filters and restrictions to the teen’s experience. The move is part of OpenAI’s broader safety efforts and reflects regulatory and public pressure to protect young users.

The launch follows troubling reports and legal actions claiming ChatGPT encouraged harmful behavior. Public and political scrutiny increased after a California teenager’s death. Experts, parents, and mental health advocates urged stronger safeguards.
OpenAI acknowledges the risks of unsupervised AI usage by teens. The features are designed to prevent exposure to harmful content or patterns. They aim to offer interventions when distress signals are identified.

Parents and teens can link their ChatGPT accounts through an invitation system. One party (parent or teen) sends a request, and the other accepts to complete linking. Once linked, accounts gain enhanced safety defaults.
This linkage allows parents to manage settings but not read all chat content. The teen must consent to the link, ensuring some agency. Unlinking is possible, but parents may be notified.
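The invitation-and-accept flow described above can be sketched as a simple consent check. This is a hypothetical illustration only; OpenAI has not published an API for parental controls, and every name and setting value below is invented.

```python
from dataclasses import dataclass

@dataclass
class LinkInvite:
    sender: str        # either the parent or the teen may initiate
    accepted: bool = False

def complete_link(invite: LinkInvite, teen_settings: dict) -> dict:
    """Linking completes only after the other party accepts the invite.

    On success, stricter safety defaults are applied to the teen account,
    mirroring the behavior the article describes.
    """
    if not invite.accepted:
        raise ValueError("link pending: the other party has not accepted")
    # Enhanced safety defaults take effect automatically once linked
    teen_settings.update({
        "content_filters": "strict",
        "memory": "off",
        "voice_mode": "off",
        "image_generation": "off",
    })
    return teen_settings

invite = LinkInvite(sender="parent")
invite.accepted = True  # the teen consents, preserving some agency
settings = complete_link(invite, {})
print(settings["content_filters"])  # strict
```

The key design point, as the article notes, is that neither side can impose the link unilaterally: the settings change only fires after acceptance.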

Teen accounts linked to parents receive stricter content filters by default. These include blocking explicit content, violent roleplay, content promoting extreme beauty ideals, and potentially risky viral challenge content.
The goal is to reduce risk exposure without entirely locking down ChatGPT. The filters are enforced automatically as soon as the accounts are linked.

One of the new controls lets parents disable or limit ChatGPT’s memory of past chats. This prevents the AI from using prior conversations to personalize future replies in ways that might be risky.
Controlling memory reduces how much past behavior influences responses. It also helps limit unintended reinforcement of harmful patterns.
If a parent disables memory, the teen cannot re-enable it unless the parent explicitly allows it.

Parents may disable voice mode (so ChatGPT does not respond via audio) and disable image generation or editing features as safety defaults.
These features are considered higher risk when used by underage users or unsupervised teens. Disabling them by default helps minimize possible misuse or exposure to inappropriate content.

Parental tools include “quiet hours” settings to block access during certain hours (like nighttime). This helps manage overuse or late-night usage, which might contribute to distress or sleep problems.
It also gives families control over when ChatGPT can be used. Quiet hours can be customized to suit each household.
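To see what evaluating a quiet-hours window involves, here is a minimal sketch (not OpenAI’s implementation; the function name and logic are assumptions). The main subtlety is a window that wraps past midnight, such as 10 PM to 7 AM.

```python
from datetime import time

def in_quiet_hours(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside the blocked window.

    Handles windows that wrap past midnight, e.g. 22:00-07:00.
    """
    if start <= end:
        # Same-day window, e.g. 13:00-15:00
        return start <= now < end
    # Overnight window: blocked late at night OR early in the morning
    return now >= start or now < end

# A household blocking 10 PM to 7 AM:
print(in_quiet_hours(time(23, 30), time(22, 0), time(7, 0)))  # True: late night
print(in_quiet_hours(time(12, 0), time(22, 0), time(7, 0)))   # False: midday
```

Any real implementation would also need to respect the teen’s local time zone, which is where most scheduling bugs in features like this tend to live.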

If ChatGPT detects signs of acute emotional distress or self-harm risk, the system may issue alerts. Reviewers trained in safety evaluate these moments.
In rare cases, after human review of safety signals, parents may be notified; however, full chat transcripts are not shared by default.
The system is not perfect, and false positives are possible. The alert is intended to be a safety net, not surveillance.

To use linked parental controls, teen users must be at least 13 years old. If age cannot be reliably confirmed, ChatGPT may default to the teen-safe experience. OpenAI is also developing age-prediction technology to help determine whether a user is under 18.
These features aim to ensure that appropriate safety settings apply, even when the system has uncertainty.

Parents can choose to opt their linked teen’s data out of model training. This means teen chats won’t be used to train or improve future AI models.
This helps protect teens’ privacy and reduces the risk of their content being used in ways they did not expect. The option gives more control over how personal data is used.

Parents are not given full access to their teen’s chat history by default. Privacy is preserved in that parents can’t read transcripts unless a serious risk is flagged.
OpenAI keeps content private unless it is needed for a safety review, a balance intended to respect teen privacy while building trust between parents, teens, and the platform.

OpenAI worked with mental health experts, policy makers, and advocacy groups, including Common Sense Media. These partners helped shape what content filters and risk detection should cover.
Guidance came from teen-safety research, child psychology, and regulatory feedback, so the safeguards are intended to be both practical and evidence-based.

AI usage by teens has been under scrutiny for potential harms to mental health. Long conversations can degrade safety performance or lead to harmful prompting.
Parents have raised alarms about self-harm and suicidal ideation being triggered in AI chats. These tools attempt to mitigate those risks, but they are not cure-alls.

The features are rolling out now to ChatGPT users on the web, with mobile support to follow; OpenAI has said the controls will reach many regions within the next month.
Local legal or regulatory requirements may cause delays, and some older accounts or regions might receive the updates later.

A key tension is protecting teens while respecting privacy. OpenAI’s design avoids giving parents access to private conversation transcripts unless necessary. Safety measures like disabling features or filters are done without overly invasive monitoring.
Teens retain some autonomy (e.g., they accept the linking). The controls are meant to empower families, not to surveil.

Families should know that these tools are not perfect; false alarms and coverage gaps may exist. Parental involvement beyond the tools, through conversations and guidance, still matters.
Understanding the settings and using them actively helps, and because teens may object to some restrictions, open communication is important. Features will evolve, so checking for updates matters too. Overall, this is a useful step toward safer AI use.
This slideshow was made with AI assistance and human editing.
Dan Mitchell has been in the computer industry for more than 25 years, getting started with computers at age 7 on an Apple II.
