Character.AI bans under-18 users as lawmakers target kid safety

Character.AI is tightening its rules after months of scrutiny over how its chatbot companions interact with kids. The startup, once celebrated for its creative AI characters, said it will phase out open-ended chat access for users under 18, with the restriction taking full effect on November 25, 2025. The decision follows growing political pressure and a wave of lawsuits from concerned parents.

The move marks one of the strongest actions yet by an AI company to address child safety concerns. While the platform will still allow some limited activities for younger users, like creating short AI videos and stories, full chat features will soon be restricted to adults only.

AI chats face growing scrutiny

Regulators have been watching AI chat platforms more closely after parents accused them of enabling emotional harm to children.

The Federal Trade Commission has sent information requests to several major AI firms, including Character Technologies, OpenAI, Alphabet, Meta, Snap, and xAI, asking how their chatbot experiences may affect young users.

Some parents claim their children became isolated or emotionally dependent on AI companions. Lawsuits filed since 2024 argue that chatbot interactions blurred boundaries and may have contributed to mental health struggles in minors.

These claims have helped spark broader debate about AI’s impact on human relationships.

Teen popularity drives concern

Character.AI quickly became a favorite among teens who enjoyed chatting with fantasy characters, celebrities, and fictional personas. A Common Sense Media national survey found that about 72 percent of teenagers have used an AI companion at least once, and about half are regular users.

But that same popularity brought new challenges. Experts and parents worried that minors were spending too much time in digital conversations that felt too real. Some warned that kids might mistake AI interactions for genuine friendships, prompting calls for tighter age restrictions and better guidance for young users.

New rules start this month

Character.AI’s new policy begins rolling out in stages. The company said it will first limit the amount of time minors can spend chatting, starting with two hours per day. Over the next few weeks, those limits will shrink until full chat access for under-18 users is turned off completely by November 25.

Until then, minors will still be able to access a simplified version of the app with different AI models and reduced capabilities. These restrictions are meant to help young users gradually adjust rather than abruptly lose all access to their favorite characters overnight.

Age checks get smarter

Character.AI says it will combine an in-house age assurance model with third-party identity verification tools such as Persona. Where needed, it will fall back to checks like ID or biometric verification to reduce age fraud. Critics warn that such checks may raise privacy and security issues.
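
To make that layered approach concrete, here is a minimal sketch of how a tiered age check might be structured, written in Python purely for illustration. The function names, confidence threshold, and placeholder results are all assumptions of this sketch; Character.AI and Persona have not published how their systems actually make these decisions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AgeSignal:
    estimated_age: float  # output of a hypothetical in-house age model
    confidence: float     # model certainty, from 0.0 to 1.0


def in_house_estimate(user_id: str) -> AgeSignal:
    # Placeholder: a real system would score account and behavioral signals.
    return AgeSignal(estimated_age=16.0, confidence=0.55)


def third_party_verification(user_id: str) -> Optional[int]:
    # Placeholder for a vendor check (the article names Persona); returns a
    # verified age, or None if the user has not completed verification.
    return None


def document_or_biometric_check(user_id: str) -> bool:
    # Last-resort fallback: government ID or biometric age estimation.
    return False


def is_adult(user_id: str, threshold: float = 0.9) -> bool:
    """Tiered decision: cheap model first, stronger checks only when needed."""
    signal = in_house_estimate(user_id)
    if signal.confidence >= threshold:
        return signal.estimated_age >= 18

    verified_age = third_party_verification(user_id)
    if verified_age is not None:
        return verified_age >= 18

    return document_or_biometric_check(user_id)


if __name__ == "__main__":
    print("adult access:", is_adult("demo-user"))

The layering in this sketch reflects the trade-off the company describes: a low-friction model handles most accounts, while costlier identity checks are reserved for cases the model cannot resolve confidently.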

The added verification step should help the platform comply with upcoming child protection laws. It also gives Character.AI a stronger foundation to defend itself against legal claims tied to user age or to minors gaining access to mature chatbots.

Lawmakers push new AI limits

Pressure from lawmakers has been mounting for months. Senators have introduced proposals such as the GUARD Act that would limit chatbots for minors and require stronger protections, and California recently enacted new laws creating companion chatbot safeguards and requirements for transparency and youth protection.

These measures reflect a larger shift in Washington toward stricter digital safety rules. Lawmakers argue that companies should be held responsible for how their algorithms interact with children, whether through entertainment apps, AI assistants, or educational tools.

Parents raise emotional concerns

Parents have played a major role in forcing this policy change. Some claim their children formed strong emotional ties to AI chatbots, sometimes using them as an outlet for loneliness or stress. These experiences, while not always harmful, raised alarms about how such intense digital relationships could affect emotional development.

Safety advocates warn that without clear safeguards, chatbots could unintentionally manipulate feelings or reinforce unhealthy behaviors. These conversations have helped push the AI industry toward stronger age checks, content filters, and data transparency for young users.

CEO says change was inevitable

Character.AI CEO Karandeep Anand said the shift was always part of the company’s long-term plan. In an interview, he explained that the industry is still learning how long-term chats can influence people over time. He said that because the technology evolves so fast, there are still many unknowns about its long-term effects.

Anand noted that while chat-based experiences are central to Character.AI, future versions of the platform may focus more on creative tools, like storytelling and short videos, to keep users engaged in safer, more structured ways.

Google connection adds to the spotlight

Character.AI’s rise has drawn attention not just from regulators but also from Silicon Valley giants. Last year, Google licensed the company’s large language model technology and hired several of its senior engineers. That partnership valued Character.AI at around $2.5 billion, though the startup continues to operate independently.

This connection to Google has only increased public interest in how Character.AI handles data and safety. Observers say that as big tech gets involved, public expectations for responsible AI use will only get higher.

Experts want design changes

Advocates like Tech Justice Law Project’s Meetali Jain say the ban is progress but not enough. She argues that the real problem lies in the design of chatbots themselves, which can encourage emotional dependency among both minors and adults. Jain says lasting change will require redesigning how these AI systems interact with users.

She adds that laws and consumer awareness will both play a role in keeping companies accountable. Public pressure, not just regulation, could push tech firms to think more carefully about the long-term psychological effects of their products.

Chatbots may evolve in the future

As AI tools mature, chat-based services might move toward more structured experiences instead of free-flowing conversations. Companies like Character.AI are exploring creative modes that focus on collaboration, storytelling, and interactive entertainment rather than unfiltered personal chats.

That shift could make AI a safer and more constructive part of digital life. It may also help rebuild public trust after a year of legal battles and growing concern about AI’s emotional influence on users of all ages.

A turning point for AI

Character.AI’s under-18 ban could signal a new era of responsibility in the AI world. After months of criticism, companies appear more willing to acknowledge risks and adjust how their products operate. For parents, it is a hopeful sign that tech firms are finally listening.

As lawmakers and regulators continue shaping AI policy, platforms like Character.AI may become test cases for balancing innovation with safety. The conversation is just beginning, but this decision marks a major step in defining what responsible AI use should look like for the next generation.
