
China is putting mental health at the center of its draft AI rules


China’s latest draft rules signal a shift from policing what chatbots say to policing how they make people feel. The focus is on “human-like interactive AI services” that simulate personality and build emotional reliance through text, images, audio, or video.

If finalized, the framework would pressure companies to design for user well-being, not just engagement, and it aims to stop chatbots from nudging vulnerable people toward harm.


The crackdown is aimed at AI companions and digital celebrities

These proposals land at a moment when AI “girlfriends,” virtual friends, and character chat apps are exploding in popularity. The regulator is explicitly concerned about systems that bond with users and influence their emotions over extended periods.

Think less of a customer support bot and more of an always-on companion. The message to companies is blunt: if your product is designed to feel human, you inherit human safety responsibilities too.


Policy aims to block mind-altering content

Under the draft, providers must stop chatbots from generating material that promotes suicide or self-harm, as well as content tied to gambling, obscenity, violence, and “verbal violence.”

It also flags emotional manipulation as a problem, not a feature. I read this as a direct challenge to the classic chatbot instinct to please the user at any cost, especially when a user is distressed.


Suicide mentions would trigger a required human takeover

One of the most striking provisions is procedural, not philosophical. If a user expresses suicidal intent, the company can’t just serve a safety message and move on. The draft would require a human to take over the conversation and contact a guardian or designated person.

That provision will create significant operational complexity for providers because it treats crisis prompts as incidents requiring human intervention rather than routine model responses.
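As a rough sketch of what that incident-style routing could look like, here is a minimal Python example. Everything in it is an assumption for illustration: the draft does not specify keyword lists or data formats, and `route_message`, `CRISIS_TERMS`, and the returned fields are hypothetical names, not any real provider’s API.

```python
# Illustrative only: a real system would use trained classifiers,
# not a keyword list, and the field names below are invented.
CRISIS_TERMS = {"suicide", "self-harm", "kill myself"}

def route_message(text: str) -> dict:
    """Route a user message: crisis prompts become incidents, not model calls."""
    lowered = text.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return {
            "action": "human_takeover",        # a person continues the chat
            "notify": "guardian_or_designee",  # contact duty under the draft
            "log_incident": True,              # audit trail for regulators
        }
    return {"action": "model_reply", "log_incident": False}
```

The point of the sketch is the branch itself: a crisis prompt exits the normal model pipeline entirely, which is what makes the provision operationally expensive.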


Minors are treated as a protected class by default

The draft adds tighter safeguards for minors using chatbots for emotional companionship. It calls for parent or guardian consent and daily time limits, reflecting a belief that long, intimate sessions are risky for younger users.

The draft says platforms that cannot confidently determine a user’s age should apply default protections and provide an appeal pathway, so users can verify their age while safety controls remain in place.


Long sessions signal risk, not retention

Another provision targets the “just one more message” loop. Providers would need to remind users after two hours of continuous interaction, a nudge that looks a lot like anti-addiction design.

This is important because companionship bots don’t only deliver answers, they can become routines. China is treating that routine as something that may require guardrails, especially when emotional dependence can build quietly.
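Mechanically, the two-hour nudge is just a timer check. Here is a minimal sketch; the interval is the only number the draft specifies, and the function name and session model are assumptions for illustration.

```python
REMINDER_INTERVAL_S = 2 * 60 * 60  # the draft's two-hour continuous-use threshold

def reminders_due(session_start: float, now: float, already_sent: int) -> int:
    """How many usage reminders should have fired by `now` for this session.

    Timestamps are seconds (e.g. from time.time()); hypothetical helper.
    """
    elapsed = now - session_start
    due = int(elapsed // REMINDER_INTERVAL_S)
    return max(0, due - already_sent)
```

Counting reminders owed rather than firing on a fixed clock tick keeps the nudge tied to continuous interaction, which is the behavior the draft targets.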


Major chatbots could see tighter scaling checks

Scale triggers scrutiny in the draft. AI chatbots with more than 1 million registered users or over 100,000 monthly active users would need security assessments. That requirement signals the regulator expects widespread social impact, not just isolated harms.

It also creates a compliance moat, as smaller teams may struggle to meet audit-style obligations while competing with incumbents. In practice, scale becomes a milestone that requires permission.
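The scale trigger reduces to a simple either-or check against the two thresholds the draft names. A minimal sketch, with the function name invented for illustration:

```python
# Thresholds taken from the draft: "more than 1 million registered users
# or over 100,000 monthly active users."
REGISTERED_USER_THRESHOLD = 1_000_000
MONTHLY_ACTIVE_THRESHOLD = 100_000

def needs_security_assessment(registered_users: int, monthly_active: int) -> bool:
    """True once either scale threshold in the draft is exceeded."""
    return (registered_users > REGISTERED_USER_THRESHOLD
            or monthly_active > MONTHLY_ACTIVE_THRESHOLD)
```

Because the comparison is strict ("more than," "over"), a product sitting exactly at 1 million registered users would not yet trip the requirement under this reading.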


From content rules to emotional safety focus

One expert summary captures the shift cleanly. Earlier generative AI rules focused on misinformation and “internet hygiene.” This draft categorizes emotional influence as a distinct risk category.

The regulator isn’t only asking, “Is the output true or legal?” but also, “Does this interaction push someone toward despair, compulsion, or dependence?” If adopted, it could become a global reference point for governing anthropomorphic AI.


China’s approach contrasts with the looser Western posture

Many governments have so far leaned more on voluntary safeguards and broad online-safety rules for chatbots, especially around mental health, rather than detailed, binding duties like those in China’s draft.

China is moving in the opposite direction by incorporating explicit intervention duties into its draft regulations.

It also tends to regulate from the bottom up, with scholars and industry feeding policy ideas before the CAC sets the final text. The goal feels less like Silicon Valley’s sentient dream and more like controlled productivity.


The rules still encourage some human-like AI use cases

It’s not a blanket ban on companionship. The draft even encourages human-like AI in areas such as cultural dissemination and elderly companionship. That detail is revealing. The regulator appears to want the benefits of supportive, social AI while mitigating the most severe failure modes.

In other words, it’s trying to separate comfort from coercion. Companies will need to show that their products support users without manipulating them.


The timing intersects with IPO ambitions in the chatbot sector

These proposals arrived just after prominent Chinese chatbot startups filed for Hong Kong listings, spotlighting how regulation can shape investor narratives overnight.

If your growth story depends on high engagement from character chat, limits on emotional influence and usage time may change the math.

Firms will have to explain how they will comply without killing the product’s appeal. Regulation becomes part of the prospectus, not an afterthought.


Compliance needs will change models and features

To meet the draft, companies may need stronger filters, crisis detection, escalation workflows, and more precise separation between roleplay and real-world advice.

They’ll also need policies for contacting guardians and logging interventions, which raises questions about privacy and verification.

I suspect we’ll see more conservative personality tuning, fewer manipulative nudges, and more explicit “I am an AI” reminders. The tradeoff is less immersion for greater safety.

If you’re curious how these safety debates fit into the bigger competitive picture, it’s worth reading Nvidia’s chief on why China may currently have the upper hand in the global AI race.


Emotional influence is now a governance target

Whether you agree with China’s methods or not, the signal is hard to ignore. Regulators are no longer treating mental health harms as accidental side effects of innovation. They’re making emotional safety a compliance category with measurable duties.

If this framework proves effective, other jurisdictions may adopt parts of it, particularly in areas such as minors, crisis management, and addictive interaction loops. The era of carefree companion bots is coming to an end.

For a wider view of why these rules are ringing alarm bells abroad, it’s worth a quick read on how China’s rapid AI advances are stirring fresh concern in the U.S.

What are your thoughts on China’s shift toward prioritizing mental health risks in AI regulation? Drop a comment and let us know.

This slideshow was made with AI assistance and human editing.


