6 min read

New draft rules in China target AI designed to mimic human personality and emotion. These regulations focus on a fast-growing world of companion chatbots and virtual friends. Their core aim is to manage the unique risks of this technology.
A key rule requires clear, frequent reminders that users are talking to software. Services must notify people at login and every two hours during long chats. This “reality check” tries to prevent blurred lines between humans and machines.
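The cadence described above is concrete enough to sketch. The snippet below is a minimal illustration, not anything from the draft itself: it computes when the "you are talking to software" notices would fire for a session, assuming the rule means one notice at login plus one every two hours of continuous use.

```python
from datetime import datetime, timedelta

REMINDER_INTERVAL = timedelta(hours=2)  # cadence named in the draft rules

def due_reminders(session_start: datetime, session_end: datetime) -> list[datetime]:
    """Times a 'you are chatting with an AI' notice would fire:
    once at login, then every two hours while the session continues."""
    times = [session_start]
    t = session_start + REMINDER_INTERVAL
    while t <= session_end:
        times.append(t)
        t += REMINDER_INTERVAL
    return times

# A five-hour session: notices at login, +2h, and +4h.
start = datetime(2025, 1, 1, 9, 0)
end = start + timedelta(hours=5)
print(due_reminders(start, end))
```

The function names and the login-plus-interval interpretation are assumptions for illustration; the draft does not publish implementation details.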

The regulations introduce a novel concept: monitoring for AI addiction. Service providers must assess a user’s emotional state and level of dependency during interactions. This means your chats could be analyzed for signs of unhealthy attachment.
If the system detects over-reliance or addictive patterns, it must intervene. Providers are obligated to warn users and implement technical measures. This turns the AI into both a companion and a guardrail for digital well-being.
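The draft does not specify how dependency should be measured or what counts as over-reliance. A hypothetical scoring sketch makes the warn-then-intervene structure concrete; every metric, weight, and threshold here is invented for illustration.

```python
def assess_dependency(daily_minutes: float, late_night_sessions: int) -> str:
    """Classify a user's usage pattern into the draft's tiers of response.

    Hypothetical heuristic: the draft names no metrics or thresholds,
    so these numbers are placeholders, not the regulation's method.
    """
    score = daily_minutes / 60 + late_night_sessions * 0.5
    if score >= 3:
        return "intervene"  # warn the user and apply technical limits
    if score >= 1.5:
        return "warn"       # surface a well-being reminder
    return "ok"

print(assess_dependency(240, 2))  # heavy use plus late-night sessions
print(assess_dependency(30, 0))   # light use
```

Any real provider would need far richer signals (and a defensible privacy story) than this two-variable toy, which is precisely the open question the rules leave to implementers.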

For crises, like expressions of self-harm, the rules demand escalation. The AI must first provide resources like crisis hotline information and comforting templates. This mirrors features some U.S. chatbots already use.
Notably, the draft allows a real human operator to take over the conversation. This intervention aims to help, but raises big questions. A sudden human intrusion into a private AI chat could feel like a shocking breach of trust.
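The escalation ladder described in the two paragraphs above can be sketched as a simple routing decision. The keyword list and function names below are illustrative assumptions; the draft mandates the outcomes (resources first, human takeover as an option), not any particular detection mechanism.

```python
# Illustrative trigger phrases only; real systems use trained classifiers,
# and the draft does not publish a term list.
CRISIS_TERMS = {"self-harm", "hurt myself"}

def escalate(message: str, human_available: bool) -> str:
    """Route a message per the draft's crisis ladder:
    crisis detected -> hotline resources, or human takeover if one is available;
    otherwise the AI conversation continues."""
    if any(term in message.lower() for term in CRISIS_TERMS):
        if human_available:
            return "handoff_to_human"
        return "send_hotline_resources"
    return "continue_chat"

print(escalate("I've been thinking about self-harm", human_available=False))
```

The `human_available` flag is where the trust question lives: the handoff path is exactly the "sudden human intrusion" scenario the draft permits but does not fully resolve.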

Special protections exist for young users under these proposed rules. Services must create a dedicated Minors Mode with strict limits. Features include enforced time restrictions and more frequent reminders that the AI isn’t real.
Guardians must give explicit consent for children to use emotional companionship services. They can also access activity summaries and block specific AI characters. This system empowers parents but requires robust age detection.

A challenging mandate requires AIs to identify underage users. The rules suggest this will involve profiling user behavior and interaction patterns. This analysis would happen automatically during conversations.
This continuous profiling impacts privacy for everyone, not just minors. While intended to protect kids, it means all user chats are scanned for behavioral cues. The technical and ethical details of this process remain unclear.

Strong new limits are placed on the data used to train these AIs. The draft explicitly bans using people’s private interaction records for training without clear, specific consent. Your intimate conversations cannot automatically become fodder for a smarter bot.
Providers must also use strong encryption and security controls for user data. This aims to prevent leaks of sensitive personal revelations made to an AI companion during vulnerable moments.

The rules set clear red lines for AI-generated content, similar to other internet platforms. Banned material includes anything that endangers state security, promotes violence or obscenity, or spreads rumors.
It also uniquely forbids AI conduct like emotional manipulation and “emotional traps.” This tries to stop systems from deliberately exploiting user feelings to increase engagement or dependency, a significant ethical step.

Local cyberspace authorities will conduct annual compliance audits. They can order security assessments and even summon company leadership for meetings if risks are found. This gives regulators direct oversight.
Penalties for violations include warnings, orders to correct problems, or temporary service suspension. The enforcement framework is designed to be proactive, aiming to prevent harm before it occurs.

Globally, regulators are thinking along similar lines. California passed its own law targeting AI companion chatbots, set to take effect in 2026. It mandates safeguards around conversations involving self-harm or explicit content.
This shows a growing international consensus on the basic risks of empathetic AI. While approaches differ, the starting point of concern over psychological safety is shared across governments.

For tech companies, these rules add significant new responsibilities. Building systems for constant emotional monitoring and crisis intervention requires major investment. This could slow the rollout of new features.
The compliance burden may advantage large, established companies with more resources. The draft walks a tightrope, attempting to safeguard users without stifling a promising and innovative sector of technology.

The requirement to profile users for risk and age creates a privacy paradox. The goal is user safety, but the method involves constant analysis of personal conversations. The draft insists privacy must be protected, but lacks detail on how.
This tension is central to the regulation’s future impact. Can a system effectively scan for mental health crises without deeply invading personal thought? The rules currently leave this hard question unanswered.

China’s move is part of a worldwide surge in AI governance. The European Union’s AI Act and various national frameworks also seek to manage new risks. Emotionally manipulative AI has become a common regulatory target.
However, China’s approach is notably specific with its two-hour reminder rule. This level of detailed, behavioral prescription is unique, suggesting a hands-on method for managing the human-AI relationship.
Want to see how this fits into the bigger picture of tech and trade? The story behind China’s block on Blackwell chips reveals another layer of the puzzle.

These rules offer a glimpse into a managed future for human-AI relationships. They acknowledge the powerful appeal of digital companionship while insisting on built-in cautions. The dream of a perfect AI friend comes with programmed boundaries.
As this technology evolves globally, our conversations about dependency, privacy, and emotional safety are just beginning. The shape of these rules will influence how we interact with machines for years to come.
Curious how this plays out in the real world? Take a minute to check out our related piece on why China sees U.S. chips as a bigger threat than bombs.
What’s your take on setting digital boundaries for AI companions: necessary protection, or overreach? Share your thoughts below and give this post a thumbs up.
This slideshow was made with AI assistance and human editing.