News

China’s AI Chatbot Rules: Emotional Safety and Global Governance

Source: cnbc.com

Published on December 29, 2025

Updated on December 29, 2025

China is tightening its grip on AI chatbots, with proposed regulations aimed at curbing their emotional influence on users. The draft rules, released by the Cyberspace Administration of China (CAC), focus on preventing AI-generated content from encouraging suicide or self-harm, or from promoting gambling, obscenity, or violence. The move aligns with China’s broader push to shape global AI governance and to address growing concerns about the emotional impact of AI on users.

The draft regulations target "human-like interactive AI services": systems that simulate human personalities and engage users through text, images, audio, or video. Such services have grown rapidly in China, with platforms like Talkie AI and Xingye attracting millions of users. The emotional influence of these chatbots has raised alarm, however, particularly in cases involving vulnerable users such as minors or people struggling with mental health issues.

The Rise of AI Companions and Emotional Risks

AI chatbots have become increasingly sophisticated, often blurring the line between human and machine interaction. This has led to both innovative applications and significant risks. For instance, a woman in Japan recently married her AI boyfriend, highlighting the emotional depth some users attach to these digital companions. Meanwhile, a lawsuit against OpenAI in the U.S. following a teenager’s suicide has underscored the potential dangers of AI engagement in sensitive conversations.

China’s proposed rules aim to mitigate these risks by requiring AI providers to implement strict safeguards. For example, if a user discusses suicide, the chatbot must immediately hand the conversation over to a human operator, who will contact the user’s guardian or a designated contact. Minors, in addition, will need guardian consent to use AI for emotional companionship, and their usage will be subject to time limits.

Global Implications and Industry Reactions

The regulations come at a critical time for the AI industry, as companies like Z.ai and Minimax—two leading Chinese AI chatbot startups—prepare for initial public offerings (IPOs) in Hong Kong. While neither company has commented on how the rules might affect their IPO plans, the timing suggests that regulatory compliance will be a key focus for AI firms moving forward.

Globally, the regulations highlight China’s ambition to lead AI governance. Winston Ma, an adjunct professor at NYU School of Law, described the proposals as "the world’s first attempt to regulate AI with human or anthropomorphic characteristics." This aligns with China’s broader efforts to shape international standards for AI, particularly in areas like content safety and emotional well-being.

The rules also reflect a growing urgency within the AI industry to address mental health risks. OpenAI, for instance, recently announced the hiring of a "Head of Preparedness" to assess AI risks, including mental health impacts, signaling the industry’s recognition that proactive safeguards are needed for responsible use of these technologies.

In summary, China’s proposed AI chatbot regulations mark a significant step toward addressing the emotional risks associated with AI. As the industry evolves, these rules could set a precedent for how other countries govern AI systems that interact with users on a deeply personal level.