China Tightens Grip on Humanoid AI with New Regulatory Framework
Source: kmjournal.net
Published on December 29, 2025
Updated on December 29, 2025

China has unveiled draft regulations targeting humanoid AI systems, establishing the country’s first comprehensive framework for governing advanced AI technologies. The rules, released by the Cyberspace Administration of China (CAC), aim to balance innovation with strict ideological and ethical oversight, reflecting Beijing’s broader strategy of integrating technological progress with state control.
The regulations, titled Provisional Measures for the Operation of AI-Based Humanlike Interaction Services, impose stringent requirements on AI platforms with large user bases. Services with over one million registered users or 100,000 monthly active users must submit detailed evaluation reports to regulators, covering system architecture, data sources, content moderation, and potential risks. This mandate signals increased scrutiny for major AI providers, as China seeks to prevent misuse and ensure alignment with its social and political values.
Regulatory Oversight and Content Control
The draft rules emphasize the promotion of socialist core values in AI development. Pre-training and fine-tuning datasets must adhere to these principles, while developers are instructed to incorporate traditional Chinese culture into AI systems. Content generated by humanoid AI is subject to strict limitations, including prohibitions on political criticism, religious promotion, rumors, and content that threatens national security or social stability.
AI services are also barred from producing or disseminating pornography, gambling, violence, or criminal incitement. These restrictions align with China’s broader internet regulations, reinforcing the government’s commitment to maintaining a tightly controlled digital ecosystem. The measures reflect Beijing’s determination to shape the ethical and ideological boundaries of AI, even as it promotes the sector as a key driver of economic growth.
User Awareness and Psychological Safeguards
Under the proposed rules, companies operating humanoid AI services must clearly notify users that they are interacting with artificial intelligence. This disclosure is required at login and again after two consecutive hours of engagement, using pop-up messages or voice alerts. The requirement aims to prevent users from blurring the line between humans and machines, particularly as AI systems become increasingly lifelike in appearance and behavior.
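The disclosure cadence described above — a notice at login, then again after every two consecutive hours of engagement — can be sketched as a simple timer check. This is a hypothetical illustration only: the draft specifies the required behavior, not an implementation, and every name below is invented.

```python
from datetime import datetime, timedelta

# How often a repeat "you are talking to an AI" notice is due after
# the initial login disclosure (per the draft: two consecutive hours).
DISCLOSURE_INTERVAL = timedelta(hours=2)

class SessionDisclosure:
    """Tracks when an AI-identity notice is due within one user session.

    Hypothetical helper for illustration; not part of any real SDK.
    """

    def __init__(self, session_start: datetime):
        self.session_start = session_start
        self.last_notice: datetime | None = None  # no notice shown yet

    def should_notify(self, now: datetime) -> bool:
        """Return True when a pop-up or voice alert should be shown."""
        if self.last_notice is None:
            return True  # disclosure required immediately at login
        return now - self.last_notice >= DISCLOSURE_INTERVAL

    def record_notice(self, now: datetime) -> None:
        """Mark that the notice was just displayed, resetting the timer."""
        self.last_notice = now
```

In practice a service would also have to handle breaks in engagement, since the rule applies to *consecutive* hours of use; that detail is omitted here for brevity.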
Regulators have expressed concern about the potential psychological and social consequences of prolonged interaction with humanoid AI. To address this, the draft includes protections for minors, mandating tools that allow parents to limit usage time or block specific functions and services. This approach mirrors existing restrictions on online gaming and short-form video apps for younger users, reflecting China’s focus on safeguarding youth in the digital age.
The regulations underscore China’s commitment to merging technological innovation with ideological control. As global competition in AI intensifies, Beijing is asserting influence not only over what AI systems say, but also over how they learn and respond. This stands in stark contrast to many Western approaches to AI, which tend to emphasize value neutrality and freedom of expression.
For companies like XPeng, which is developing humanoid robots such as Iron, the new rules present both challenges and opportunities. While the regulatory environment may impose additional burdens, it also provides a clear framework for operating within China’s governance model. As the AI industry evolves, the balance between innovation and control will likely remain a defining feature of China’s approach to this transformative technology.