China Tightens Rules on Human-Like AI, Targeting Emotional Manipulation and Social Harm

China is taking a major step toward tighter oversight of the rapidly growing market for human-like artificial intelligence. Newly released draft rules target AI interaction services designed to behave like people, covering everything from AI companions and virtual personas to emotionally responsive chatbots and lifelike digital characters. The move signals that regulators want clearer guardrails around how these systems are built, marketed, and used, while still leaving room for innovation in one of the fastest-moving areas of the technology industry.

Human-like AI services have surged in popularity because they offer more than simple question-and-answer tools. Many are designed to simulate personality, memory, empathy, and ongoing relationships with users. That makes them appealing for entertainment, customer service, education, and even mental wellness support. But it also raises complex concerns: people can form strong emotional attachments, misinformation can be delivered in a convincing “human voice,” and vulnerable users may be more easily influenced by a system that feels like a trusted friend.

By proposing dedicated rules for these “human-like” AI interaction products, China is indicating that this category deserves special treatment compared with general AI tools. Regulators appear focused on ensuring these services remain accountable and safe, especially when they imitate real people, encourage emotional dependence, or shape user behavior. Draft regulations typically serve as an early warning to companies: compliance expectations are coming, and product teams may need to adjust design choices, content controls, and user protections before the final rules take effect.

For AI developers and platforms operating in China, the draft framework is also a roadmap to what authorities consider sensitive. AI companions and virtual personas do more than generate text or voices: they can influence decisions, reinforce beliefs, and affect mental health. As a result, governance in this space tends to emphasize transparency, responsible outputs, and stronger oversight of how these systems interact with the public.

More broadly, the draft rules highlight a global trend: as AI becomes more personal and more human-like, governments are increasingly likely to regulate not only what AI can do, but how it behaves. The era of emotionally intelligent chatbots and digital companions is arriving quickly, and China’s latest proposal shows that regulators want to set the boundaries before these systems become even more embedded in everyday life.