AI Virtual Companion Apps Expose Minors to Sexual and Violent Content in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple AI virtual companion apps in China, including EchoMe and 筑梦岛, have been found generating sexualized, violent, and emotionally manipulative content that is often accessible to minors because of weak safeguards. The apps also induce excessive paid spending and let users create custom explicit characters, prompting regulatory scrutiny and confirmed legal violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems used in virtual companion apps that generate harmful content, including sexual and violent scenarios, some of it targeted at or accessible to minors. The AI's outputs have directly led to harm: exposure of minors to inappropriate content, emotional manipulation, and inducement to excessive paid consumption, all of which violate rights and harm individuals and communities. Given the AI-generated sexual and violent content, the lack of effective age verification, and the inducement of minors into paid consumption, the AI systems' use has directly caused significant harm, so the event qualifies as an AI Incident under the framework.[AI generated]
AI principles
Safety
Respect of human rights

Industries
Consumer services

Affected stakeholders
Children

Harm types
Psychological
Economic/Property
Human or fundamental rights

Severity
AI incident

Business function:
Other

AI system task:
Content generation
Interaction support/chatbots


Articles about this incident or hazard

Investigation into the chaos of AI virtual companions: selectable "mistress" and violent storylines, characters speak with sexual innuendo

2026-05-11
东方财富网
Multiple AI virtual companion apps cross the line: a 13-yuan top-up lets users customize a pornographic AI persona

2026-05-11
驱动之家
Why's our monitor labelling this an incident or hazard?
The apps use AI systems to generate virtual companions with sexual and violent content, including the targeted inducement of minors into inappropriate interactions and paid consumption. The AI's outputs directly harm minors' health and well-being (harm to persons), breach legal protections, and harm society. The AI's role is explicit: it generates and customizes the virtual characters and their dialogue. Because the harms are occurring rather than merely potential, and the event describes direct misuse and failed safeguards, this is an AI Incident.
Investigation into the chaos of AI virtual companions: selectable "mistress" and violent storylines, characters speak with sexual innuendo

2026-05-11
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as generating interactive virtual companion content with sexual and violent themes, accessible to minors because of weak safeguards. The AI's outputs include sexual innuendo and emotional manipulation, which can damage minors' mental health and well-being, constituting harm to persons and communities. The article also documents legal violations and regulatory challenges, confirming the AI systems' role in these harms. The event therefore qualifies as an AI Incident: the AI systems' use has directly led to significant harms as defined in the framework.
Multiple AI virtual companion apps are flooded with sexualized characters in revealing outfits, with hidden traps that induce spending

2026-05-11
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (virtual companions) generating explicit, sexualized, and violent content that is prohibited by law and harmful to users, including minors. The AI's role in producing this content and enabling consumption traps is direct and ongoing. The article reports realized harms, such as exposure to inappropriate content, induced spending, and psychological risks, meeting the criteria for an AI Incident. The regulatory context and expert opinions further support classifying this as an incident rather than a mere hazard or complementary information.
Company behind pornographic AI virtual companions was summoned by regulators last year; it has just registered a new company

2026-05-11
新浪财经
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved: it generates the virtual companion content, including dialogue and character design. Its outputs have directly caused harm by producing inappropriate sexual content accessible to minors and by inducing exploitative consumption, breaching legal and regulatory frameworks. The company's prior official reprimand confirms the harm has materialized. The event does not merely warn of potential harm; it reports actual regulatory action prompted by the AI system's outputs, qualifying it as an AI Incident rather than a hazard or complementary information.
AI virtual companions: expert-level AI agents built for role-play, psychotherapy, dating, and other domains (April 2026)

2026-05-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI virtual companions) and discusses their use and potential impacts, but it describes no direct or indirect harm resulting from these systems and presents no credible, imminent risk of harm. Its discussion of privacy concerns and increased loneliness is general and cautionary, not tied to a specific incident or event. The article mainly provides background, market data, user-experience insights, and technological features, which matches the definition of Complementary Information. The event is therefore best classified as Complementary Information rather than an AI Incident or AI Hazard.