AI Chat Apps Expose Minors to Inappropriate Content in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-powered chat companion apps in China are exposing minors to sexually suggestive and violent content, as age restrictions prove ineffective. These apps, marketed as emotional-support or role-playing companions, generate inappropriate dialogue and foster addictive interactions, harming minors' mental health and social development. Regulatory responses are emerging amid growing concern.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (AI chat companions) whose use has directly led to harm to minors' mental health and social development, which fits the definition of an AI Incident under harm to health and harm to communities. The AI systems generate inappropriate content and enable addictive interactions that breach protections for minors. The article reports realized harm, not just potential risk, and discusses regulatory responses, but the primary focus is on the harm caused by these AI systems.[AI generated]
AI principles
Safety
Human wellbeing

Industries
Consumer services

Affected stakeholders
Children

Harm types
Psychological

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots
Content generation


Articles about this incident or hazard

AI chat companions flirt with soft porn, crossing the bottom line of minor protection | Beijing News editorial

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chat companions) whose use has directly led to harm to minors' mental health and social development, which fits the definition of an AI Incident under harm to health and harm to communities. The AI systems generate inappropriate content and enable addictive interactions that breach protections for minors. The article reports realized harm, not just potential risk, and discusses regulatory responses, but the primary focus is on the harm caused by these AI systems.
Rife with soft porn: the hidden worries behind the boom of AI chat companion apps in mainland China | Chat companion apps | Minors | Violence | The Epoch Times

2026-03-17
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as generating harmful content (soft pornography, violence, and emotionally damaging dialogue) in chat companion apps. The AI's outputs have directly led to harm to minors' mental health and well-being, including reported extreme cases of suicide. The apps lack effective age verification, allowing minors to be exposed to inappropriate content, which is a failure in the AI system's use and safeguards. The harms include injury to health, violation of rights (protection of minors), and harm to communities. Thus, the event meets the criteria for an AI Incident.
Rife with soft porn: the hidden worries behind the boom of AI chat companion apps in mainland China | Chat companion apps | Minors | Violence | The Epoch Times

2026-03-17
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chat companions) whose use has directly caused harm to minors by exposing them to inappropriate sexual and violent content, violating their rights and harming their health. The AI systems' outputs have led to realized harm, including psychological harm and extreme cases like suicide. This fits the definition of an AI Incident because the AI's development and use have directly led to harm to persons (minors) and violations of rights. The article does not merely warn of potential harm but documents ongoing harm and incidents.
Borderline soft porn: the AI children chat with is full of unhealthy themes like "falling for my sister-in-law"

2026-03-16
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as virtual companions or chatbots that generate sexualized and inappropriate content. The AI systems are used by minors who are exposed to harmful soft pornographic and violent content, which is a direct harm to their health and well-being. The ineffective age verification and the ability to bypass protective modes demonstrate a failure in the AI systems' use and deployment, leading to realized harm. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to a vulnerable group (minors), including psychological and emotional harm, and breaches obligations to protect minors under applicable law. The article also references legal and regulatory frameworks emphasizing the need for protection, further supporting the classification as an AI Incident.

2026-03-16
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The AI companion apps involve AI systems generating sexualized, inappropriate content for minors, violating laws protecting minors and causing harm to their mental health. The AI e-commerce platform uses AI 'intelligent agents' as part of a deceptive multi-level marketing scheme, leading to financial harm. The GEO services manipulate AI-generated content to covertly advertise, infringing on users' rights to truthful information. These harms are direct or indirect consequences of AI system use or misuse. The article describes actual harms occurring, not just potential risks, so the classification is AI Incident.
AI chat companions flirt with soft porn, crossing the bottom line of minor protection | Beijing News editorial -- The Beijing News

2026-03-17
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chat companions) whose use has directly led to harm to minors' mental health and social well-being, including exposure to inappropriate sexual content and addictive behavior. This fits the definition of an AI Incident because the AI's outputs have caused realized harm to a vulnerable group (minors), violating protections and causing psychological injury. The article also discusses regulatory responses but the primary focus is on the harm caused by the AI systems' use.
Middle-aged and older men mired in pornographic dating apps: of all 360 trades, every single one deals in smut......

2026-03-18
Phoenix New Media (ifeng.com)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chat software, AI-powered dating apps) whose use has directly led to harms such as sexual scams, exploitation, and exposure of minors to harmful content. The harms are realized and ongoing, including violations of rights and harm to communities. The article describes the use and misuse of AI systems in a way that causes these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information. The presence of AI chat software and AI-driven content recommendation or generation is reasonably inferred from the description of AI chat software and the use of AI in these apps.
The hidden soft porn in AI chat software: the AI children talk to is full of plotlines like "falling for my sister-in-law"

2026-03-17
China.com Technology
Why's our monitor labelling this an incident or hazard?
The AI chat app is explicitly described as an AI system generating outputs (dialogue) that include inappropriate sexual content directed at minors. The system's use has directly led to harm by exposing children to soft pornographic content, which is a form of harm to health and well-being of a vulnerable group. The failure of the protective underage mode to effectively prevent this exposure further implicates the AI system's malfunction or misuse. Hence, this is an AI Incident involving harm to a group of people (minors).
AI chat software rife with sexual innuendo as minors grow addicted

2026-03-17
on.cc東網
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it powers virtual characters that interact with users, including minors. The software's use has directly led to harm by exposing minors to sexual content and psychological risks, fulfilling the criteria for harm to health and rights. The lack of effective age verification and content moderation exacerbates the issue. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use.
Borderline soft porn: the AI children chat with is full of unhealthy themes like "falling for my sister-in-law"

2026-03-16
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI chatbots in companion apps) whose use has directly led to harm by exposing minors to inappropriate sexual and violent content, which can damage their mental health and development. The AI's outputs include soft pornographic and harmful role-playing content, and the failure of age verification mechanisms allows minors to access this content. This is a clear violation of protections for minors and constitutes harm to a vulnerable group, meeting the definition of an AI Incident. The article also references real cases of harm, including a suicide linked to AI chat interactions, reinforcing the direct harm caused by these AI systems.
Sina AI Hot Topics Hourly Report | March 16, 2026, 15:00 - today's real-time AI news roundup

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The AI chat app "筑梦岛" uses AI-generated dialogue that includes inappropriate, soft-pornographic content directed at minors, despite claiming to have a minor protection mode. This directly harms the health and well-being of minors (harm category a). The AI system's use in this context has led to realized harm, fulfilling the criteria for an AI Incident. Other parts of the article are general AI ecosystem updates or product announcements, which do not meet the threshold for incidents or hazards. Therefore, the overall classification is AI Incident based on the direct harm caused by the AI chat app to minors.
AI chat companions flirt with soft porn, crossing the bottom line of minor protection | Beijing News editorial

2026-03-16
Sina Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chat companions) whose use has directly led to harm to minors' mental health and social well-being, which qualifies as harm to a group of people under the AI Incident definition. The AI systems generate inappropriate content and facilitate addictive interactions, causing realized harm rather than just potential risk. Therefore, this is an AI Incident rather than a hazard or complementary information.
People's Daily sharp commentary: AI "soft porn" must not be left to pollute children's minds and bodies

2026-03-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chat companions) whose use has directly led to harm to minors' mental and emotional health, a form of harm to persons (a). The AI systems generate and facilitate exposure to inappropriate sexual content, which is a violation of legal protections for minors and harms their development. This constitutes an AI Incident because the AI system's use has directly caused harm and breaches legal obligations. The article also discusses regulatory failures and calls for stronger measures, but the core is the realized harm caused by AI systems' outputs to minors.
Regulating "AI chat companions" to hold up a clear and clean online sky for minors

2026-03-18
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (AI companion chatbots) whose use has directly led to harm to minors, including psychological harm and exposure to inappropriate content, which constitutes injury to health and violation of rights under the definitions. The article also points out regulatory violations and the failure of protective measures, indicating the AI systems' role in causing these harms. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is ongoing and realized.
Chinese AI chat companion apps rife with porn and violence | The Epoch Times - Taiwan

2026-03-19
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chat companion apps) whose use directly leads to harm to minors' health and well-being through exposure to inappropriate sexual and violent content. The AI systems generate harmful outputs and fail to adequately protect or respond to vulnerable users, including minors expressing suicidal thoughts. This meets the definition of an AI Incident due to direct harm to persons (minors) caused by the AI system's outputs and inadequate safeguards.
Beware the "gentle trap" in AI chat companions eroding adolescents

2026-03-19
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chat companions) whose use has directly led to harm to minors by exposing them to inappropriate content and fostering emotional dependency, which harms their development and violates legal protections. The article details realized harm (not just potential), including psychological and social harm to minors, and legal violations. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.