AI Companion Addiction Sparks Youth Mental Health Crisis

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI companion chatbots such as Replika, DeepSeek, and domestic Chinese apps have become emotional outlets for young people. Excessive reliance fosters addiction, social withdrawal, and psychological harm. Reports cite mounting subscription costs, mental health problems, and at least one teenage suicide linked to an AI chatbot, prompting calls for tighter oversight. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (AI virtual romantic partners) whose use has directly led to emotional and social harms among users, such as distress over the loss of AI companions, financial strain from subscription costs, and potential negative psychological effects from overreliance on AI for emotional support. These harms fall under injury or harm to health (mental health) and harm to communities (emotional and social well-being). Therefore, this is an AI Incident. The article does not merely discuss potential future harm or general AI developments, nor is it a complementary information piece about responses or governance. The harms are realized and directly linked to the AI systems' use. [AI generated]
AI principles
Human wellbeing, Safety, Accountability, Transparency & explainability, Respect of human rights

Industries
Consumer services; Media, social platforms, and marketing; Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers, Children

Harm types
Psychological, Physical (death), Public interest

Severity
AI incident

AI system task
Interaction support/chatbots, Content generation


Articles about this incident or hazard

AI Romance: "I Have a Husband, but I'm in Love with My AI Boyfriend" - Inside the Psychology of "Human-Machine Love" Among Chinese Women - BBC News 中文

2025-02-14
BBC
Why's our monitor labelling this an incident or hazard?
The event involves systems explicitly described as AI companions with learning capabilities and emotional simulation, fitting the definition of AI systems. The article discusses users' emotional reliance on these AI systems and expert concerns about potential mental health risks and social consequences, indicating plausible future harm. However, there is no report of actual injury, violation of rights, or other realized harm directly caused by the AI systems. Therefore, the event fits the definition of an AI Hazard: the AI systems' use could plausibly lead to psychological harm and social disruption in the future, but no AI Incident has yet occurred.
Marriage-Averse Young People Are Hooked on Virtual Lovers: Spending 1,000 Yuan a Month on a Romance That Never Ends in a Breakup

2025-02-15
凤凰网 (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI virtual romantic partners) whose use has directly led to emotional and social harms among users, such as distress over the loss of AI companions, financial strain from subscription costs, and potential negative psychological effects from overreliance on AI for emotional support. These harms fall under injury or harm to health (mental health) and harm to communities (emotional and social well-being). Therefore, this is an AI Incident. The article does not merely discuss potential future harm or general AI developments, nor is it a complementary information piece about responses or governance. The harms are realized and directly linked to the AI systems' use.
Never Contradicting, Never Spoiling the Fun... This Generation of Young People Uses Code to "Cure" Loneliness

2025-02-14
科学网 (ScienceNet.cn)
Why's our monitor labelling this an incident or hazard?
The article explicitly concerns AI systems (AI chatbots and companion AI applications) used for emotional support. It reports direct harms resulting from the use of these AI systems, including psychological harm (addiction, emotional dependency, exacerbation of loneliness, and mental health issues) and a documented case of a teenager's suicide linked to AI chatbot interaction. These harms fall under injury or harm to health (mental health) and harm to individuals. The AI systems' malfunction or inappropriate responses contributed to these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
This Generation of Young People Uses Code to "Cure" Loneliness

2025-02-14
科学网 (ScienceNet.cn)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI chatbots being used for emotional support but also causing significant harm, including a fatality linked to AI chatbot interaction. The AI systems are involved in the use phase, and their malfunction or inappropriate responses have directly led to harm (mental health deterioration and suicide). The harms include injury to health (mental health and death) and harm to individuals. This meets the criteria for an AI Incident. The article also discusses broader societal impacts and risks, but the presence of realized harm takes precedence over classification as a hazard or as complementary information.
Investigation | Young People Are Falling for AI Companions: A Crisis Hidden Behind the "Sweetness"

2025-02-12
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (AI companion/chatbot applications) whose use has directly led to significant harm, including psychological harm and a fatality (the 14-year-old's suicide). The AI's role in fostering emotional dependency and addiction constitutes direct or indirect harm to individuals' health and well-being, and the lawsuit linking the AI to the death underscores that harm. Concerns about underage access and exposure to inappropriate content further heighten the risk of harm. Therefore, this event qualifies as an AI Incident due to realized harm caused by the development and use of AI systems.