Experts Warn Snapchat AI Chatbot Poses Mental Health Risks to Young Users


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Medical experts have raised concerns about Snapchat's AI chatbot, 'My AI', whose human-like responses can mislead young users seeking mental health support. The AI sometimes gives authoritative but incorrect advice, potentially causing psychological harm and confusion, especially among minors who may struggle to distinguish AI from real people.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (the Snapchat AI chatbot) whose use has directly led to concerns about harm to users' mental health, particularly minors. The AI's misleading or incorrect responses pose a risk of injury or harm to health, fulfilling the criteria for an AI Incident. The harm is realized or ongoing, as users are already relying on the AI for mental health support, and experts warn about the misinformation's impact. Therefore, this qualifies as an AI Incident due to direct harm to health caused by the AI system's outputs.[AI generated]
AI principles
Transparency & explainability; Safety; Human wellbeing; Robustness & digital security; Accountability

Industries
Media, social platforms, and marketing; Healthcare, drugs, and biotechnology

Affected stakeholders
Children

Harm types
Psychological

Severity
AI incident

Business function
Other

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard


A Snapchat technology threatens its users' health

2023-05-07
Al Bayan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Snapchat AI chatbot) whose use has directly led to concerns about harm to users' mental health, particularly minors. The AI's misleading or incorrect responses pose a risk of injury or harm to health, fulfilling the criteria for an AI Incident. The harm is realized or ongoing, as users are already relying on the AI for mental health support, and experts warn about the misinformation's impact. Therefore, this qualifies as an AI Incident due to direct harm to health caused by the AI system's outputs.

Doctors warn against Snapchat's new service | Al-Mowaten electronic news site for Saudi, Gulf, and international news

2023-05-07
Al-Mowaten electronic newspaper
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (My AI chatbot powered by ChatGPT) used for mental health advice. Medical experts warn that the AI's responses can sometimes be misleading or incorrect, which could lead to harm to users' mental health. This constitutes a plausible risk of harm stemming from the AI's use, fitting the definition of an AI Hazard. There is no report of actual harm occurring yet, so it is not an AI Incident. The article is not merely complementary information because it focuses on the risks and warnings about the AI system's use rather than updates or governance responses.

Watch out for your children: a destructive app on Snapchat - Al-Manatiq Saudi newspaper

2023-05-10
Al-Manatiq Saudi newspaper
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered human-like responses on Snapchat that can confuse young users and potentially cause mental health harm. The involvement of AI in generating authoritative but sometimes false answers is a direct factor in the risk of psychological harm. Since the harm is realized or ongoing (mental health damage to children), this qualifies as an AI Incident under the definition of harm to health caused by AI system use.

Beware: this new Snapchat technology is dangerous to your health - Twasul electronic newspaper

2023-05-08
Twasul news, www.twasul.info
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'My AI' chatbot on Snapchat) whose use in mental health support could plausibly lead to harm (misinformation, emotional harm, or psychological issues), especially among vulnerable users such as teenagers. Since the article focuses on warnings and potential risks rather than describing any realized harm, this fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because its main focus is the potential for harm, not responses or ecosystem updates. Therefore, the classification is AI Hazard.