
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Medical experts have raised concerns about Snapchat's AI chatbot, 'My AI', which provides human-like responses that can mislead young users seeking mental health support. The AI sometimes gives authoritative but incorrect advice, potentially causing psychological harm and confusion, especially among minors who may struggle to distinguish the AI from a real person.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves an AI system (the Snapchat AI chatbot) whose use has directly raised concerns about harm to users' mental health, particularly minors. The AI's misleading or incorrect responses pose a risk of injury or harm to health, fulfilling the criteria for an AI Incident. The harm is realized or ongoing: users are already relying on the AI for mental health support, and experts warn of the impact of its misinformation. This therefore qualifies as an AI Incident due to direct harm to health caused by the AI system's outputs.[AI generated]