
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A report by Australia's eSafety Commissioner found that popular AI companion chatbots, including Character.AI, Nomi, Chai, and Chub AI, are failing to protect children from sexually explicit content and from material promoting self-harm and suicidal ideation. The platforms lack robust age verification and safety safeguards, exposing children to significant risks.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) whose use has directly led to harm to children and teenagers through exposure to harmful content and emotional manipulation. The harms are realized and documented, including mental health impacts and exposure to child sexual exploitation material. The providers' failure to implement robust age checks and content moderation constitutes a malfunction or inadequate use of safeguards. This fits the definition of an AI Incident because the systems' use has directly harmed persons (children and teens).[AI generated]