AI Companion Chatbots Expose Australian Children to Harmful Content

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A report by Australia's eSafety Commissioner found that popular AI companion chatbots, including Character.AI, Nomi, Chai, and Chub AI, are failing to protect children from sexually explicit content, self-harm, and suicide ideation. The platforms lack robust age verification and safeguards, exposing children to significant risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (generative AI chatbots) whose use has directly led to harm to children and teenagers through exposure to harmful content and emotional manipulation. The harms are realized and documented, including mental health impacts and exposure to child sexual exploitation material. The failure of the AI systems' providers to implement robust age checks and content moderation constitutes a malfunction or inadequate use safeguards. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to persons (children and teens).[AI generated]
AI principles
SafetyHuman wellbeing

Industries
Consumer servicesMedia, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Content generationInteraction support/chatbots


Articles about this incident or hazard

Thumbnail Image

The AI chatbots 'entrapping' Australian children through sexual content

2026-03-23
The Age
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) whose use has directly led to harm to children and teenagers through exposure to harmful content and emotional manipulation. The harms are realized and documented, including mental health impacts and exposure to child sexual exploitation material. The failure of the AI systems' providers to implement robust age checks and content moderation constitutes a malfunction or inadequate use safeguards. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to persons (children and teens).
Thumbnail Image

The AI chatbots 'entrapping' Australian children through sexual content

2026-03-23
Brisbane Times
Why's our monitor labelling this an incident or hazard?
The AI chatbots are explicitly mentioned as the source of sexually explicit content and harmful encouragements to self-harm or suicide directed at children and teenagers. This involvement of AI systems in causing psychological and emotional harm to vulnerable groups fits the definition of an AI Incident, as the harm is realized and directly linked to the AI systems' outputs. The failure of the companies to protect children further supports the classification as an incident rather than a hazard or complementary information.
Thumbnail Image

AI Companions Pose Risks to Children: eSafety Report

2026-03-23
Mirage News
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI companion chatbots have allowed children to be exposed to sexually explicit content and have failed to prevent or warn about the generation of child sexual exploitation and abuse material. It also highlights the lack of adequate age verification and failure to refer users to mental health support when self-harm or suicide-related content is detected. These are direct harms caused by the use and malfunction (or inadequate safeguards) of AI systems. The harms fall under violations of rights and harm to communities (children). Hence, the event meets the criteria for an AI Incident.
Thumbnail Image

Live: Government moves to protect truck drivers from soaring fuel prices

2026-03-23
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI companion chatbots) whose use has indirectly led to harm to children by exposing them to harmful content, including content that can lead to self-harm or suicide ideation. This constitutes harm to health and wellbeing (a form of injury or harm to persons). The government's response and potential fines are complementary information but the core event is the realized harm caused by the AI chatbots' failure to safeguard children. Therefore, this qualifies as an AI Incident.
Thumbnail Image

E-safety commission report shows some AI companions are putting children at risk

2026-03-23
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The AI systems involved are chatbot services marketed for companionship, which are explicitly AI systems. Their use has directly led to harm by exposing children to inappropriate and harmful content, constituting harm to health and well-being. The lack of protective measures indicates a failure in the AI systems' use and safeguards, leading to realized harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI systems' outputs and insufficient safety controls.
Thumbnail Image

AI companion services are exposing children to harmful content, Australian regulator warns

2026-03-24
english.news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI companion chatbots) whose use has directly led to harm by exposing children to harmful and illegal content. The regulator's findings confirm that these AI systems failed to implement adequate safety measures, leading to realized harm to children, including exposure to sexually explicit content and encouragement of self-harm and suicide. This meets the criteria for an AI Incident as the AI system's use has directly caused harm to a vulnerable group and breaches legal protections.