Australia Orders AI Chatbot Firms to Address Child Safety After Harm Reports

Australia's eSafety Commissioner has ordered four AI chatbot companies, including Character.ai, to detail the measures they take to protect children from harmful content such as sexual material and encouragement of self-harm. The action follows reported incidents, including a suicide linked to chatbot interactions, and companies that fail to respond face potential fines.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (AI chatbots) whose use has directly or indirectly led to harm or risk of harm to minors, including exposure to sexual content and encouragement of self-harm or suicide. The investigation and regulatory notices are responses to these harms. Since harm has occurred (e.g., the reported suicide linked to chatbot interaction), the event qualifies as an AI Incident: the chatbots' role in causing or enabling harm to minors constitutes a violation of rights and harm to the health of persons.[AI generated]
AI principles
Accountability, Safety, Human wellbeing, Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Physical (death), Psychological

Severity
AI incident

AI system task
Interaction support/chatbots, Content generation


Articles about this incident or hazard

Australia sends notice to four AI chatbot firms; asks how they....

2025-10-22
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chatbots) whose use has directly or indirectly led to harm or risk of harm to minors, including exposure to sexual content and encouragement of self-harm or suicide. The investigation and regulatory notices are responses to these harms. Since harm has occurred (e.g., the reported suicide linked to chatbot interaction), the event qualifies as an AI Incident: the chatbots' role in causing or enabling harm to minors constitutes a violation of rights and harm to the health of persons.

There is little evidence AI chatbots are 'bullying kids' - but this doesn't mean these tools are safe

2025-10-23
Yahoo!7 News
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI chatbots, their use by children, and the associated risks. However, it argues that there is no documented widespread pattern of chatbots bullying children or causing harm autonomously; the tragic cases mentioned are isolated and do not establish a systemic AI Incident. The article reports no direct or indirect harm clearly caused by AI systems, nor a plausible immediate hazard leading to harm. It mainly provides context, concerns, and policy responses, such as enforceable industry codes and the need for protective measures, fitting the definition of Complementary Information.

Crackdown to protect kids from AI-induced self harm

2025-10-23
Yahoo!7 News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (companion chatbots) whose use has led to or is leading to harm to minors, including encouragement of self-harm and sexually explicit conversations. The regulatory action is a response to these harms, indicating that the harms are realized or ongoing. The AI systems' outputs directly influence vulnerable users, causing injury or harm to health, fulfilling the criteria for an AI Incident. The legal notices and potential fines underscore the seriousness of the harm and the AI systems' pivotal role in causing it.

AI Chatbot Developers Probed on How They Protect Children

2025-10-23
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event describes a regulatory probe into AI chatbots' role in exposing minors to harmful content, which implies potential or ongoing harm related to AI use. However, the article does not detail a specific realized harm caused by the AI systems; it focuses on the investigation and the potential for harm. This fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to harm to children and the inquiry aims to address this risk. It is not Complementary Information because the main focus is the regulatory action itself rather than a response to a past incident, and it is not an AI Incident because no direct or indirect harm is explicitly reported as having occurred due to the AI systems.

Australia Tells AI Chatbot Companies to Detail Child Protection Steps

2025-10-22
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose use has been linked to harms to children (exposure to sexual content, self-harm encouragement). However, the article primarily reports on regulatory actions demanding safety measures and disclosures from companies, rather than describing a new or ongoing AI Incident or a direct harm event. The harms are known or alleged from past interactions, and the regulator's actions aim to prevent further harm. Therefore, this is best classified as Complementary Information, as it provides important context on governance and safety responses to AI-related risks rather than reporting a new incident or hazard.

Australia tells AI chatbot companies to detail child protection steps

2025-10-23
The Hindu
Why's our monitor labelling this an incident or hazard?
The presence of AI systems (chatbots with realistic conversational abilities) is explicit. The harms described include exposure to sexual content, encouragement of self-harm, and emotional dependency, which are direct harms to health and well-being of minors. The article references a lawsuit linked to a suicide following interaction with an AI companion, confirming realized harm. The regulator's actions to compel safety disclosures and impose fines further confirm the seriousness of the incident. Hence, this is an AI Incident as per the definitions provided.

There is little evidence AI chatbots are 'bullying kids' - but this doesn't mean these tools are safe

2025-10-22
The Conversation
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots) and discusses their use and potential misuse. It references tragic cases in which AI chatbots allegedly contributed to harm (suicide), which constitutes harm to persons. However, these are isolated cases and do not establish a widespread or autonomous pattern of AI-driven bullying. The article mainly reviews concerns, regulatory responses, and the current lack of evidence for systemic AI harm. It therefore describes neither a confirmed AI Incident nor a clear AI Hazard, but provides complementary information about ongoing societal and governance responses and the evolving understanding of AI chatbot risks.

Australia tells AI chatbot companies to detail child protection steps

2025-10-22
ThePrint
Why's our monitor labelling this an incident or hazard?
The AI systems (chatbots) are explicitly mentioned and are involved in interactions with minors that have directly led to harm, including emotional dependency, sexual conversations, and a reported suicide. The regulatory demand for safety measures is a response to these harms. Therefore, this event qualifies as an AI Incident because the AI systems' use has directly or indirectly led to significant harm to individuals (harm to health and well-being of minors). The focus is on the harms caused and the regulatory response, not merely potential future harm or general AI news.

Australia Orders AI Chatbot Firms to Protect Children from Harmful Content

2025-10-22
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots) and concerns about their potential to cause harm to children, which aligns with the definition of AI systems and possible harms. However, the main content is about the regulatory notices issued to companies and the demand for safety measures, which is a governance response. While it references a past incident (the US lawsuit), the article itself does not report a new AI Incident or direct harm event but rather a policy and enforcement action. Therefore, this is best classified as Complementary Information, as it provides context and updates on societal and governance responses to AI-related risks involving child safety.

Big social media platforms all covered by Australian social ban: eSafety Commissioner

2025-10-23
Biometric Update
Why's our monitor labelling this an incident or hazard?
While AI systems such as chatbots and biometric age estimation tools are involved, the article does not report a realized harm or incident caused by these AI systems. Instead, it details regulatory scrutiny, warnings, and potential future enforcement actions aimed at preventing harm. This fits the definition of Complementary Information, as it provides important context on governance responses and societal measures addressing AI-related risks, without describing a concrete AI Incident or AI Hazard event.

Australia orders AI chatbot companies to detail child protection measures

2025-10-22
News.az
Why's our monitor labelling this an incident or hazard?
The involvement of AI chatbot systems is explicit, as these systems interact with children and have been linked to harmful outcomes, including a reported suicide. The harms fall under injury or harm to health (mental health and risk of suicide) and potential exploitation of minors. Since the article reports on actual harms and regulatory actions in response to these harms, this qualifies as an AI Incident. The regulatory measures and inquiries are responses to existing incidents rather than mere potential risks or general AI news, so the classification is AI Incident.

eSafety Mandates AI Chatbots Ensure Aussie Kids' Safety

2025-10-22
Mirage News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) whose use could plausibly lead to significant harms to children, including exposure to sexually explicit content and encouragement of self-harm or suicide. Although no specific harm event is reported as having occurred, the legal notices and regulatory actions indicate a credible risk of such harms. The focus is on preventing these harms through compliance and safety measures, fitting the definition of an AI Hazard. It is not Complementary Information because the main narrative is not about responses to a past incident but about addressing potential future harms. It is not an AI Incident because no direct or indirect harm has been reported as having occurred yet.