AI Misuse Drives Surge in Child Sexual Abuse Content Online


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In 2025, the Internet Watch Foundation reported a 260-fold increase in AI-generated child sexual abuse videos, identifying over 8,000 AI-generated images and videos in total. Most of the videos were classified in the most severe category under UK law, highlighting AI's role in producing increasingly extreme and realistic illegal content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the misuse of AI systems (generative AI models) to produce illegal and harmful content (CSAM), which constitutes a violation of human rights and causes significant harm to children and communities. The AI systems' outputs have directly led to the dissemination of harmful material, fulfilling the criteria for an AI Incident. The article also references ongoing harm and the need for regulatory responses, confirming that the harm is realized and ongoing rather than merely potential.[AI generated]
AI principles
Respect of human rights; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Amount of AI-generated child sexual abuse material found online surged in 2025

2026-03-24
The Guardian
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (generative AI models) to produce illegal and harmful content (CSAM), which constitutes a violation of human rights and causes significant harm to children and communities. The AI systems' outputs have directly led to the dissemination of harmful material, fulfilling the criteria for an AI Incident. The article also references ongoing harm and the need for regulatory responses, confirming that the harm is realized and ongoing rather than merely potential.

AI fuels surge in child abuse content, new report finds

2026-03-24
Euronews English
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate illegal and harmful child sexual abuse content, which constitutes a violation of fundamental human rights and legal protections. The article documents actual harm occurring through the creation and distribution of AI-generated CSAM, with detailed evidence of the scale and severity of the content. The AI systems' use is central to the harm, as they enable the production of more explicit and complex abusive material than before. This meets the criteria for an AI Incident, as the AI system's use has directly led to significant harm to individuals and communities, specifically children subjected to sexual exploitation and abuse.

AI-generated child sexual abuse videos up 260-fold in a year, watchdog finds

2026-03-24
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI systems are being used to produce child sexual abuse videos, which is illegal and causes direct harm to children and society. The AI's involvement is central to the incident, as it lowers the barrier to creating and distributing such content, increasing both its quantity and severity. This meets the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The report also mentions regulatory responses, but its primary focus is the realized harm caused by AI misuse, not merely potential harm.

Amount of AI-enabled child sexual abuse imagery increased in 2025 - report

2026-03-24
Rappler
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI-generated CSAM has increased and is being distributed on various platforms, causing direct harm to individuals and communities by perpetuating child sexual abuse imagery. The AI systems' development and use have directly led to violations of fundamental rights and legal protections, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and involves sophisticated AI-generated content that is illegal and harmful.

AI-generated child sexual abuse material rises 14% in 2025: Safety watchdog

2026-03-24
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit, as the content is AI-generated. The use of AI to create realistic CSAM directly leads to harm to children, a severe violation of human rights and legal protections. This constitutes an AI Incident because the AI system's use has directly led to significant harm (child sexual abuse material dissemination), fulfilling the criteria for harm to persons and violation of rights under the definitions provided.

AI misuse drives alarming surge in online child abuse content in 2025

2026-03-24
The News International
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI misuse has directly led to a large increase in the creation and dissemination of child sexual abuse content, which is a severe harm to individuals and a violation of human rights. The AI systems are used to generate realistic images and videos of abuse, which is a direct cause of harm. The involvement of AI is clear and central, and the harm is realized, not just potential. Hence, this event meets the criteria for an AI Incident.

AI child abuse images surge as watchdog warns of criminal misuse

2026-03-24
Financial Times News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems to create child sexual abuse material, which is illegal and causes direct harm to children and survivors. The involvement of AI in producing and distributing this content is central to the incident. The harms include violations of human rights, criminal offenses, and significant societal harm. The event meets the criteria for an AI Incident because the AI system's use has directly led to realized harm, including the creation and spread of illegal and harmful content. The article also references regulatory responses and the need for safety-by-design approaches, but the primary focus is on the realized harm caused by AI misuse.