
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
In 2025, the Internet Watch Foundation reported a 260-fold increase in AI-generated child sexual abuse material online, with over 8,000 images and videos identified. Most videos were classified as the most severe under UK law, highlighting AI's role in producing increasingly extreme and realistic illegal content.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves the misuse of AI systems (generative AI models) to produce illegal and harmful content (CSAM), which constitutes a violation of human rights and causes significant harm to children and communities. The AI systems' outputs have directly led to the dissemination of harmful material, fulfilling the criteria for an AI incident. The article also references ongoing harm and the need for regulatory responses, confirming that the harm is realised and ongoing rather than merely potential.[AI generated]