
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A dossier by the Meter Foundation reveals that nearly 3,000 Italian children were victimized over six months by AI systems used to generate deepfake child sexual abuse material and to facilitate online grooming via chatbots. These AI-driven abuses, occurring mainly on platforms such as Signal, have caused severe harm to minors’ rights and dignity.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (chatbots and deepfake image generation) being used to manipulate and exploit minors, causing direct harm including privacy violations, reputational damage, and the production of child sexual abuse material. The involvement of AI in generating and distributing harmful content and in emotionally manipulating children is central to the harm described. This event therefore meets the criteria for an AI incident, as the harm was realized and caused by the use of AI systems.[AI generated]