AI-Generated Deepfake Abuse of Minors in Italy Exposed by Meter Dossier

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A dossier by the Meter Foundation reveals that nearly 3,000 Italian children were victimized over six months by AI systems used to generate deepfake child sexual abuse material and to facilitate online grooming via chatbots. These AI-driven abuses, circulated mainly through encrypted platforms such as Signal, have caused severe harm to minors’ rights and dignity.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems (chatbots and deepfake image generation) being used to manipulate and exploit minors, causing direct harm including privacy violations, reputational damage, and facilitating child sexual abuse material. The involvement of AI in producing and distributing harmful content and in emotionally manipulating children is central to the harm described. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by AI system use.[AI generated]
AI principles
Safety; Respect of human rights; Robustness & digital security; Accountability; Human wellbeing

Industries
Digital security; Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard

L'IA spoglia i bambini: la nuova frontiera della pedofilia passa per chatbot e deepfake - insalutenews.it

2025-06-24
insalutenews.it
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (chatbots and deepfake image generation) being used to manipulate and exploit minors, causing direct harm including privacy violations, reputational damage, and facilitating child sexual abuse material. The involvement of AI in producing and distributing harmful content and in emotionally manipulating children is central to the harm described. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by AI system use.
Meter, bambini e adolescenti sempre più a rischio - Vatican News

2025-06-23
vaticannews.va
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (generative AI for image manipulation and chatbot interactions) that have directly led to significant harm to children and adolescents, including violations of their rights and psychological injury. The dossier documents realized harm (AI Incident) through the generation and distribution of AI-enabled child sexual abuse material and manipulation of minors. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly caused harm to vulnerable individuals.
Associazione Meter. Intelligenza artificiale, nuova frontiera dei pedofili

2025-06-23
Avvenire
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems such as chatbots and deepfake technology being used to sexually exploit children, which constitutes direct harm to individuals (children) and violations of their rights. The AI systems are actively used to manipulate, deceive, and abuse minors, causing real and significant harm. The harm is ongoing and documented with concrete numbers, making this a clear AI Incident rather than a potential hazard or complementary information. The involvement of AI in the abuse and the resulting harms meet the criteria for an AI Incident under the OECD framework.
Bambini e adolescenti sempre più a rischio

2025-06-23
osservatoreromano.va
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to generate and manipulate child sexual abuse material, including deepnude and deepfake images, which have directly harmed thousands of minors. This constitutes a clear violation of rights and harm to communities, fulfilling the criteria for an AI Incident. The involvement of AI in producing and spreading illegal and harmful content is direct and central to the harm described. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Zuppi e Baturi incoraggiano don Di Noto a combattere gli usi criminali dell'IA. "Spogliano i bambini con questa tecnologia, non mettete le loro foto su Internet" - FarodiRoma

2025-06-23
FarodiRoma
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (chatbots, generative adversarial networks for deepfake image manipulation) being used to cause harm to minors through sexual abuse and exploitation. The harms include violation of human rights, harm to children’s dignity and bodily integrity, and facilitation of illegal content distribution. The AI systems are actively involved in the abuse (use phase) and have directly led to realized harm, meeting the criteria for an AI Incident. The article also discusses legislative and societal responses, but the primary focus is on the ongoing harm caused by AI misuse.
2025-06-24
csvnapoli.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems such as chatbots and deepfake generation software being used to manipulate minors and produce harmful content. The harms include psychological and reputational damage to children, violations of privacy, and the facilitation of child sexual abuse material, which are serious human rights violations and harms to individuals. The AI systems' development and use have directly led to these harms. The presence of encrypted platforms complicates law enforcement efforts but does not negate the AI systems' role in causing harm. Hence, this qualifies as an AI Incident under the OECD framework.