AI-Generated Deepfake Nudes of Children Prompt Parental Caution Online


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The rise of AI-powered apps that generate fake nude images from children's photos has led to psychological harm and privacy violations, prompting many parents to stop sharing their children's images online. These AI incidents have caused trauma to victims and spurred legal and social responses to protect minors.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems (generative AI applications) being used to create non-consensual deepfake nude images of children, which constitutes a violation of rights and causes harm to individuals. The harms are realized and ongoing, including trauma to victims and privacy breaches. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to individuals and communities (children and their families).[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety; Robustness & digital security; Accountability; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Consumer services; Digital security

Affected stakeholders
Children

Harm types
Psychological; Human or fundamental rights; Reputational; Public interest

Severity
AI incident

Business function
Other

AI system task
Content generation; Recognition/object detection


Articles about this incident or hazard


Why AI should make parents reconsider posting photos of their children

2025-08-13
La Nacion
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI applications) being used to create non-consensual deepfake nude images of children, which constitutes a violation of rights and causes harm to individuals. The harms are realized and ongoing, including trauma to victims and privacy breaches. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to individuals and communities (children and their families).

AI increases the risks of posting your children's photos online

2025-08-15
The New York Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly—AI generative models used to create fake nude images. The harm is realized, as victims suffer trauma equivalent to that caused by real photos being shared without consent, which constitutes harm to persons and communities. The article describes the use of these AI systems leading directly to violations of privacy and dignity, fitting the definition of an AI Incident. Although there is mention of legal frameworks, the continued proliferation and use of these apps indicate ongoing harm rather than just potential harm or complementary information.

Posting photos of your children online: the warning AI leaves for parents

2025-08-13
Prensa Libre
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI generative systems that create fake nude images without consent, causing real harm to victims, including children. This constitutes a violation of rights and harm to individuals and communities. The AI system's use in this context is central to the harm described, fulfilling the criteria for an AI Incident. The article also discusses legal and societal responses, but the primary focus is on the realized harm caused by AI misuse.

Why AI should make parents rethink posting photos of their children online

2025-08-13
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI generative systems used maliciously to create fake nude images of children, causing direct harm to the victims. This constitutes a violation of rights and harm to individuals and communities. The AI system's use is central to the harm described, fulfilling the criteria for an AI Incident. The article also discusses ongoing legal and social responses but the primary focus is on the realized harm caused by AI misuse.

AI increases the risks of posting your children's photos online

2025-08-15
El Diario de Juárez
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI applications) that create fake nude images without consent, which have been used to harm children and others by spreading these images online. This constitutes a violation of privacy and causes psychological harm, fulfilling the criteria for harm to persons or communities. The harms are realized and ongoing, with examples of victims and legal responses. The AI system's use and misuse directly lead to these harms. Hence, this is an AI Incident rather than a hazard or complementary information.

Fewer and fewer parents are uploading their children's phone photos to the internet. The culprit has a first and last name

2025-08-12
Xataka Móvil
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (nudifier apps) that generate fake pornographic images from photos of children, which directly leads to psychological harm to minors. This constitutes a violation of rights and harm to individuals. The misuse of these AI systems has already caused harm, as evidenced by cases in schools and legislative responses. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the use of AI systems in generating harmful content involving children.