AI-Generated Deepfake Nudes of 18 Minors Spark Investigation in Almería

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Spanish authorities are investigating the use of the AI application ClothOff to generate fake nude and sexual images of at least 18 underage female students at a secondary school in Almería. The incident, revealed by the provincial cybercrime prosecutor, highlights severe privacy violations and criminal offenses enabled by AI deepfake technology.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI application to generate realistic nude images of minors, which is a direct violation of their rights and constitutes child pornography under the law. The AI system's use has directly led to harm (violation of rights and dignity of minors), fulfilling the criteria for an AI Incident. The investigation and legal context confirm the harm has occurred, not just a potential risk. The involvement of AI in generating these images is clear and central to the event, and the harms are significant and clearly articulated.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Digital security; Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Prosecutors investigate the AI-generated nudes of 18 minors at a secondary school in Almería

2026-03-17
La Razón
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI application to generate realistic nude images of minors, which is a direct violation of their rights and constitutes child pornography under the law. The AI system's use has directly led to harm (violation of rights and dignity of minors), fulfilling the criteria for an AI Incident. The investigation and legal context confirm the harm has occurred, not just a potential risk. The involvement of AI in generating these images is clear and central to the event, and the harms are significant and clearly articulated.

Prosecutors investigate the use of AI to generate nude images of 18 minors at a secondary school in Almería

2026-03-17
Telecinco
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (deepfake technology) to generate realistic nude images of minors, which is a direct violation of their rights and constitutes child pornography. This is a clear case where the AI system's use has directly led to harm (violation of rights and potential psychological harm to the victims). Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The rest of the article discusses ethical and legal debates and tools related to AI, but the core event is the investigation of the AI-generated child pornography incident.

Prosecutors investigate the AI-generated nudes of 18 minors...

2026-03-17
Europa Press
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI application to create realistic virtual nudity of minors, which constitutes child pornography. This is a direct harm to the victims' rights and well-being, fulfilling the criteria for an AI Incident. The AI system's use has directly led to the creation of illegal content harming individuals, specifically minors, which is a serious violation of human rights and legal protections. The investigation confirms the harm has occurred, not just a potential risk, so it is not merely a hazard or complementary information.

AI-created sexual images of 18 minors from an Almería school under investigation

2026-03-17
IDEAL
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI applications to create realistic fake sexual images of minors, which is a direct violation of rights and potentially criminal under current laws. The harm is realized as these images affect the victims' honor and privacy, and investigations are underway. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

The role of law enforcement agencies in ensuring that AI respects fundamental rights

2026-03-17
lavozdealmeria.com
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by AI systems but rather focuses on societal and governance responses to AI challenges, including the deployment of an AI tool for monitoring hate speech and the importance of ethical frameworks and regulation. This fits the definition of Complementary Information, as it provides context, updates, and responses related to AI's impact on society and fundamental rights without describing a new AI Incident or AI Hazard.

Use of AI to generate nude images of 18 minors at a secondary school in Almería under investigation

2026-03-17
Andalucía Información
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI application to generate realistic deepfake images of minors in a sexualized manner, which is a direct violation of human rights and legal protections against child pornography. The harm is realized and ongoing, as the investigation is active and the AI-generated content has been produced and distributed. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm to individuals (minors) and breaches of legal and fundamental rights.

Prosecutors investigate nude images of 18 minors, generated with AI, at a secondary school in Almería

2026-03-17
EL MUNDO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ClothOff) used to generate manipulated sexual images of minors, which is a direct cause of harm to the victims' rights and dignity. The creation and distribution of such AI-generated child sexual abuse material is a criminal offense and constitutes a clear violation of human rights and legal protections. The harm is realized and ongoing, not merely potential, making this an AI Incident rather than a hazard or complementary information.

Prosecutors investigate AI-made nude images affecting 18 underage female students at a secondary school in Almería

2026-03-17
20 minutos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system to create manipulated sexual images of at least 18 minor students, which is a direct violation of their rights and dignity. The harm is realized and significant, involving virtual child pornography and moral harm. The AI system's role is pivotal as it generates the harmful content. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Prosecutors investigate AI-generated nude images of 18 minors at a secondary school in Almería

2026-03-17
El Español
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI application to generate sexual deepfake images of minors, which is a clear case of AI system use leading to harm. The harms include violations of human rights and legal protections for minors, specifically related to child pornography and moral integrity. Since the harm is realized and under investigation, this qualifies as an AI Incident rather than a hazard or complementary information.

Prosecutors investigate a case of AI-generated nudes affecting 18 female students at a secondary school in Almería

2026-03-17
Público.es
Why's our monitor labelling this an incident or hazard?
The AI system (ClothOff) is explicitly mentioned as being used to generate sexualized deepfake images of at least 18 underage students, which is a direct violation of their rights and constitutes a criminal offense. The harm is realized and ongoing, involving violations of fundamental rights and moral integrity. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in generating illegal and harmful content involving minors.

Generation of AI nude images of 18 minors at a secondary school in Almería under investigation

2026-03-17
Diario de Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system to generate sexualized images of minors, which is a direct violation of human rights and constitutes child pornography. The harm is realized and ongoing, as the images have been generated and victims identified. The AI system's role is pivotal as it enables the creation of these manipulated images. Hence, this is an AI Incident under the framework's definition of harm to individuals and violation of rights.

AI-generated nude images of 18 minors from a secondary school in Almería under investigation

2026-03-17
lavozdealmeria.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI applications based on deepfake technology to generate nude images of minors, constituting child pornography and violations of fundamental rights. The harm is realized and significant, involving legal and ethical breaches. The AI system's development and use directly caused this harm, fulfilling the criteria for an AI Incident. The investigation and legal response further confirm the materialization of harm rather than a potential risk, ruling out classification as a hazard or complementary information.

The AI scandal in Almería: 18 female students victims of fake nudes

2026-03-17
Diario de Almería
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake applications) used to generate fake sexual images of minors, causing direct harm to the victims' dignity, privacy, and safety, which fits the definition of an AI Incident. The harm is realized and ongoing, with legal investigations and convictions already in place. The AI system's use is central to the harm, as the images would not exist without the AI-generated manipulation. Therefore, this is not a hazard or complementary information but a clear AI Incident involving violations of rights and harm to individuals.

At least 18 underage female students "undressed" with AI at a secondary school in Almería: prosecutors investigate the case

2026-03-18
lavozdelsur.es
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (ClothOff) that generates realistic fake nude images and sexual scenes from real photos. The harm is direct and materialized: at least 18 minors have been victimized by the creation of these images, which is a violation of their rights and constitutes a criminal offense. The AI system's role is pivotal as it automates and facilitates the creation of these images, making the harm widespread and severe. Therefore, this qualifies as an AI Incident under the framework, as it involves direct harm to persons (minors) and violations of fundamental rights through the use of AI-generated content.