Spain Fines Minors for AI-Generated Sexual Images of Adolescents

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Almendralejo, Spain, minors used AI deepfake technology to create and disseminate fake nude images of adolescent girls without consent. The Spanish Data Protection Agency fined those responsible, marking the country's first sanction for AI-generated sexual content involving minors and highlighting the legal and ethical risks of AI misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI-generated nude images, indicating the use of an AI system to create manipulated content. The dissemination of these images without consent caused violations of personal rights and moral harm, fitting the definition of an AI Incident under violations of human rights and harm to communities. The sanction by the data protection agency and judicial rulings further confirm the harm's occurrence and the AI system's role in causing it. Therefore, this event qualifies as an AI Incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Data Protection Agency imposes €1,200 fine for disseminating AI-generated nudes

2025-11-06
EL MUNDO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated nude images, indicating the use of an AI system to create manipulated content. The dissemination of these images without consent caused violations of personal rights and moral harm, fitting the definition of an AI Incident under violations of human rights and harm to communities. The sanction by the data protection agency and judicial rulings further confirm the harm's occurrence and the AI system's role in causing it. Therefore, this event qualifies as an AI Incident.

Data Protection Agency imposes Europe's first sanction over an AI-generated fake nude

2025-11-06
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and disseminated unlawfully, causing harm to the victims (minors) in terms of privacy, personal data protection, and moral integrity. The sanction imposed by the AEPD confirms the recognition of harm caused by the AI system's misuse. This fits the definition of an AI Incident because the AI system's use directly led to violations of fundamental rights and harm to individuals. The event is not merely a potential risk or a complementary update but a concrete case of harm and legal response.

Spain issues fine for AI-generated sexual images of minors

2025-11-07
Economic Times
Why's our monitor labelling this an incident or hazard?
The event describes the creation and sharing of AI-generated sexual images of minors using real faces, which is a clear violation of data protection and fundamental rights. The involvement of AI in generating manipulated images that caused harm through dissemination is explicit. The legal fine confirms that harm has occurred and the AI system's use was central to this harm. Therefore, this qualifies as an AI Incident due to the realized violation of rights and harm caused by the AI-generated content.

Spain issues fine for AI-generated sexual images of minors

2025-11-06
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated sexual images involving real minors' faces, which is a clear violation of data protection laws and a breach of fundamental rights. The dissemination of such content causes harm to the individuals depicted and violates legal protections. The involvement of AI in generating and distributing this harmful content directly led to the incident and the subsequent fine, fitting the definition of an AI Incident due to violation of rights and harm to individuals.

Historic ruling in Spain over AI-generated nudes: €2,000 fine for generating images that violate women's privacy

2025-11-06
MARCA
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to generate manipulated nude images (deepfakes) without consent, which constitutes a violation of privacy and data protection rights. The sanction imposed by the AEPD confirms that harm has occurred due to the AI system's use. The involvement of minors as victims further underscores the severity of the harm. Since the AI system's use directly led to a breach of fundamental rights and personal harm, this is classified as an AI Incident under the framework.

Pioneering sanction in Europe: Data Protection Agency imposes €2,000 fine over an AI-generated fake nude

2025-11-06
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake generation) to create and distribute harmful content (fake nude images of a minor), which directly leads to violations of personal data rights and likely causes psychological and social harm to the victims. The regulatory sanction confirms the harm and legal breach. Since the AI system's use has directly led to realized harm (violation of rights and dissemination of illegal content), this is an AI Incident rather than a hazard or complementary information.

Spain Issues Fine for AI-Generated Sexual Images of Minors

2025-11-06
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated sexual images involving real minors' faces, which is a clear violation of data protection and fundamental rights. The dissemination of such content causes harm to the individuals depicted and breaches legal protections. The involvement of AI in generating and distributing this harmful content qualifies this as an AI Incident under the definitions provided, as it directly led to violations of rights and harm to individuals.

Using an AI app to undress a high-school classmate comes at a price: €2,000 fine for the minor's parents

2025-11-06
Xataka
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ClothOff) to create deepfake images without consent, which were then distributed, causing harm to the victims' privacy and dignity. This meets the definition of an AI Incident because the AI system's use directly led to violations of fundamental rights and harm to individuals. The imposition of fines and judicial sentences further confirms the materialization of harm. Therefore, this event is classified as an AI Incident.

Pioneering fine for disseminating AI-created nudes: one of the Almendralejo minors sanctioned €2,000

2025-11-06
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create deepfake nude images without consent, which were then disseminated by minors, causing harm to the individuals depicted. This constitutes a violation of human rights and moral integrity, fitting the definition of harm (c) under AI Incident. The AI system's use directly led to the harm, and legal sanctions have been imposed, confirming the realized harm. Hence, it is classified as an AI Incident.

Pioneering €2,000 fine for the parents of a minor who created fake nudes of a classmate with AI

2025-11-06
El Periódico
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate deepfake images, which constitutes an AI system's use leading to harm—specifically, violations of personal data and privacy rights. The harm has materialized as the images were distributed, and legal action has been taken, including fines and judicial measures. This meets the criteria for an AI Incident because the AI system's use directly led to a breach of fundamental rights and legal obligations. The sanction by the AEPD confirms the recognized harm and legal response.

€2,000 fine for the parents of a minor who created a nude image of a classmate with artificial intelligence

2025-11-06
eldiario.es
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create manipulated images (deepfakes) of minors without consent, which were then distributed, causing harm to the victims' privacy and dignity. This fits the definition of an AI Incident as the AI system's use directly led to violations of fundamental rights (privacy and protection of minors) and harm to communities (psychological and social harm). The imposition of a fine by a regulatory authority further confirms the recognition of harm. The event is not merely a potential risk or a complementary update but a realized harm caused by AI misuse.

Spain issues fine for AI-generated sexual images of minors

2025-11-06
ThePrint
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated sexual images of minors, which is a clear violation of laws protecting minors and data privacy. The sharing of such content causes harm to individuals and society, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations. The fine imposed by the Spanish agency confirms that harm has occurred and the AI system's role is pivotal in this incident.

Data Protection Agency imposes a pioneering sanction in Europe: €2,000 over a "deepfake"

2025-11-06
La Voz de Galicia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to create deepfake images, which are manipulated content generated by AI. The creation and dissemination of these images caused direct harm to the minors involved, violating their rights and causing moral harm. The legal and regulatory responses confirm the recognition of harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to violations of rights and harm to individuals.

A teenager created a nude image of his classmate using AI. His parents have been handed a €2,000 fine

2025-11-06
Genbeta
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create deepfake images that directly harmed individuals by violating their privacy and personal rights, particularly of minors. The creation and dissemination of these AI-generated fake images led to legal sanctions and judicial actions, indicating realized harm. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and personal data protection, which are harms under the framework. Therefore, the classification is AI Incident.

Spain imposes €1,200 sanction for disseminating AI-generated nudes

2025-11-06
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated nude images, indicating the use of an AI system to create manipulated content. The dissemination of these images without consent constitutes a violation of human rights and privacy, fulfilling the criteria for harm under the AI Incident definition. The sanction and judicial procedures confirm that harm has occurred due to the AI system's use. Therefore, this qualifies as an AI Incident.

€1,200 fine for disseminating AI-generated sexual content

2025-11-06
HERALDO
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to manipulate images to create sexual content involving real persons, including minors, which were then disseminated. This led to legal consequences and sanctions by the Spanish Data Protection Agency and the Juvenile Court, indicating direct harm to the individuals involved, including violations of their rights and moral integrity. The AI system's use was central to the harm caused, fulfilling the criteria for an AI Incident under violations of human rights and moral integrity.

Data Protection Agency fines the disseminator of AI-generated sexual content €1,200

2025-11-06
HERALDO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated sexual images, where AI was used to create nude bodies combined with real faces, leading to violations of rights and moral harm. The dissemination of these images without consent caused direct harm to the individuals depicted, fulfilling the criteria for harm to communities and violations of rights under the AI Incident definition. The sanction by the data protection authority and judicial measures further confirm the harm and AI system involvement. Hence, this is classified as an AI Incident.

Data Protection Agency imposes Europe's first sanction for generating fake content with AI: €2,000 fine for creating nudes with artificial intelligence

2025-11-06
El HuffPost
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (deepfake technology) to create fraudulent and harmful content involving minors, which has led to legal sanctions. This constitutes a direct harm to individuals' rights and well-being, fulfilling the criteria for an AI Incident. The involvement of AI in generating the harmful content and the resulting legal consequences confirm this classification. The event is not merely a potential risk or a complementary update but a concrete incident with realized harm and regulatory response.

Pioneering fine in Europe for a sexual "deepfake": the AEPD imposes a €2,000 sanction in the Almendralejo case

2025-11-06
Antena3
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create deepfake images without consent, which were disseminated causing harm to the individuals depicted. The harm includes violations of personal data rights and the unlawful treatment of images, which fits the definition of an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The sanction by the AEPD confirms the harm and legal recognition of the incident. Although the AI application itself was not held responsible, the use of AI-generated content directly led to harm. Therefore, this is classified as an AI Incident.

Data Protection Agency imposes €2,000 sanction for creating and disseminating AI-generated nude images of minors

2025-11-06
Público.es
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create deepfake images of minors, which constitutes a violation of personal data protection and privacy rights, a breach of applicable law protecting fundamental rights. The sanction confirms that harm has occurred through the unauthorized dissemination of these images. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a violation of rights and harm to individuals (minors).

Historic Fine Imposed for AI-Generated Images of Minors | Technology

2025-11-06
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create sexual images of minors using real faces, which is a clear violation of human rights and data protection laws. The AI system's development and use directly led to this harm, resulting in legal penalties. Therefore, this qualifies as an AI Incident due to the realized harm involving rights violations and legal consequences.

€1,200 fine for a minor over the dissemination of AI-generated sexual content

2025-11-06
telecinco
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools were used to manipulate images of minors to create sexualized content without consent, which were then distributed, causing harm. This meets the criteria for an AI Incident because the AI system's use directly led to violations of rights and harm to individuals. The legal and regulatory actions taken, including fines and judicial measures, confirm the recognition of harm caused by the AI-generated content. Hence, the event is classified as an AI Incident.

Europe's first fine, issued in Badajoz, for disseminating fake AI-generated nude images of minors

2025-11-06
La Nueva España Digital - LNE.es
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images of minors, which were disseminated causing harm to the victims' rights and dignity. The sanction by the AEPD is a response to this harm. The AI system's use directly led to violations of personal rights and the distribution of harmful content. The event meets the criteria for an AI Incident as it involves realized harm caused by the use of an AI system.

Data Protection Agency issues fine over an AI-generated fake nude

2025-11-07
La Opinion A Coruña - laopinioncoruna.es
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (fake nude images) that caused harm by violating privacy and personal rights, leading to legal sanctions. The AI system's use in generating and spreading these images directly caused harm to the individuals involved, meeting the criteria for an AI Incident under violations of human rights and harm to communities. Therefore, this is classified as an AI Incident.

Almendralejo in the spotlight after the Data Protection Agency's first sanction for disseminating AI-generated nudes of minors

2025-11-06
El Periódico Extremadura
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake nude images of minors, which is a direct use of AI systems (deepfake technology). The dissemination of these images has caused harm to the individuals involved, including violations of personal data and privacy rights, which are protected under applicable law. The imposition of a fine by the data protection authority confirms the recognition of harm and legal breach. Although the penal case concluded with educational measures, the administrative sanction reflects ongoing harm and legal response. Hence, this is an AI Incident as the AI system's use directly led to violations of rights and harm to individuals.

AEPD imposes sanction for disseminating AI-generated sexual content in Almendralejo

2025-11-06
Región Digital
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate manipulated sexual images of minors, causing violations of fundamental rights and moral integrity, which are harms under the AI Incident definition. The dissemination of such content has led to legal sanctions and criminal responsibility, confirming realized harm. The AI system's use in creating and spreading this content is central to the incident, making it an AI Incident rather than a hazard or complementary information.

Spain Fines Individual for Sharing AI-Generated Sexual Images of Minors

2025-11-06
Head Topics
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content that caused harm by violating privacy and rights of minors, a vulnerable group. The dissemination of such harmful AI-generated images constitutes a clear breach of legal and ethical standards, resulting in realized harm. The involvement of the AI system in creating the images and the subsequent legal action confirms the direct link to harm. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

€2,000 fine for the minor who faked nudes with AI in Almendralejo

2025-11-06
Radio Interior
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create manipulated images without consent, which were then shared on social media, causing harm to the privacy and rights of minors. The sanction by the AEPD confirms the harm and legal recognition of the violation. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to individuals (minors) through the creation and dissemination of false nude images. Therefore, the event is classified as an AI Incident.

Pioneering sanction: Data Protection Agency fines the dissemination of AI-generated nudes of minors

2025-11-06
Ara en Castellano
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images that misuse the likeness of minors, leading to the dissemination of illegal and harmful content. The AI system's use in generating these images directly caused harm to the minors' rights and moral integrity, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The legal sanctions and investigations confirm the harm has materialized and the AI system's role is pivotal in causing it.

Protecting against AI manipulations | Editorial

2025-11-08
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The article describes a concrete event where AI-generated manipulated images of minors were created and distributed, leading to legal sanctions and recognized harm to the victims. The involvement of generative AI in producing deepfakes is explicit, and the harm includes violation of rights and potential psychological and reputational damage to the minors. This fits the definition of an AI Incident because the AI system's use directly led to harm and legal consequences. The article also discusses regulatory responses and challenges but the primary focus is on the realized harm from the AI misuse.

Spreading a nude (even a fake one) does not go unpunished: Spain sanctions the disseminator of an AI-made sexual image

2025-11-07
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to generate and disseminate non-consensual sexual images (deepfakes), which constitutes a violation of personal data and privacy rights, a recognized harm under the framework. The sanction by the data protection authority confirms that harm has occurred and that the AI system's misuse was pivotal. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

The horror of a family victimized by AI-created nude images: a 14-year-old girl was affected

2025-11-07
infobae
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create manipulated nude images of minors, which were then disseminated, causing direct psychological harm and violation of rights to the victims. The AI system's use directly led to harm (psychological and social) to individuals, fulfilling the criteria for an AI Incident. The legal sanction and societal response are complementary but do not change the primary classification. Therefore, this event is best classified as an AI Incident due to the direct harm caused by AI-generated content.

A town of 34,000 in Badajoz shows Europe how to act against AI-fuelled sexual offences targeting minors

2025-11-07
3D Juegos
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create manipulated sexual images of minors (deepfakes) using real faces, which is a direct violation of fundamental rights and data protection laws. The harm is realized as the images were generated and distributed, constituting a breach of rights and potentially causing harm to the individuals depicted. The involvement of AI in generating the illicit content and the resulting legal sanction confirm the direct link to harm. Hence, this is an AI Incident rather than a hazard or complementary information.

The AEPD imposes Europe's first sanction for disseminating AI-generated sexual content

2025-11-07
Valencia Noticias
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate manipulated sexual images, which were then disseminated causing harm to minors' privacy and dignity. The sanction imposed by the AEPD confirms the legal recognition of harm caused by AI misuse. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to individuals. The event is not merely a potential risk or complementary information but a realized harm with legal consequences.

Spain's first fine for parents over nudes of high-school classmates created with artificial intelligence by their son

2025-11-07
lavozdelsur.es
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate fake nude images of minors without consent, which were then disseminated, causing harm to the victims' rights and privacy. The administrative sanction confirms the harm and legal breach. The AI system's role is pivotal in creating the harmful content. Therefore, this is an AI Incident due to realized harm (violation of rights and unauthorized data processing) directly linked to the AI system's use.

Parents fined because their son disseminated AI-generated sexual images of classmates

2025-11-07
Artículo 14
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating sexual images without consent, which were then distributed, causing harm to the victims' dignity, privacy, and rights. The involvement of the Spanish Data Protection Agency and the legal sanctions confirm that harm has materialized. The AI system's use directly led to violations of fundamental rights and harm to individuals and communities, fitting the definition of an AI Incident. The event is not merely a potential risk or complementary information but a realized harm caused by AI misuse.

Proof of how dangerous artificial intelligence is: you won't believe what a young boy did

2025-11-07
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI to create and disseminate fake nude images of minors, which is a clear violation of rights and causes harm to the individuals depicted. The involvement of AI in generating the harmful content and the resulting legal penalties confirm that the AI system's use directly led to harm. This fits the definition of an AI Incident as it involves violations of human rights and harm to individuals through the use of AI.

Spain Issues Fine over AI-Generated Fake Images

2025-11-07
Haberler
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create fake images of minors, which were then shared unlawfully, causing harm to the individuals depicted and violating their rights. The involvement of AI in generating the harmful content and the resulting legal consequences demonstrate direct harm linked to the AI system's use. This meets the criteria for an AI Incident as defined, involving violations of human rights and harm to individuals through AI-generated content.

Parents fined in Spain over children's sexual AI-generated content | World News

2025-11-07
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create fake nude images of underage girls, which were then shared without consent, constituting a violation of rights and causing harm to the children involved. The involvement of AI in generating harmful content that led to legal sanctions and social harm meets the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's use is central to the incident.

Fine in Spain for the family of a child who shared a sexual image made with AI

2025-11-07
KIBRIS POSTASI
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate fake nude images of minors, which were then shared illegally, causing harm to the children depicted and their communities. The legal actions and fines imposed are a direct consequence of the AI-generated content's harmful impact. This meets the criteria for an AI Incident because the AI system's use directly led to violations of rights and harm to individuals and communities. The harm is realized, not just potential, and the AI system's role is pivotal in the incident.

A first: fine for sexual AI-generated content

2025-11-07
Diken
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create sexual content involving a minor, which was then shared illegally, causing harm and legal consequences. The AI system's use directly led to violations of laws protecting minors and their rights, fulfilling the criteria for an AI Incident under violations of human rights and legal protections. The imposition of fines and court rulings confirms that harm has materialized and is linked to the AI system's use.

A first in the EU: a child fined on grounds of being "contrary to morality"

2025-11-09
Haber7.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create fake nude images of minors, which were then shared without consent, causing harm to the individuals depicted and violating their rights. The involvement of AI in generating the harmful content and the subsequent legal penalties demonstrate direct harm caused by the AI system's use. This meets the criteria for an AI Incident as defined, involving violations of rights and harm to individuals through AI-generated content.