AI-Generated Deepfake Pornography Targets Susanna Griso and Minors in Spain

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems were used to create and disseminate manipulated pornographic images of TV presenter Susanna Griso and numerous minors in Spain, violating their rights and causing emotional harm. The deepfake images, generated without consent, highlight the growing misuse of AI for sexual exploitation and privacy violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated manipulated images used to create fake pornographic content of Susanna Griso and others, which is a clear violation of personal rights and causes harm to the individuals. The AI system's use in creating these images is central to the harm described. This fits the definition of an AI Incident as it involves violations of human rights and harm to individuals caused directly by the use of AI systems.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Safety, Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Women, Children

Harm types
Psychological, Human or fundamental rights, Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Susanna Griso, victim of an AI-generated pornographic montage

2023-09-21
okdiario.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated manipulated images used to create fake pornographic content of Susanna Griso and others, which is a clear violation of personal rights and causes harm to the individuals. The AI system's use in creating these images is central to the harm described. This fits the definition of an AI Incident as it involves violations of human rights and harm to individuals caused directly by the use of AI systems.

Susanna Griso, also a victim of pornography created with AI

2023-09-20
La Vanguardia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake pornographic images of public figures without their consent, which is a clear violation of personal rights and causes harm to the individuals. The article explicitly mentions the use of AI to create these fake images and the resulting harm to the victims. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Susanna Griso was a victim of artificial intelligence at ARCO: "They exploited the frames in which our mouths were open"

2023-09-20
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create manipulated explicit images without consent, which is a violation of human rights and personal dignity. This constitutes harm under the framework's category (c) violations of human rights or breach of obligations protecting fundamental rights. The harm is realized as victims have been identified and the manipulations have been publicly discussed. Although the perpetrators are minors and legal action is limited, the AI's role in causing harm is direct and pivotal. Therefore, this qualifies as an AI Incident. The article also includes commentary on legal and societal responses, but the primary focus is on the harm caused by AI misuse.

Susanna Griso reports that she was the victim of an AI pornographic montage: "They used the footage to fake fellatio"

2023-09-20
eldiario.es
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fake nude images of minors and public figures, which are then disseminated for sexual blackmail and pornography. This involves AI systems generating manipulated content that directly harms individuals' rights and dignity, constituting violations of human rights and harm to communities. The harm is realized, not just potential, as victims have been identified and the content has been distributed. Hence, this is an AI Incident.

Susanna Griso confesses that she was the victim of a pornographic montage: "They used the footage to fake fellatio"

2023-09-20
La Razón
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create pornographic deepfake videos (pornographic montages made with artificial intelligence) involving the presenter, which is a clear example of an AI system's use leading to harm. This harm includes violation of personal rights and dignity, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The event is not merely a potential risk but a realized harm, as the presenter herself experienced it, making it an AI Incident rather than a hazard or complementary information.

Susanna Griso, victim of an AI pornographic montage: "They used the footage to fake fellatio"

2023-09-20
Antena 3
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create pornographic montages simulating nude images of minors and public figures, which have been disseminated, causing harm. This constitutes a violation of rights and harm to communities. The involvement of AI in generating these images and the resulting harm qualifies this as an AI Incident under the framework, as the AI system's use has directly led to harm.

Susanna Griso, victim of a pornographic montage

2023-09-20
Semana
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create manipulated pornographic images (deepfakes) of Susanna Griso without her consent, which is a direct violation of her rights and causes personal harm. The AI system's use here directly led to harm (emotional distress, violation of privacy and rights). This fits the definition of an AI Incident as it involves harm to a person and violation of rights due to AI misuse. The event is not merely a potential risk or a general discussion but a realized harm involving AI-generated content.