AI-Generated Sexualized Images of Sara Sálamo Spark Outrage in Spain

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Spanish actress Sara Sálamo publicly denounced the use of Grok, an AI system on X.com, to generate sexualized images of her without consent. The incident highlights the misuse of AI to manipulate and objectify women, causing reputational harm and violating personal rights. The controversy has prompted calls for stronger controls.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system Grok is explicitly mentioned as generating sexualized images without consent, directly leading to harm in the form of violation of personal rights and sexual objectification. The event involves the use of AI to create manipulated content that harms individuals, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The harm is ongoing and realized, not merely potential, and the AI system's role is pivotal in causing this harm.[AI generated]
AI principles
Respect of human rights; Privacy & data governance; Safety; Fairness; Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Reputational; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Actress Sara Sálamo, partner of Isco, speaks out after being sexualized in an AI-generated photo: "Without your consent"

2026-01-02
MARCA
Women in bikinis without permission: the dangerous viral trend using Grok to sexualize images with AI - lavozdelsur.es

2026-01-02
lavozdelsur.es
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Grok, an AI system, to generate sexualized images of women without their consent, which is a misuse of AI technology. The harm is realized and ongoing, including violations of rights and public humiliation, which fits the definition of an AI Incident. The involvement of the AI system is direct in the generation of harmful content, and the harm includes violation of rights and harm to communities. The event is not merely a potential risk or a complementary update but a clear case of harm caused by AI misuse.
Actress Sara Sálamo, partner of Isco, erupts after being sexualized in this AI-generated photograph

2026-01-04
Diario Sport
Why's our monitor labelling this an incident or hazard?
The AI system Grok was used to generate a manipulated image that sexualized Sara Sálamo without her consent. This manipulation is a direct use of AI leading to harm, specifically a violation of her rights and personal dignity, which is a recognized form of harm under the framework. The event describes realized harm caused by the AI system's outputs, not just a potential risk. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.
Sara Sálamo lashes out after being sexualized in this AI-generated photograph: "This is not about technology but about power"

2026-01-04
20minutos.es - Últimas Noticias
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a manipulated image of a person without consent, sexualizing her and modifying her image and body. This is a clear violation of personal rights and can be classified as harm under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized, not just potential, as the actress publicly denounces the impact on her and her family. Therefore, this event qualifies as an AI Incident.
Sara Sálamo speaks out after being sexualized with AI: "What is serious is not the fake image, it is how little outrage it causes"

2026-01-04
La Voz de Galicia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated manipulated images that have been used to sexualize and objectify the actress without her consent. This misuse of AI has directly led to harm in the form of violation of her rights and harm to her reputation and dignity. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in creating the false images. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Sara Sálamo lashes out after being sexualized with an AI-created image: "This is not about technology but about power"

2026-01-05
El Mundo
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a manipulated image of a person in a sexualized manner without consent, which is a clear violation of rights and constitutes harm to the individual. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to the person). The event is not merely a potential risk or a general discussion about AI technology but a realized harm caused by AI-generated content.