Non-consensual AI Deepfake Nudes of Influencer Yeri Mua Go Viral

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Influencer Yeri Mua revealed that unidentified individuals used AI to generate and circulate fake explicit images depicting her nude without her consent. The deepfake photos, allegedly spread online by K-pop fans, violated her privacy and caused her emotional distress, highlighting the misuse of AI for non-consensual image manipulation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems generating manipulated intimate images without consent, which is a clear violation of privacy and personal rights. The harm is realized as the victim experiences exposure, harassment, and violation of dignity. The AI system's use in creating and disseminating these images directly led to these harms. This fits the definition of an AI Incident because the AI's role is pivotal in causing the harm, specifically a violation of human rights and dignity (point c).[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Human or fundamental rights, Psychological, Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Yeri Mua is the victim of fraud; intimate images made with AI are leaked, and she reacts

2024-01-25
El Heraldo de México

Yeri Mua is a victim of digital violence; photos were edited with AI to make her appear nude: "I feel exposed"

2024-01-25
infobae
Why's our monitor labelling this an incident or hazard?
An AI system was used to manipulate images of Yeri Mua to create non-consensual fake nude photos, which were then spread online. This use of AI directly led to harm by violating her rights and causing emotional distress, fitting the definition of an AI Incident due to violation of human rights and harm to the individual. The event describes realized harm through digital violence and privacy violation caused by AI-manipulated content.

Yeri Mua has already fallen victim to artificial intelligence; she says her photos were edited to show her nude

2024-01-25
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to edit photos to create fake nude images of Yeri Mua, which were then spread on social media. This use of AI directly led to harm in terms of privacy violation and emotional distress, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The harm is realized, not just potential, as the images are being actively shared and causing distress.

The person who edited Yeri Mua's photos with AI is now known, and she is advised to invoke the Ley Olimpia

2024-01-26
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI to create and distribute explicit images without consent, which constitutes a violation of rights and causes harm to the individual targeted. The AI system's role in editing the photos is central to the harm caused. Additionally, the threats of violence and harassment linked to this AI-generated content further confirm the presence of harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

"Explicit" photos of Yeri Mua generated with artificial intelligence are circulated; this is how she reacts

2024-01-25
El Universal
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated explicit images (deepfakes) of Yeri Mua without her consent. This involves an AI system used maliciously to produce false content that harms the individual's reputation and privacy, constituting a violation of rights and digital violence. The harm is realized and ongoing, as the images are being spread and causing distress. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in generating and distributing harmful content.

Yeri Mua reports that intimate photos of her were created with AI

2024-01-25
Milenio.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create fake intimate photos of Yeri Mua, which are false and intended to damage her image. This is a clear case of AI-generated deepfake content harming an individual's rights and personal dignity. The harm is realized, not merely potential, as the images have been created and spread, causing emotional and reputational damage. Hence, the event is classified as an AI Incident.

Yeri Mua denounces the misuse of AI: "I feel very exposed"

2024-01-25
EL IMPARCIAL
Why's our monitor labelling this an incident or hazard?
An AI system was used to manipulate images of the influencer, creating non-consensual altered content that was spread online. This misuse of AI directly led to harm to the individual's privacy and dignity, fitting the definition of an AI Incident due to violation of rights and harm to the person.

"Explicit" photos of Yeri Mua generated with AI are circulated

2024-01-25
Horacero
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake explicit images (deepfakes) of Yeri Mua without her consent, a misuse of AI technology that harms her reputation and privacy. This is a clear violation of rights and a form of digital violence. The harm is realized, not merely potential, as the images are being actively disseminated and causing distress. Therefore, the event is classified as an AI Incident.