AI Deepfake Scandal: Fraudulent Ad Uses Cloned Voice of Emma García


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Telecinco investigation by Silvia Álamo revealed that Emma García's image and artificially cloned voice were misused in a fake clothing ad. The AI-generated video, circulating on social media, constitutes identity theft and fraud, potentially deceiving consumers while violating her fundamental rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI-generated voice and image cloning to create a fraudulent advertisement without the subject's permission. This misuse of AI has directly caused harm by deceiving the public and violating the individual's rights, fitting the definition of an AI Incident. The harm is realized, not just potential, as the fake content is circulating and misleading people.[AI generated]
AI principles
Respect of human rights; Privacy & data governance; Transparency & explainability; Accountability; Safety; Robustness & digital security

Industries
Media, social platforms, and marketing; Consumer products

Affected stakeholders
Consumers; Women

Harm types
Reputational; Economic/Property; Human or fundamental rights; Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Emma García warns of the new AI-generated scam circulating on social media: "I recognize myself in that voice. It's very dangerous"

2025-05-04
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated voice and image cloning to create a fraudulent advertisement without the subject's permission. This misuse of AI has directly caused harm by deceiving the public and violating the individual's rights, fitting the definition of an AI Incident. The harm is realized, not just potential, as the fake content is circulating and misleading people.

Emma García reports AI identity theft used to record a fake advertisement: "That's not me"

2025-05-04
MARCA
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning) to create a synthetic voice impersonating a real person without consent, leading to a violation of personal rights and identity theft. The AI system's use directly led to harm (violation of rights and potential reputational damage). Therefore, this qualifies as an AI Incident under the definition of violations of human rights or breach of obligations intended to protect fundamental rights.

Emma García, outraged after being used in a fake AI-created advertisement: "It's my voice, but it's not me"

2025-05-04
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated fake video using Emma García's voice and image without consent to promote a scam. This misuse of AI has directly caused harm by enabling fraud and violating personal rights. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use.

Emma García, victim of a fake advertisement made with artificial intelligence: "It's my voice, but it's not me"

2025-05-04
Telecinco
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone Emma García's voice and image to produce a fake advertisement for selling products, which she did not authorize. This misuse of AI has directly caused harm by misleading consumers and violating the presenter's rights. The AI system's role is pivotal in enabling this fraudulent impersonation. Therefore, this qualifies as an AI Incident due to realized harm involving identity fraud and potential consumer harm.

Emma García reveals how her identity was impersonated using artificial intelligence

2025-05-05
La Opinión - El Correo de Zamora
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone Emma García's voice and create a video that misrepresents her involvement in a commercial campaign, which is likely fraudulent. This misuse of AI has directly led to harm by deceiving people into potentially purchasing products under false pretenses, thus harming consumers and the individual's rights. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in identity fraud and misinformation.

Presenter Emma García, victim of a scam: "It's very dangerous"

2025-05-06
El Diario de Ibiza
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate a fake video and voice of Emma García, constituting a misuse of AI technology for identity theft and fraud. This has directly led to harm in terms of violation of rights (image and privacy) and potential fraud against the public. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm through fraudulent impersonation and rights violations.