AI-Generated Deepfake of Mercedes Milá Used in Fraudulent Health Product Promotion on TikTok

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Mercedes Milá was targeted by a deepfake video on TikTok in which AI was used to clone her image and voice to promote a fraudulent diabetes cure. Despite her complaints, TikTok refused to remove the video, which has garnered over half a million views, raising concerns about AI misuse and public harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event clearly involves an AI system generating manipulated video and audio content (deepfake) impersonating a public figure to spread false health claims, which is a direct violation of rights and a form of harm to communities and individuals. The harm is realized as the video has been viewed over half a million times and promotes a dangerous false remedy. This fits the definition of an AI Incident because the AI system's use has directly led to harm through misinformation and identity misuse. The platform's inadequate response further compounds the harm.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Safety; Accountability; Robustness & digital security

Industries
Media, social platforms, and marketing; Healthcare, drugs, and biotechnology; Consumer products

Affected stakeholders
Consumers; General public

Harm types
Reputational; Economic/Property; Psychological; Public interest; Human or fundamental rights; Physical (injury)

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation; Organisation/recommenders


Articles about this incident or hazard

Mercedes Milá lashes out at TikTok over the spread of an AI-made video: "It is a dangerous manipulation"

2025-08-02
MARCA
Mercedes Milá denounces the impersonation of her identity using artificial intelligence: her dramatic message

2025-08-02
El Español
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create manipulated videos and voice reproductions of Mercedes Milá, which are then used to promote fraudulent health products. This constitutes a direct harm to individuals (potential victims of the scam) and harm to the community through misinformation and fraud. The misuse of AI-generated content to deceive and defraud people fits the definition of an AI Incident, as the AI system's use has directly led to harm. The article also references prior similar incidents causing financial harm, reinforcing the classification.
No, Mercedes Milá did not nearly die or tear up her Spanish passport: the video that has caused a stir on social media

2025-08-02
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to generate a fraudulent video (deepfake) that falsely attributes harmful health claims to a public figure, which constitutes misinformation and a violation of rights. The harm is realized: the video has been widely viewed and could mislead people regarding diabetes treatment, posing health risks and causing reputational damage. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content.
The desperate plea of Mercedes Milá, victim of an impersonation using artificial intelligence

2025-08-02
El Confidencial
Why's our monitor labelling this an incident or hazard?
The use of AI to generate a fake video with Mercedes Milá's likeness and voice to promote a product with false health claims directly leads to harm by misleading consumers and potentially causing health risks. This constitutes a violation of rights and harm to communities through misinformation and fraudulent advertising. The AI system's use in this scam is central to the harm, making this an AI Incident.
Mercedes Milá, victim of a fake AI video that "promises" miracles against diabetes: "It is very serious"

2025-08-01
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to create a deepfake video that manipulates a person's likeness and voice to promote a fraudulent health product. This misuse of AI has directly led to harm by misleading the public and potentially causing health and financial damage. The harm is realized as the video has millions of views and continues to circulate despite reporting. Therefore, this qualifies as an AI Incident due to direct harm to health and potential violation of rights through fraudulent use of AI-generated content.
Mercedes Milá has had enough of AI and social media and demands the immediate removal of a fake TikTok video

2025-08-01
La Razón
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create a fake video that misrepresents a person and promotes a fraudulent product, which is causing harm to individuals (especially vulnerable groups) and violating rights related to personal image and advertising law. The harm is realized, not just potential, as the video is circulating and misleading people. The involvement of AI in generating the deepfake is explicit, and the harm includes violation of rights and harm to communities through fraud and misinformation. Therefore, this qualifies as an AI Incident.
Mercedes Milá speaks out after finding a fake AI-made video of herself on TikTok: "It is very serious"

2025-08-02
El HuffPost
Why's our monitor labelling this an incident or hazard?
The event describes an AI-generated deepfake video misusing Mercedes Milá's image and voice to promote a false health product, which is a direct violation of rights and causes harm to public health and consumer trust. The harm is realized as the video has over half a million views and promotes misleading health claims, which can cause injury or harm to people. The AI system's role in generating manipulated content is pivotal to the incident. TikTok's refusal to remove the video despite the complaint further contributes to the harm. Hence, this is an AI Incident involving direct harm caused by AI misuse.
Mercedes Milá denounces the serious fraudulent use of her image with artificial intelligence

2025-08-03
FormulaTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to generate manipulated videos (deepfakes) that falsely represent a person endorsing a health product, which is a direct misuse of AI technology. The harm includes misinformation that can affect public health and the individual's rights to their image and reputation. The fraudulent AI-generated content has already been disseminated widely, causing realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse and the resulting health and rights violations.
Mercedes Milá explodes at TikTok over a fake AI-created video: "They are using my image to sell products"

2025-08-01
El Periódico
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating manipulated video and audio content (deepfake) of a public figure without consent, used to promote a product fraudulently. This misuse has directly led to reputational harm and potential consumer harm, fitting the definition of an AI Incident under violations of rights and harm to communities. The platform's inadequate response further contributes to the harm. Hence, the classification as AI Incident is appropriate.