AI-Generated Deepfake Video Falsely Reports Death of Mexican Actress

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Mexico, AI-generated deepfake videos and voice cloning were used to falsely announce the death of actress Angelique Boyer, imitating her partner Sebastián Rulli. The incident caused widespread misinformation and emotional distress, highlighting the growing threat of AI-driven fraud and fake news in the country.[AI generated]

Why's our monitor labelling this an incident or hazard?

The video was created using advanced AI to clone a voice and manipulate visuals, which directly caused harm by spreading false information and emotional distress. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities (misinformation and emotional harm).[AI generated]
AI principles
Accountability
Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Psychological
Reputational
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

VIDEO | Did Angelique Boyer die? The truth behind the viral video that shook TikTok - El Imparcial de Oaxaca

2026-04-04
El Imparcial de Oaxaca
Why's our monitor labelling this an incident or hazard?
The video was created using advanced AI to clone a voice and manipulate visuals, which directly caused harm by spreading false information and emotional distress. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities (misinformation and emotional harm).
Did Angelique Boyer die? This is what we know about the Sebastián Rulli video

2026-04-03
Excélsior
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the video uses AI-generated voice to create false content. The event stems from the use of AI to produce misleading information. However, the harm (misinformation and reputational damage) is potential and indirect, as the rumor is false and no direct harm has been reported. The article focuses on the existence and spread of AI-generated fake videos and the risks they pose, which could plausibly lead to harm such as misinformation and social disruption. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
"The love of my life is gone": The truth about the video in which Sebastián Rulli announces the death of Angelique Boyer - El Heraldo de México

2026-04-03
El Heraldo de México
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to create a deepfake video that falsely reports a death, causing misinformation and emotional harm to the community. The AI's use directly led to the harm (spread of false news and public distress). Therefore, it meets the criteria for an AI Incident due to violation of informational integrity and harm to communities through misinformation.
Sebastián Rulli moves audiences with an emotional message to Angelique Boyer after rumors of the actress's death - El Heraldo de México

2026-04-03
El Heraldo de México
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in creating a fake video falsely announcing the death of a public figure, which led to public concern and misinformation spread. This constitutes harm to communities by spreading false information, a recognized form of harm under the framework. Since the harm (public concern and misinformation) has already occurred due to the AI-generated content, this qualifies as an AI Incident rather than a hazard or complementary information.
The false death of Angelique Boyer raises alarms: deepfakes and voice cloning are already fueling fraud in Mexico

2026-04-04
Clic Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems (deepfake technology and voice cloning AI) to create false audio and videos that have been used to deceive people, causing confusion and enabling fraud and extortion. These are direct harms to individuals and communities, including violations of rights and financial harm. The AI systems' use is central to these harms, meeting the criteria for an AI Incident. The article also provides data on the scale of these harms and official warnings, confirming that the harms are realized and ongoing, not just potential.