AI-Generated Deepfake Falsely Reports Death of Actress Adela Noriega

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated deepfake video falsely announced the death of Mexican actress Adela Noriega, using manipulated footage of TV host Lili Estefan. The video spread rapidly on social media, causing confusion and concern among fans, and required official denials to counter the AI-driven misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI to create manipulated media (deepfake-like content) that falsely reports a death, which is a form of misinformation causing harm to the community by spreading false information and potentially damaging reputations. Since the AI-generated content has already been disseminated and caused confusion and concern, this constitutes an AI Incident due to harm to communities through misinformation. The AI system's use in generating and spreading false news directly led to this harm.[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Robustness & digital security
Safety
Transparency & explainability

Industries
Media, social platforms, and marketing
Arts, entertainment, and recreation

Affected stakeholders
General public

Harm types
Reputational
Psychological
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Did Adela Noriega die? News about the actress is alarming her fans in Mexico

2025-06-12
PULZO
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create manipulated media (deepfake-like content) that falsely reports a death, which is a form of misinformation causing harm to the community by spreading false information and potentially damaging reputations. Since the AI-generated content has already been disseminated and caused confusion and concern, this constitutes an AI Incident due to harm to communities through misinformation. The AI system's use in generating and spreading false news directly led to this harm.
Did Adela Noriega die of cancer at age 55? Here is what is known

2025-06-12
El Mañana de Nuevo Laredo
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI-generated deepfake content to spread misinformation about a public figure's death. While no actual harm to the individual is reported, the event involves AI systems used to create deceptive media that could plausibly lead to harm such as reputational damage or public misinformation. However, since no direct or realized harm has occurred yet, and the main focus is on the phenomenon of AI-enabled misinformation, this fits best as Complementary Information rather than an Incident or Hazard.
The last photo supposedly published by Adela Noriega, which is said to have been made with artificial intelligence

2025-06-12
El Heraldo de México
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated or AI-edited content (photo and video) that falsely claims the death of a public figure, which is a form of misinformation causing harm to communities. The AI system's use in creating and spreading this fake news has directly led to harm through misinformation and social disruption. Therefore, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Adela Noriega's last interview, with Cristina in 2003, is available in full on YouTube; she spoke about her retirement

2025-06-13
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated false news (misinformation) about a celebrity's death, which is an AI system's output causing misinformation. While misinformation can harm communities, the article does not indicate that this misinformation caused direct or significant harm such as panic, rights violations, or other serious consequences. Therefore, it does not meet the threshold for an AI Incident. It is more accurately classified as Complementary Information because it provides context about the AI-generated false news and its impact on public memory and discourse, without describing a concrete incident of harm or a plausible future hazard.
Did Adela Noriega die? Artificial intelligence 'killed' her, and many believed her false death

2025-06-12
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create fabricated video content (deepfake) that falsely claims the death of a public figure. This misinformation has caused confusion and emotional impact on the community, constituting harm to communities through the spread of false information. Since the AI-generated content directly led to this harm, this qualifies as an AI Incident under the definition of harm to communities caused by AI-generated misinformation.
Did actress Adela Noriega die of cancer? Here is what is known!

2025-06-12
Vanguardia
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a fake video that spread false information about a celebrity's death. This constitutes the use of AI to create misleading content. However, the article does not report any realized harm such as injury, rights violations, or significant disruption resulting from this misinformation. The event is primarily about the spread of AI-generated fake news, which is a recognized risk but here remains at the level of misinformation without confirmed direct harm. Therefore, it is best classified as Complementary Information, as it provides context on AI misuse and misinformation risks without a confirmed AI Incident or AI Hazard occurring.
Did Adela Noriega die? The truth about the viral report

2025-06-13
Diario El Telégrafo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to create a deepfake video that falsely reported a death, which is a clear case of AI-generated misinformation causing harm to the community and individuals' reputations. The misinformation spread widely and required official denials, indicating realized harm. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing harm through misinformation dissemination.
Is it true that Adela Noriega died? What we know about the rumor circulating on social media

2025-06-12
Periodico Correo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (manipulated video created with AI) that led to the spread of false information causing social concern. This constitutes harm to communities by spreading misinformation and causing emotional distress to fans. Since the AI-generated content directly led to this harm, it qualifies as an AI Incident under the definition of harm to communities due to misinformation dissemination.
Did Adela Noriega really die? A video on the internet is causing confusion

2025-06-12
Diario de Morelos
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating a manipulated video (deepfake) that falsely reported a death, which is a form of misinformation. However, the harm is limited to confusion and concern without evidence of direct injury, rights violations, or other significant harms. Since the harm is potential and social in nature, and no actual incident of harm has occurred, this qualifies as an AI Hazard due to the plausible risk of misinformation causing harm if such content spreads widely or is believed. The article focuses on the false video and its spread, not on a realized harm incident or a response to a past incident, so it is not Complementary Information or an AI Incident.
Martha Figueroa denies Adela Noriega's death and reveals her whereabouts

2025-06-13
EL IMPARCIAL | Noticias de México y el mundo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of a video apparently generated by AI spreading false information about a person's death. This misinformation could harm the reputation and cause distress, which is a form of harm to communities or individuals. However, the article focuses on debunking the false claim and clarifying the truth, with no indication that the misinformation caused significant or direct harm. Therefore, this is best classified as Complementary Information, as it provides context and response to a prior AI-generated misinformation event rather than reporting a new AI Incident or Hazard.
In Florida, in Mexico, or dead? Theories about Adela Noriega's whereabouts

2025-06-13
Listin diario
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create a fabricated video with a synthetic voice and image, spreading false information about a person's death. This constitutes a violation of rights and causes harm to the community by spreading misinformation. Since the AI-generated falsehood has already been disseminated and caused harm, this qualifies as an AI Incident under the framework, specifically under harm to communities and violation of rights.
Why is Adela Noriega trending on social media? Everything known about the Mexican actress

2025-06-13
LaRepublica.pe
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of artificial intelligence to create a fabricated video (a deepfake) that falsely reports the death of Adela Noriega. This misinformation spread widely on social media, causing confusion and emotional impact. The AI system's use in generating this false content directly led to harm in the form of misinformation and social disruption, fitting the definition of an AI Incident involving harm to communities. Therefore, this event qualifies as an AI Incident.
She is alive! Adela Noriega's whereabouts revealed after rumors of her death

2025-06-13
Vanguardia
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create a fake video that spread false information about a person's death. However, the article does not describe any actual harm caused by this AI-generated content, nor does it indicate any direct or indirect injury, violation of rights, or other significant harm resulting from the AI system's use. The main focus is on debunking the misinformation and clarifying the truth, which constitutes complementary information about AI misuse and its societal impact rather than a new incident or hazard.
The truth behind Adela Noriega's supposed death

2025-06-13
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to generate a manipulated video falsely announcing the death of a public figure. This use of AI directly leads to harm to communities by spreading misinformation and potentially causing emotional distress or reputational damage. Therefore, it qualifies as an AI Incident due to the realized harm from AI-generated disinformation.
Adela Noriega is alive; where she is and how she looks today revealed (VIDEO)

2025-06-14
El Diario de Yucatán
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating a fake video that spread false information about Adela Noriega's death, which constitutes harm to the community through misinformation. Since the AI-generated content caused actual misinformation and public concern, this qualifies as an AI Incident. The article's main focus is on the false AI-generated video and its impact, not just on the response or complementary information. Therefore, this event is classified as an AI Incident due to the realized harm from AI-generated misinformation.
Actress Adela Noriega's death denied after video made with AI

2025-06-13
www.eluniversal.com.co
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to create a deepfake video, a clear example of AI-generated content producing misinformation. Although the article does not report injury or rights violations, the video directly caused the spread of false information and public alarm, which constitutes harm to the community. Because the AI system's use directly led to this realized harm, the event qualifies as an AI Incident.
Adela Noriega's last message on her social media; nothing has been heard from her

2025-06-13
El Mañana de Nuevo Laredo
Why's our monitor labelling this an incident or hazard?
The article describes a false video generated with AI that spread misinformation about a person's death. The AI system's use directly led to harm in the form of misinformation and potential emotional distress to the public and the individual's community. This constitutes harm to communities and individuals through false information dissemination, fitting the definition of an AI Incident.
The nerve..! Actress Adela Noriega has been 'killed' on social media yet again

2025-06-13
Noticia al Dia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the video spreading false news about Adela Noriega's death was created using AI. This AI-generated misinformation has led to harm in the form of false information spreading on social media, which can be considered harm to communities and reputational harm to the individual. Since the AI system's use directly led to the dissemination of false and harmful content, this qualifies as an AI Incident under the framework.
Adela Noriega confirmed alive after spread of fake AI-generated video

2025-06-13
Noticias de Norte de Santander, Colombia y el mundo
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions that the video was created using artificial intelligence, which led to the spread of false information about a person's death. This misinformation caused harm to the community by generating alarm and confusion among the public and fans. Since the AI-generated content directly led to this harm, it qualifies as an AI Incident under the definition of harm to communities through misinformation dissemination.
News of Adela Noriega's 'death' circulates

2025-06-14
Mi Diario
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to generate manipulated videos and fake posts that spread false information about a person's death. This constitutes harm to the community and individuals through misinformation and impersonation, which can be considered harm to communities or violation of rights (e.g., right to truthful information and protection from defamation). Since the misinformation is actively spreading and causing reputational damage, this qualifies as an AI Incident. The AI system's use directly led to the harm of misinformation dissemination and reputational harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
ANDI president José Elías Moreno reveals what he knows about Adela Noriega's 'death'

2025-06-16
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated misinformation about a celebrity's death, but the article primarily addresses the clarification and denial of this false news. There is no evidence of injury, rights violations, or other significant harms caused by the AI-generated content. Therefore, this is not an AI Incident. Since no plausible future harm or risk is discussed beyond the misinformation event itself, it does not qualify as an AI Hazard. The article serves to provide context and clarification about the misinformation, which fits the definition of Complementary Information as it supports understanding of AI's role in misinformation without reporting new harm or risk.