Deepfake Video Falsely Links Peso Pluma and Anitta

The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

A deepfake video of Mexican singer Peso Pluma falsely depicts him saying he relied on Anitta after splitting from Nicki Nicole. The AI-manipulated footage, featuring synthesized audio and altered visuals, went viral on social media, creating public confusion and reputational risk and prompting calls to verify content before sharing.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to generate a fake video (deepfake) that falsely portrays Peso Pluma making statements he never made. This misinformation has led to public controversy and personal harm to the individual, which qualifies as harm to communities and individuals. Since the AI-generated content directly led to reputational harm and social disruption, this event meets the criteria for an AI Incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation; Digital security

Affected stakeholders
Other; General public

Harm types
Reputational; Human or fundamental rights; Psychological; Public interest

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard

Peso Pluma in a romantic controversy after falling victim to AI | El Imparcial

2024-08-19
EL IMPARCIAL | News from Mexico and the world
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video (deepfake) that falsely portrays Peso Pluma making statements he never made. This misinformation has led to public controversy and personal harm to the individual, which qualifies as harm to communities and individuals. Since the AI-generated content directly led to reputational harm and social disruption, this event meets the criteria for an AI Incident.
The Peso Pluma video made with artificial intelligence in which he discusses his breakup with Nicki Nicole

2024-08-19
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate synthetic audio and manipulate video content, which constitutes use of an AI system. The event involves the malicious use of AI to create false content (a deepfake) that could mislead viewers. However, the article states that the false statements do not currently harm the individuals' images or reputations. Since misinformation is being disseminated but no significant harm (such as reputational damage, violation of rights, or community harm) is reported, this does not meet the threshold for an AI Incident, nor does the article describe a plausible future harm scenario beyond the current misinformation. This event is therefore best classified as Complementary Information: it provides context and a warning about AI-generated deepfakes and advises caution in believing online content.
Peso Pluma falls victim to AI with video in which he talks about Anitta - El Diario NY

2024-08-21
El Diario Nueva York
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create manipulated video and images (deepfakes) that falsely represent a public figure, leading to misinformation and reputational harm. This constitutes a violation of rights (privacy and possibly defamation), which falls under harm category (c), violations of human rights or breach of obligations protecting fundamental rights. Since the AI-generated content has already been disseminated and caused controversy, the harm is realized, making this an AI Incident rather than a hazard or complementary information.