AI-Generated Deepfake Audio Clips Fuel Election Uncertainty in Mexico


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake audio clips imitating political candidates have circulated widely in Mexico ahead of the 2024 elections, sowing confusion, misinformation, and public distrust. Experts warn that these clips are difficult to detect or verify, undermining the information ecosystem and potentially distorting voter decisions during a critical electoral period.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly used to create deepfake audio that simulates political figures' voices. These AI-generated audios have already been disseminated, causing misinformation and public distrust, which are harms to communities and the information ecosystem. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm in the political and social context. The article also mentions ongoing and proposed regulatory and platform responses, but the primary focus is on the realized harm caused by AI-generated deepfakes in elections, not just on responses or potential future harm.[AI generated]
AI principles
Accountability
Robustness & digital security
Safety
Transparency & explainability
Respect of human rights
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Government, security, and defence
Digital security

Affected stakeholders
General public

Harm types
Public interest
Reputational
Psychological

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


Audios creados artificialmente siembran incertidumbre en las elecciones de México

2023-12-01
Gestión

Sí, buscarán que usted caiga. Se llaman "deepfakes" y llegaron a la política mexicana

2023-12-01
SinEmbargo MX
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake audio that impersonates political figures, which has already caused misinformation and public distrust during an ongoing election cycle. This meets the criteria for an AI Incident because the AI's use has directly led to harm to communities by contaminating the information environment and influencing political decisions. The article describes realized harm, not just potential risk, and discusses specific cases where deepfake audios have been circulated and believed by some, fulfilling the definition of an AI Incident.

Audios creados a través de la inteligencia artificial siembran incertidumbre en las elecciones de México

2023-11-30
Vanguardia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake audios that impersonate political figures, which have been disseminated in the public domain during an election period. The harm is realized as these audios cause misinformation, distrust, and manipulation of the electoral process, which harms communities and violates rights to accurate information. The article provides concrete examples of such audios circulating and their impact, fulfilling the criteria for an AI Incident due to direct harm caused by AI-generated content in a critical societal context.

Audios creados artificialmente siembran incertidumbre en las elecciones de México

2023-11-30
San Diego Union-Tribune
Why's our monitor labelling this an incident or hazard?
The event involves the explicitly stated use of AI systems to generate deepfake audio that impersonates political candidates. The use of these AI-generated audio clips has directly led to misinformation and confusion among voters, which harms the information ecosystem and the democratic process, thus harming communities. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm. The article also mentions governance responses, but the primary focus is on the realized harm caused by AI-generated deepfakes in elections, not just on responses or potential risks.

Audios creados con IA siembran incertidumbre en las elecciones del 2024 en México

2023-11-30
publimetro
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems that generate deepfake audio content, which is being used to mislead and distort political information during an election. This use of AI has directly led to harm in the form of misinformation and potential manipulation of voters, which constitutes harm to communities and the information ecosystem. The article describes actual circulation and impact of these AI-generated audios, not just potential risks, thus qualifying as an AI Incident. The discussion of regulatory and technological responses is complementary but secondary to the main focus on realized harm from AI misuse.

Audios creados con IA causan incertidumbre previo a las elecciones en 2024

2023-11-30
Periódico AM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating deepfake audio that impersonates political candidates, which is directly causing harm by spreading misinformation and undermining public trust in elections. This harm to the information ecosystem and communities is a clear example of an AI Incident as defined. The involvement of AI is central and the harm is realized, not merely potential. The article also references responses such as legislative proposals and regulatory actions, but the primary focus is on the harm caused by the AI-generated content itself.