
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
AI-generated deepfake audio clips imitating political candidates have circulated widely in Mexico ahead of the 2024 elections, causing confusion, misinformation, and public distrust. Experts warn that these clips are difficult to detect or verify, undermining the information ecosystem and potentially distorting voter decisions during a critical electoral period.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems used explicitly to create deepfake audio that simulates political figures' voices. These AI-generated clips have already been disseminated, causing misinformation and public distrust, which are harms to communities and to the information ecosystem. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm in the political and social context. The article also mentions ongoing and proposed regulatory and platform responses, but its primary focus is the realized harm caused by AI-generated deepfakes in elections, not the responses or potential future harm.[AI generated]