AI-Generated Deepfake Videos of Mexican President Used in Pemex Investment Scam

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos and audio impersonating President Andrés Manuel López Obrador have been circulating online, promoting fraudulent Pemex investment schemes. The Mexican government and financial authorities have warned the public about these scams, highlighting the risks of AI-driven misinformation and the financial harm caused by such deceptive content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated fake videos (deepfakes) of the president spreading false information and scams, which have already caused harm by deceiving a large portion of the population and enabling fraudulent activities. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to communities and individuals through misinformation and fraud.[AI generated]
AI principles
Accountability
Transparency & explainability
Robustness & digital security
Safety
Democracy & human autonomy
Privacy & data governance
Human wellbeing
Respect of human rights

Industries
Government, security, and defence
Financial and insurance services
Media, social platforms, and marketing
Digital security

Affected stakeholders
Consumers
General public

Harm types
Economic/Property
Reputational
Public interest
Psychological
Human or fundamental rights

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard

AMLO robot: López Obrador warns of an AI-powered dirty war against him

2024-04-26
El Mañana de Nuevo Laredo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake videos (deepfakes) of the president spreading false information and scams, which have already caused harm by deceiving a large portion of the population and enabling fraudulent activities. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to communities and individuals through misinformation and fraud.
AMLO warns of AI fraud: fake investment proposal exposed

2024-04-26
El Heraldo de México
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create a simulated voice and image to deceive people into a fraudulent investment, which is a direct harm caused by the AI system's misuse. This fits the definition of an AI Incident because the AI system's use has directly led to harm (fraud) to individuals and society. The event involves the use of AI-generated content for malicious purposes, causing realized harm rather than just a potential risk.
AMLO showed a deepfake video at his Conferencia Mañanera; this is what happened

2024-04-26
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake video generated by AI) that could plausibly lead to harm such as financial fraud or misinformation. However, the article does not report that any harm has actually occurred yet; it mainly discusses the potential risks and the need for public awareness and caution. Therefore, this qualifies as an AI Hazard because the AI-generated deepfake video could plausibly lead to harm, but no direct harm is reported at this time.
VIDEO: AMLO urges people to keep "a sharp eye out" for the use of his AI-generated image to commit fraud

2024-04-26
El Universal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos and audio being used to commit fraud and deceive the public, which is a direct harm to people (harm to communities and individuals). The AI system's use in generating false images and voices is central to the harm described. The harm is realized as fraudulent videos are actively spreading and causing deception. Hence, this meets the definition of an AI Incident due to direct harm caused by AI misuse.
AMLO warns of AI-generated scams and false information about Pemex

2024-04-26
Milenio.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated false content (a video of the president inviting investment in Pemex) being used to perpetrate scams. This involves the use of AI systems to create misleading content that directly harms people by enabling fraud. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and financial scams.
AMLO warns about scams using artificial intelligence

2024-04-26
DEBATE
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a synthetic audio impersonating the president's voice, which was then used in a scam. This directly leads to harm by deceiving people (harm to communities and individuals through fraud). The president's warning highlights the ongoing and growing risk of such AI-generated disinformation, especially in the political context. Since the harm is occurring (fraud attempts) and AI is pivotal in enabling this, this qualifies as an AI Incident.
AMLO warns of an AI scam using his image to solicit investment in Pemex: what minimum investment are the scammers asking for?

2024-04-26
El Economista
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fake videos and audio of public figures to promote fraudulent investment opportunities, which constitutes a direct harm to individuals through financial scams. The AI system's misuse in generating deceptive content is central to the event, fulfilling the criteria for an AI Incident due to harm to property and communities: the AI system was actively misused to cause harm, rather than merely posing a potential risk.
AMLO warns of AI scams using his image; 70% of Mexicans do not realize it is fake

2024-04-26
Aristegui Noticias
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic fake video of the president, which is being used to deceive people into scams. This constitutes a direct AI Incident because the AI-generated content is actively causing harm by facilitating fraud attempts. The harm is realized (scams are occurring or being attempted), and the AI system's role is pivotal in creating the deceptive content. Therefore, this event qualifies as an AI Incident.
"Keep a sharp eye out": AMLO warns of a scam video using his image and voice to solicit investment in Pemex

2024-04-26
Vanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create a manipulated video and audio of the president, which is being used to scam people into investing in a fraudulent scheme. This constitutes direct harm to individuals (financial harm) and harm to communities through misinformation and fraud. The AI system's role in generating the fake video and audio is pivotal to the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI-generated content has directly led to harm.
AMLO warns that the video circulating with his image and voice soliciting investment in Pemex is a fraud

2024-04-26
Revista Proceso
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create a deepfake video and audio impersonating the president, which is then used to defraud people by promoting a fake investment opportunity. This is a direct harm caused by the malicious use of an AI system, fulfilling the criteria for an AI Incident due to realized harm (fraud and deception).
AMLO warns about fraudulent use of AI

2024-04-26
El Diario de Juárez
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to create fraudulent audio and video content that is misleading the public during an election period. This use of AI has directly led to harm in the form of misinformation and deception affecting political processes and public trust, which constitutes harm to communities and a violation of rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm, not just a potential or future risk.
AMLO warns about fraud involving artificial intelligence - Quadratín

2024-04-26
Quadratin Jalisco
Why's our monitor labelling this an incident or hazard?
An AI system is involved, as the fraudulent video and audio were generated with AI (deepfake technology), creating false content that could directly lead to financial fraud. Because the article reports a credible risk of harm rather than harm that has already occurred, this qualifies as an AI Hazard rather than an AI Incident. The main focus is the warning about potential fraud, not a follow-up or response, so it is neither Complementary Information nor Unrelated.
We have to prevent fraud; 70% of Mexicans are not familiar with AI: AMLO

2024-04-26
Tiempo
Why's our monitor labelling this an incident or hazard?
The article highlights the plausible future harm from AI-generated fake content leading to fraud and deception, but does not report any actual harm or incident occurring yet. The AI system's involvement is in the potential creation of realistic fake media that could mislead people. Since no realized harm is described, but a credible risk is emphasized, this qualifies as an AI Hazard rather than an AI Incident. It is not merely general AI news because the focus is on the risk of harm and the need for mitigation.
AMLO warns about fraudulent use of AI in politics

2024-04-26
sipse.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fraudulent audio and video content that misleads the public and affects political candidates. The misinformation is actively circulating and causing harm by deceiving people, as noted by the president's warning and the examples given. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and political manipulation.
"Invest in Pemex and you will earn a lot of money": the fake AMLO video prompting fraud warnings

2024-04-26
Periódico AM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fake videos and audio of the President, which are used to scam people into fraudulent investments. This involves an AI system's use leading directly to harm (financial fraud). The harm is realized, not just potential, as people are being defrauded. Therefore, this qualifies as an AI Incident under the framework, specifically harm to people through fraud enabled by AI-generated deepfakes.
Fake AI-generated video inviting investment in Pemex is denounced

2024-04-27
Diario Basta!
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deepfake technology) to create manipulated video and audio impersonating a public figure to promote a fake investment scheme, which constitutes a direct harm (fraud) to people. The harm is realized as the video is circulating and has prompted official denials and warnings. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and individuals through deception and potential financial loss.
AMLO warns of AI scams using his image; 70% of Mexicans do not realize it is fake

2024-04-26
El Heraldo de San Luis Potosi
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the event centers on AI-generated synthetic media (images and videos) that are used to create false content impersonating the president. Although no specific harm has yet been reported as occurring, the president explicitly warns about the risk of fraud and scams using such AI-generated content. This constitutes a plausible risk of harm (financial fraud, misinformation) caused by AI systems. Therefore, this event is best classified as an AI Hazard, since it highlights the credible potential for AI-generated content to cause harm, but does not report an actual incident of harm yet.
AMLO warns of FRAUD in a VIDEO using his image that invites investment in Pemex

2024-04-26
Diario Puntual
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video created using AI that impersonates the president to promote a fraudulent investment scheme. This misuse of AI technology has directly led to harm by deceiving people and potentially causing financial losses. The president's warning and the description of the video as a scam confirm that harm is occurring. Therefore, this qualifies as an AI Incident because the AI system's malicious use has directly caused harm to people and communities.