Claudia Sheinbaum warns of AI deepfake video used in investment scam


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Presidential candidate Claudia Sheinbaum has denounced an AI-generated deepfake video that uses her likeness and voice to solicit 4,000-peso investments with false promises of significant returns. The manipulated content is circulating widely on social media and messaging apps, prompting her to report the fraud and warn the public against the scheme.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI to create a fake video impersonating Claudia Sheinbaum, which is being used to solicit money fraudulently. This constitutes a direct harm to people (financial fraud) and harm to communities (misinformation and deception). The AI system's role is pivotal as it enables the creation of realistic fake content used for fraudulent purposes. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Financial and insurance services; Government, security, and defence; Digital security

Affected stakeholders
Consumers; General public

Harm types
Economic/Property; Reputational; Public interest; Human or fundamental rights

Severity
AI incident

Business function:
Marketing and advertising

AI system task:
Content generation


Articles about this incident or hazard


Claudia Sheinbaum reports AI impersonation of her identity

2024-01-25
El Heraldo de México
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create a fake video impersonating Claudia Sheinbaum, which is being used to solicit money fraudulently. This constitutes a direct harm to people (financial fraud) and harm to communities (misinformation and deception). The AI system's role is pivotal as it enables the creation of realistic fake content used for fraudulent purposes. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

It's fake! Sheinbaum warns of an AI-generated video asking for money

2024-01-25
Excélsior
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI-generated video falsely uses the voice and image of Claudia Sheinbaum to solicit money, which is a form of fraud and misinformation. This misuse of AI technology has already caused harm by deceiving people, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The harm is realized, not just potential, as the fraudulent video is circulating and prompting warnings and possible legal action.

Beware of the fake video in which Sheinbaum asks people to invest in a financial platform

2024-01-25
Sopitas.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create a deepfake video that directly facilitates a financial scam, which is a form of harm to people and communities. The AI system's use in generating the fake video and audio is central to the incident, as it enables the fraud to be more convincing and widespread. This meets the definition of an AI Incident because the AI system's use has directly led to harm (fraud and potential financial loss).

Sheinbaum warns of fake video used for extortion

2024-01-25
El Diario de Juárez
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of artificial intelligence to create a fake video of a public figure used to extort money from people. This AI-generated content is causing direct harm by misleading and defrauding individuals, fulfilling the criteria for an AI Incident due to harm to communities and individuals. The candidate's warnings and reports to social media platforms confirm the harm is occurring, not just a potential risk.

Extortion with a fake video: Sheinbaum issues warning

2024-01-25
sipse.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI-manipulated video of Claudia Sheinbaum is circulating, used to extort money by promising false financial gains. This is a direct harm caused by the use of an AI system to create deceptive content leading to financial fraud. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm (extortion and fraud).

Claudia Sheinbaum falls victim to artificial intelligence in this 4,000-peso fraud

2024-01-25
Canal 44
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI is being abused to create videos of Claudia Sheinbaum's image and voice in order to solicit money fraudulently. This misuse of AI has directly caused financial harm to victims of the scam, fulfilling the criteria of an AI Incident: harm to people (financial injury) caused by AI-generated content. The harm is realized, not merely potential, as the fraud is actively occurring. Therefore, this qualifies as an AI Incident.

Sheinbaum reports the use of her image and voice in an AI-generated video

2024-01-25
Periódico de México
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to create a deepfake video and audio impersonating a political figure to promote a fraudulent financial platform. The harm is realized as the manipulated content is circulating widely, misleading people and potentially causing financial harm and reputational damage. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violation of rights. The event is not merely a potential hazard or complementary information but a concrete incident of AI misuse causing harm.

Claudia Sheinbaum falls victim to artificial intelligence in this 4,000-peso fraud

2024-01-25
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake videos and audio impersonating public figures to commit fraud, which has directly caused harm to people by misleading them into financial scams. This fits the definition of an AI Incident because the AI system's use has directly led to harm (financial fraud and deception). The article explicitly states that these fraudulent videos are circulating and causing harm, not just a potential risk. Therefore, the classification is AI Incident.

Sheinbaum reports AI-generated video used to defraud people

2024-01-24
La Silla Rota
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI-generated video using the candidate's image and voice is being used to defraud people by asking for money with false promises of financial gain. The AI system's use directly led to harm (financial fraud), fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and involves violation of rights and harm to individuals. Hence, the classification is AI Incident.