AI-Generated Deepfake Video Impersonates Carlos Slim in Investment Scam

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A deepfake video that used AI to manipulate the image and voice of businessman Carlos Slim circulated on social media, promoting a fraudulent investment app. The Mexican financial authority Condusef warned the public about the scam, highlighting the risks of AI-enabled deception and financial fraud.[AI generated]

Why's our monitor labelling this an incident or hazard?

The use of AI to create fraudulent videos that mislead people into investment scams constitutes an AI Incident because the AI system's use has directly led to harm (financial fraud) to individuals. The alert from Condusef confirms the presence of realized harm through AI-enabled deception.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Financial and insurance services; Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers; Business; General public

Harm types
Economic/Property; Reputational; Human or fundamental rights; Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard

Alerta Condusef de uso de IA para efectuar fraudes por medio de videos

2023-11-25
Quadratín Michoacán
Why's our monitor labelling this an incident or hazard?
The use of AI to create fraudulent videos that mislead people into investment scams constitutes an AI Incident because the AI system's use has directly led to harm (financial fraud) to individuals. The alert from Condusef confirms the presence of realized harm through AI-enabled deception.
Alertan sobre video de Slim manipulado con IA

2023-11-24
Horacero
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a manipulated video (deepfake) that misrepresents a public figure to promote a scam. The AI-generated content directly leads to harm by misleading individuals into potentially losing money or exposing personal information. This therefore constitutes an AI Incident: realized harm to individuals and communities from the malicious use of AI-manipulated media.
IA suplantó a Carlos Slim para invitar a invertir en aplicación fraudulenta, alerta Condusef

2023-11-24
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI techniques, specifically deepfake technology using generative adversarial networks, were used to create a fake video and audio of Carlos Slim promoting a fraudulent investment app. This AI-generated content is being used to deceive people into investing money in a scam, which constitutes harm to individuals (financial harm) and a violation of rights. The AI system's use is directly linked to the fraudulent activity and resulting harm, qualifying this as an AI Incident under the framework.
¿Carlos Slim te invita a invertir? Alertan sobre video del empresario manipulado con IA

2023-11-24
El Universal
Why's our monitor labelling this an incident or hazard?
The video involves AI-generated manipulation (deepfake) of a public figure to promote a scam, which can directly lead to financial harm to victims. The AI system's use in creating the fake video is central to the incident, as it enables the deception. Therefore, this event meets the criteria of an AI Incident due to realized harm (fraud and potential financial loss) caused by AI misuse.
Condusef alerta por video donde Carlos Slim invita a invertir

2023-11-25
Milenio.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create a deepfake video that misleads people into potentially harmful financial actions, constituting a violation of rights and harm to individuals and communities. Although the article does not report realized losses, the AI-generated deepfake directly facilitates a scam that could cause financial harm. It therefore qualifies as an AI Incident: the AI system's use has directly led to a harmful event (an attempted fraud) and poses a clear and present risk of harm to people.
Nueva plataforma de inversiones de Slim es falsa; utilizan IA para defraudar

2023-11-27
El Economista
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to manipulate video and audio to create a false message from Carlos Slim promoting a fraudulent investment platform. This manipulation has directly led to financial fraud and extortion, which are harms to individuals and communities. The AI system's role is pivotal in enabling the deception. Therefore, this qualifies as an AI Incident under the definition of harm caused by the use of AI systems.
Condusef alerta sobre fraude de app que suplantó a Carlos Slim

2023-11-25
Medio Tiempo
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake videos impersonating Carlos Slim and Ricardo Salinas to promote fraudulent investment apps. This use of AI directly causes harm by misleading people into scams, fulfilling the criteria for an AI Incident due to realized harm from AI misuse in fraud and impersonation.
¡Cuidado! La Condusef alerta por fraude en un video de Carlos Slim manipulado con IA

2023-11-25
SinEmbargo MX
Why's our monitor labelling this an incident or hazard?
The video was explicitly identified as having been manipulated with AI to alter the voice and face of Carlos Slim, a clear use of deepfake technology. The manipulated video is used to promote a fraudulent investment scheme, which can cause financial harm to people who trust the video and act on it. This constitutes harm to individuals and communities through deception and potential financial loss. Since the harm is occurring and the AI system's use is central to the incident, this is classified as an AI Incident.
Condusef emite alerta: Suplantan a Carlos Slim con IA para estafar a inversores

2023-12-09
sipse.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create a fake video of Carlos Slim to defraud investors, which constitutes direct harm to individuals (financial harm and potential privacy violations). The AI system's misuse is central to the incident, fulfilling the criteria for an AI Incident due to realized harm caused by AI-generated impersonation for fraudulent purposes.
Carlos Slim no busca 'socios': Suplantan rostro y voz del empresario con IA para ESTAFAR a inversores

2023-12-08
El Financiero
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to manipulate the voice and face of Carlos Slim to produce a fraudulent video aimed at scamming investors. This constitutes direct harm to people (financial harm to victims of the scam) caused by the malicious use of an AI system. Therefore, it qualifies as an AI Incident under the definition of harm to people through misuse of AI-generated content.
Carlos Slim no busca 'socios': Suplantan rostro y voz del empresario con IA para estafar a inversores

2023-12-09
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to manipulate voice and facial imagery to create a deceptive video. This AI-generated deepfake is used to perpetrate fraud, causing direct harm to people by tricking them into providing personal information and money. Therefore, it meets the criteria of an AI Incident because the AI system's use directly leads to harm (financial and personal data loss) to individuals.
Suplantan rostro y voz de Carlos Slim con IA y ESTAFAN a inversores

2023-12-09
DineroenImagen
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate a video that impersonates Carlos Slim's face and voice, which is then used to defraud investors. This involves the use of an AI system (deepfake technology) in the malicious use phase, directly leading to harm (financial loss to investors). The harm is realized, not just potential, and the AI system's role is pivotal in enabling the fraud. Hence, it meets the criteria for an AI Incident.
Condusef alerta por uso de voz e imagen de Slim con IA para estafar

2023-12-09
www.xeu.mx
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create a manipulated video (deepfake) of Carlos Slim to perpetrate a scam. The AI system's misuse directly leads to harm by enabling fraud and financial loss to victims. Therefore, this qualifies as an AI Incident because the AI-generated content is central to the harm occurring (fraud and deception).
Suplantan rostro y voz de Carlos Slim con IA para estafar a inversores

2023-12-09
El Heraldo de San Luis Potosi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to manipulate voice and face, which qualifies as an AI system. The AI-generated deepfake video was used maliciously to defraud people, leading to realized harm (financial loss and deception). This fits the definition of an AI Incident as the AI system's use directly led to harm to people (fraud victims). The article also includes warnings and recommendations to avoid harm, but the core event is the fraudulent use of AI-generated content causing harm.