Deepfake video of Arturo Elías Ayub used in oil investment scam


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Mexican entrepreneur Arturo Elías Ayub warned that a sophisticated AI-generated deepfake video impersonating him was urging people to invest in a fraudulent oil scheme that promised to double investors' money in 24 hours. He cautioned the public not to fall for such AI-driven scams.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions that AI was used to create a video deepfake of Arturo Elías Ayub, which falsely invites people to invest in a fraudulent business. This misuse of AI has directly caused harm by deceiving individuals, constituting fraud and a violation of rights. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to people (financial fraud) and harm to communities (misinformation and deception).[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing; Democracy & human autonomy

Industries
Energy, raw materials, and utilities; Financial and insurance services; Media, social platforms, and marketing

Affected stakeholders
Consumers; General public

Harm types
Economic/Property; Reputational; Human or fundamental rights; Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Arturo Elías Ayub warned about the use of artificial intelligence to commit fraud: "Don't fall for those scams"

2024-01-26
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI was used to create a video deepfake of Arturo Elías Ayub, which falsely invites people to invest in a fraudulent business. This misuse of AI has directly caused harm by deceiving individuals, constituting fraud and a violation of rights. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to people (financial fraud) and harm to communities (misinformation and deception).

Arturo Elías Ayub was a victim of artificial intelligence; the same thing happened to him as to Claudia Sheinbaum

2024-01-27
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create a video and audio impersonation of Arturo Elías Ayub to promote a fraudulent investment scheme. This led to actual financial harm to people who were scammed. The AI system's use directly contributed to the harm by enabling the creation of realistic fake content used in the scam. Therefore, this qualifies as an AI Incident due to realized harm (financial fraud) caused by AI-generated deepfake content.

A video made with artificial intelligence is a scam: Arturo Elías Ayub

2024-01-25
Milenio.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a deepfake video impersonating a public figure to promote a fraudulent investment, which poses a direct risk of financial harm to individuals who might be deceived. This qualifies as an AI Incident because the AI-generated content was deployed in an active investment scam that put people at direct risk of financial loss, and the victim of the impersonation is warning the public about it.