AI Deepfake Scam Misuses Alejandro Fantino’s Identity

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Alejandro Fantino reported that an AI-generated deepfake video using his cloned voice and likeness was used to promote a fraudulent investment scheme on social media. He alerted his followers to the deceptive practice, and the incident has raised concerns over the misuse of AI for identity fraud and the violation of his rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system to generate a synthetic video and voice of Alejandro Fantino to promote a scam. This misuse of AI directly causes harm by deceiving people, which can lead to financial loss and reputational damage for individuals. It therefore qualifies as an AI Incident due to the realized harm caused by maliciously used AI-generated content.[AI generated]
AI principles
Transparency & explainability
Privacy & data governance
Respect of human rights
Accountability
Robustness & digital security
Safety

Industries
Media, social platforms, and marketing
Financial and insurance services
Digital security

Affected stakeholders
Other
General public

Harm types
Economic/Property
Reputational
Human or fundamental rights
Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard

Alejandro Fantino's voice was used to promote a scam on social media

2025-04-10
Los Andes
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a synthetic video and voice of Alejandro Fantino to promote a scam. This misuse of AI directly causes harm by deceiving people, which can lead to financial loss and reputational damage for individuals. It therefore qualifies as an AI Incident due to the realized harm caused by maliciously used AI-generated content.
Fantino reported that his voice was cloned for a scam

2025-04-13
La Banda Diario
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice and image cloning (deepfake technology) to create manipulated content that was used in a scam. This misuse of AI directly caused harm by misleading the public and damaging the victim's reputation, fitting the definition of an AI Incident due to violations of rights and harm caused to communities through fraud.
Alejandro Fantino denounces the fraudulent use of his voice and image through artificial intelligence

2025-04-13
Cadena 3 Argentina
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems for voice and face cloning (deepfake technology) to create misleading content that was widely disseminated, causing harm by promoting fraudulent investments. This constitutes an AI Incident because the AI system's use directly led to harm (economic fraud and identity misuse). The harm is realized, not merely potential, and involves violations of rights and harm to communities through deception.
Alejandro Fantino reported that his voice was cloned with artificial intelligence for an alleged scam - MDZ Online

2025-04-13
MdzOnline
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-based voice and image cloning (deepfake technology) to create a fraudulent video impersonating Alejandro Fantino. This AI misuse has directly caused harm by enabling a scam that could deceive and financially harm individuals, as well as damage the reputation of the person impersonated. Therefore, it meets the criteria for an AI Incident due to realized harm from malicious AI use.
Fantino warned that his image and voice were cloned with artificial intelligence to carry out a scam

2025-04-14
La Nueva
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice and image cloning (deepfake technology) to create manipulated content that directly facilitates a financial scam. This misuse of AI has directly led to harm by deceiving the public and potentially causing financial losses, as well as eroding trust in media and public figures. It therefore meets the criteria for an AI Incident due to realized harm caused by the AI system's malicious use.