Deepfake scam uses AI to impersonate Princess Leonor and defraud Latin Americans

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminals use AI-generated deepfake videos and audio of Princess Leonor on TikTok and Facebook to promise massive money transfers. Victims, mainly in Latin America, are lured into paying fees (€100–200) under pretexts of taxes or transfer costs. The sophisticated AI impersonation has defrauded dozens, causing significant financial losses.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to create realistic fake videos and audio (deepfakes) that are used maliciously to defraud people, causing direct financial harm (over €800 per victim) and violating rights related to image and identity. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident due to realized harm (financial loss) and violation of rights (image misuse).[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property; Psychological; Reputational

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

They impersonate Princess Leonor to steal more than €800 from you: the scam spreading across TikTok and other social networks

2024-12-04
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create realistic fake videos and audio (deepfakes) that are used maliciously to defraud people, causing direct financial harm (over €800 per victim) and violating rights related to image and identity. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident due to realized harm (financial loss) and violation of rights (image misuse).

Alert raised over a social media scam involving Princess Leonor: scammers impersonate her using artificial intelligence

2024-12-04
Faro de Vigo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create realistic fake videos impersonating a public figure, which directly leads to financial harm to victims. The AI system's use in generating deceptive content is a key factor in the scam's success, causing harm to individuals. Therefore, this qualifies as an AI Incident due to realized harm (financial loss) caused by AI-enabled impersonation and fraud.

The fake Princess Leonor scam: "If I deposited €500 they would give me double. I was too trusting"

2024-12-04
Antena3
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning to impersonate a public figure, which directly leads to financial harm (a form of harm to persons) through fraudulent schemes. The AI system's use is central to the scam's effectiveness, making it an AI Incident as the AI's development and use have directly led to realized harm (financial loss) to victims.

They posed as Princess Leonor and scammed her: "She told me I was going to earn $100,000"

2024-12-03
Telecinco
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create a voice and face impersonating a public figure, which was used to deceive victims into a financial scam. This involves an AI system's use leading directly to harm (financial loss) and exploitation of vulnerable individuals. Therefore, it meets the criteria for an AI Incident as the AI system's use directly led to harm to persons (financial and psychological harm) and violation of rights.

"Creí que hablaba con Leonor y ahora estoy endeudada": así suplantan a la Princesa de Asturias para realizar estafas en Latinoamérica

2024-12-03
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create fake videos and profiles impersonating a public figure to deceive and defraud people. The scam has directly caused financial harm to victims, fulfilling the criteria for an AI Incident under harm to persons (financial injury and debt). The AI system's use in generating realistic fake content is pivotal to the scam's success and harm. The article documents realized harm, not just potential harm, and the AI system's role is central to the incident.

El timo del "tesoro escondido": el origen de la estafa de la falsa Leonor tiene más de 200 años

2024-12-04
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated content to impersonate a public figure and deceive people into financial loss, which constitutes harm to individuals. The AI system's use in generating fake personas and messages directly contributes to the scam's effectiveness and resulting harm. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-enabled deception.

Fake Princess Leonor profiles run scams in Latin America with promises of money

2024-12-04
www.diariolibre.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology to create false images of Princess Leonor, which are used to lure victims into scams promising economic aid. This use of AI directly facilitates harm to people by enabling fraudulent schemes that cause financial loss. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (financial fraud) to individuals.

Princess Leonor's identity impersonated in a scam

2024-12-04
okdiario.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is used to create or animate fake profiles of Princess Leonor to scam people. The scam involves direct financial harm to victims who are deceived into paying money. The AI system's role is pivotal in enabling the impersonation and interaction that leads to the harm. Hence, this is an AI Incident due to realized harm caused by the AI system's use in fraudulent activity.

Princess Leonor, the new hook for social media scams in Latin America

2024-12-03
LaSexta
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating videos that impersonate a public figure to commit fraud, leading to direct financial harm to individuals. The AI-generated content is central to the scam's operation, fulfilling the criteria for an AI Incident due to realized harm (financial loss) caused by the AI system's outputs. Although TikTok has removed the videos and no cases have been detected in Spain, the harm to Latin American victims is ongoing, confirming this as an AI Incident rather than a hazard or complementary information.

Fake Princess Leonor profiles run scams in Latin America with promises of money

2024-12-04
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
The use of AI-generated deepfake images to impersonate a public figure and deceive victims constitutes the involvement of an AI system. The scam has directly led to harm by misleading people and attempting to extract money from them, fulfilling the criteria of an AI Incident under harm to persons (a) and harm to communities (d). The harm is realized as victims have been targeted and some have lost money or were close to losing money. Therefore, this event qualifies as an AI Incident.

Fake Princess Leonor profiles run scams in Latin America

2024-12-04
Diario La Tribuna
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake images of a public figure, which are then used to perpetrate a scam causing direct financial harm to victims. The AI-generated content is pivotal in deceiving victims and enabling the fraud. The harm (financial loss) has already occurred or is ongoing, fulfilling the criteria for an AI Incident. The AI system's use is not speculative or potential but actively contributing to the harm.

Royal promises and digital traps: the incredible case of the fake Princess Leonor that almost cost $100,000

2024-12-04
Valencia Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies (deepfakes and advanced tools) to impersonate a public figure and deceive victims, leading to actual financial losses. This meets the definition of an AI Incident because the AI system's use in the scam directly leads to harm (economic loss) to individuals. The harm is realized, not just potential, and involves violation of trust and exploitation of human vulnerability through AI-generated content.

Leonor and Latin America: inside the new scam involving the Princess of Asturias

2024-12-04
epe.es
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools are used to alter images and audio to impersonate a public figure, leading to victims losing money through fraudulent schemes. This is a direct harm to people (financial injury) caused by the malicious use of AI-generated content. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (financial loss) to individuals. The harm is realized, not just potential, and the AI system's role is pivotal in enabling the deception.

A TikTok scam uses an AI-generated Princess Leonor to steal thousands of euros in Latin America

2024-12-03
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The scam involves the use of AI-generated content (videos and voice) to impersonate a public figure, which directly leads to financial harm (loss of hundreds of euros) to individuals. The AI system's use in generating realistic fake videos and voices is pivotal to the scam's success and the resulting harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to people (financial injury) through deception and fraud.

A TikTok scam uses an AI-generated Princess Leonor to steal thousands of euros in Latin America

2024-12-03
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake videos and voices impersonating a public figure, which are then used to perpetrate a scam causing direct financial harm to victims. The AI-generated content is pivotal in making the scam convincing and effective. The harm (financial loss) has already occurred, fulfilling the criteria for an AI Incident. The involvement of AI in the scam's execution and the direct harm to people justify this classification.