AI Deepfake Scam Impersonates George Clooney

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An Argentine woman was deceived over the course of a month-long online interaction in which a scammer used AI-generated deepfake videos and seemingly verified profiles to impersonate George Clooney. Believing the persona was real, she lost more than $15,000 through fraudulent requests for fan club access and job opportunities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI was used to create videos and a voice likeness of George Clooney to deceive the victim, which directly caused financial harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (financial loss). The involvement of AI in generating realistic fake content was pivotal to the scam's success and the resulting harm. Therefore, this event qualifies as an AI Incident.[AI generated]
AI principles
Transparency & explainability; Privacy & data governance; Respect of human rights; Safety; Robustness & digital security; Accountability; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property; Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Argentine woman believed she was talking to George Clooney and ended up losing $15,000: it was AI

2025-05-13
BioBioChile
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create videos and a voice likeness of George Clooney to deceive the victim, which directly caused financial harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (financial loss). The involvement of AI in generating realistic fake content was pivotal to the scam's success and the resulting harm. Therefore, this event qualifies as an AI Incident.
An Argentine woman was deceived by a fake George Clooney created with AI and lost $15,000

2025-05-12
Rosario3
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create edited videos of George Clooney's face speaking to the victim, which is a clear example of an AI system generating deceptive content. The victim was financially defrauded as a direct consequence of this AI-generated impersonation. Therefore, this event meets the criteria for an AI Incident because the AI system's use directly led to harm (financial loss) to a person.
Unbelievable: Woman was scammed by a man who posed as George Clooney using AI, losing thousands of dollars

2025-05-13
Noticias de Venezuela y el Mundo - Caraota Digital
Why's our monitor labelling this an incident or hazard?
The article describes a case where AI-generated videos were used to impersonate George Clooney, enabling a scam that caused the victim to lose approximately $15,000. The AI system's role in generating fake videos was pivotal in deceiving the victim, leading to financial harm. This fits the definition of an AI Incident, as the AI system's use directly led to harm to a person (financial loss).
She was scammed with artificial intelligence by a fake George Clooney and lost more than $15,000

2025-05-14
Perfil
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence to create a false identity of George Clooney, which was used to deceive and defraud the victim. The AI system's use directly led to financial harm to the victim, fulfilling the criteria for an AI Incident under harm to a person. Therefore, this event is classified as an AI Incident.
An Argentine woman believed she was talking to George Clooney and lost $15,000; how she was deceived

2025-05-14
Diario EL PAIS Uruguay
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to recreate the image and voice of George Clooney, which was used to deceive the victim into transferring money. This constitutes direct harm to the person (financial loss) caused by the use of an AI system in the scam. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to a person.
[VIDEO] She lost $15,000! She thought George Clooney was writing to her, but it was a scam

2025-05-14
Red Uno
Why's our monitor labelling this an incident or hazard?
The event describes a complex scam using AI-generated videos to impersonate a celebrity, which directly caused financial harm to a person. The AI system's role in generating fake videos was pivotal in deceiving the victim and causing the loss of $15,000. This fits the definition of an AI Incident as the AI system's use directly led to harm to a person (financial loss).
An Argentine woman was scammed by a fake George Clooney: she lost $15,000

2025-05-14
Primera Edición
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create videos of George Clooney speaking to the victim, which were used to deceive her into transferring money. This constitutes direct involvement of an AI system in causing harm (financial loss) to a person. Therefore, this qualifies as an AI Incident under the definition of an event where the use of an AI system has directly or indirectly led to harm to a person.
An Argentine woman was scammed by a George Clooney created with AI

2025-05-12
eldiariodecarlospaz.com.ar
Why's our monitor labelling this an incident or hazard?
The article describes a case where an AI-generated deepfake video was used to impersonate a celebrity and scam a woman out of over $15,000. The AI system's role in generating the realistic video was pivotal in enabling the fraud, which constitutes a clear harm to the victim (financial harm). Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in the scam.
An Argentine woman believed she was dating George Clooney and lost $15,000

2025-05-13
Diario Río Negro
Why's our monitor labelling this an incident or hazard?
The use of AI to generate fake videos of a celebrity to deceive and defraud a person constitutes an AI system's involvement in causing harm. The harm here is financial loss due to the scam, which is a significant harm to the individual. Since the AI-generated content directly led to the victim being scammed, this qualifies as an AI Incident under the framework, specifically harm to a person (financial harm) caused by the AI system's outputs.
She believed she was talking to George Clooney and lost $15,000: how she was deceived with AI

2025-05-13
Panamericana Televisión
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate manipulated videos and synthetic voices to impersonate a famous person, which directly caused financial harm (loss of $15,000) to the victim. This fits the definition of an AI Incident because the AI system's use in the scam directly led to harm to a person (financial loss).
'George Clooney' scams an Argentine woman out of €13,000: this is the latest scam made with AI

2025-05-14
Diario Sport
Why's our monitor labelling this an incident or hazard?
The use of AI to generate a fake video of George Clooney to impersonate him and scam a woman out of 13,000 euros constitutes direct harm caused by the AI system's use. The AI-generated content was central to the deception and the resulting financial harm, fulfilling the criteria for an AI Incident under harm to persons. Therefore, this event is classified as an AI Incident.
She was scammed by a fake George Clooney made with AI: she lost $15,000

2025-05-12
Clarin
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated images to impersonate a celebrity, which directly led to financial harm to the victim. The AI system's role was pivotal in creating convincing fake photos that facilitated the scam. This constitutes an AI Incident because the AI system's use directly caused harm to a person (financial loss) through deception and fraud.
The woman who lost $15,000 to a fake George Clooney made with AI

2025-05-13
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create a fake George Clooney, which was used to scam the victim out of money. This involves the use of an AI system (likely a generative AI creating realistic images or messages) to impersonate a celebrity and deceive the victim, resulting in direct financial harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in the scam.
George Clooney scammed her, or at least a fake one made with artificial intelligence; he took 300,000 pesos from her

2025-05-12
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a false identity (George Clooney) that was used to deceive and defraud a person, causing direct financial harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (financial loss and emotional manipulation).
An Argentine woman was scammed by a fake George Clooney and lost $15,000

2025-05-12
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create personalized videos that impersonate a real person, which directly led to financial harm (loss of $15,000) to the victim. The AI system's use in generating convincing fake content was pivotal in enabling the scam and the resulting harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in the scam.
A fake George Clooney scammed her out of $300

2025-05-12
Diario Uno
Why's our monitor labelling this an incident or hazard?
The article describes a clear case where an AI system was used to generate a fake video of George Clooney, which was instrumental in deceiving the victim and causing financial harm. The AI's role in creating the video directly contributed to the scam, fulfilling the criteria for an AI Incident due to harm to a person (financial loss).
A fake George Clooney scammed a woman who transferred $15,000 to him

2025-05-12
Filo News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that artificial intelligence was used to recreate the figure of George Clooney in videos that convinced the victim to send money. The harm is direct financial loss due to deception enabled by AI-generated content. This fits the definition of an AI Incident as the AI system's use directly led to harm to a person (financial harm).
It wasn't George Clooney: the online scam that deceived a woman with his image

2025-05-12
Urgente 24
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a realistic fake video of George Clooney, which was used to deceive and defraud a person. This constitutes direct harm to the victim's property (financial loss) caused by the malicious use of AI-generated content. Therefore, it qualifies as an AI Incident due to the realized harm caused by the AI system's use in the scam.
An Argentine woman was scammed by a fake George Clooney and lost $15,000

2025-05-12
TiempoSur
Why's our monitor labelling this an incident or hazard?
The incident clearly involves an AI system (AI-generated video) used maliciously to impersonate a celebrity and defraud a person, causing direct harm (financial loss). This fits the definition of an AI Incident as the AI system's use directly led to harm to a person (financial harm).
Unbelievable: a fake George Clooney scammed a woman out of $15,000

2025-05-12
Diario EL SOL
Why's our monitor labelling this an incident or hazard?
The article describes a scam where an AI-generated video was used to impersonate a celebrity, leading to a woman being defrauded of money. The AI system's use directly led to harm (financial loss), fitting the definition of an AI Incident involving harm to a person. Therefore, this event qualifies as an AI Incident.
Fake George Clooney, created with artificial intelligence, woos an Argentine woman to scam her

2025-05-13
El Heraldo de México
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to generate realistic videos and messages that deceived the victim into believing she was interacting with George Clooney, leading to a financial scam where she lost $15,000. This is a direct harm caused by the use of an AI system in the scam, fulfilling the criteria for an AI Incident. The harm is realized and significant, involving deception and financial loss, with AI playing a central role in enabling the fraud.
A woman loses $15,000 after being deceived by a fake George Clooney created with AI

2025-05-13
Listin diario
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to create a fake persona (likely using AI-generated content or deepfake technology) to deceive and defraud a person, resulting in significant financial harm. The AI system's use directly led to the harm experienced by the victim. Therefore, this qualifies as an AI Incident under the definition of harm to a person caused by the use of an AI system.
"He called me 'my love'": the fake George Clooney who stole $15,000 from an Argentine woman

2025-05-13
La Nacion
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos and voice to impersonate a celebrity and deceive a victim into transferring money. The harm (financial loss due to fraud) has already occurred, and the AI system's role in generating realistic fake content was pivotal in enabling the scam. Therefore, this is an AI Incident as the AI system's use directly led to harm to a person.
A fake George Clooney made with AI scams her and she loses more than €13,000

2025-05-13
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake videos and images of George Clooney, which were used to deceive and defraud a person, causing significant financial harm. The AI-generated content directly contributed to the harm by enabling the impersonation and scam. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to a person (financial loss) and harm to the community (victim of fraud).
An Argentine woman transferred US$300 to a man who posed as George Clooney with an astonishing video

2025-05-13
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article describes a clear AI Incident where AI-generated videos were used maliciously to impersonate a celebrity and defraud a victim of money. The AI system's outputs (deepfake videos) directly caused harm by enabling the scam. This fits the definition of an AI Incident as it caused harm to a person through malicious use of AI-generated content.
Fake George Clooney created with artificial intelligence scammed a woman, who lost thousands of dollars

2025-05-13
Ambito
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create realistic fake videos (deepfakes) of a celebrity to deceive and defraud a person, resulting in financial harm. The AI system's use directly led to harm (financial loss) to the victim, fitting the definition of an AI Incident due to harm to a person or group of people.
She was scammed by a fake George Clooney generated with AI and lost $15,000

2025-05-13
Cadena 3 Argentina
Why's our monitor labelling this an incident or hazard?
The use of AI-generated videos to impersonate a celebrity and deceive a victim into transferring money constitutes direct involvement of an AI system in causing harm. The harm here is financial loss due to fraud, which is a significant harm to the individual (harm to person). Since the AI system's outputs were central to the deception and consequent harm, this qualifies as an AI Incident under the framework.
An Argentine woman believed she was in a relationship with George Clooney, but was scammed out of $15,000

2025-05-13
mdz
Why's our monitor labelling this an incident or hazard?
The article describes a case where AI-generated content (videos and voice) was used maliciously to impersonate a celebrity and scam a woman out of money. The AI system's outputs directly led to the harm (financial loss) experienced by the victim. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm to a person through deception and fraud.
Woman loses $15,000 after being deceived by an AI Clooney

2025-05-13
Periódico El Día
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated content (deepfake videos) to impersonate a public figure and deceive a victim, resulting in significant financial harm. The AI system's development and use directly led to the harm (financial loss) experienced by the victim, fitting the definition of an AI Incident under harm to persons (financial harm). Therefore, this event qualifies as an AI Incident.
Fake George Clooney created with AI woos and scams an Argentine woman: she lost $15,000

2025-05-12
LaRepublica.pe
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated videos and voice impersonation to commit a romance scam, which directly caused financial loss and emotional harm to the victim. The AI system's role in creating realistic fake content was pivotal in enabling the fraud. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in the scam.
Woman falls victim to a scam by a fake George Clooney who spoke using AI

2025-05-13
Milenio.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake images and voices to impersonate a celebrity, which directly led to financial harm (a form of harm to the person) through a scam. The AI system's use was instrumental in the deception and consequent monetary loss, fulfilling the criteria for an AI Incident as the harm has already occurred and the AI system's role is pivotal in causing it.
Fake AI-generated George Clooney woos and scams a woman

2025-05-13
Excélsior
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake photos and messages impersonating a celebrity, which directly caused financial harm to the victim through fraud. This fits the definition of an AI Incident because the AI system's use directly led to harm (financial loss) to a person. The harm is realized, not just potential, and the AI system's role is pivotal in enabling the deception and resulting scam.
Fake AI-generated George Clooney woos and scams a woman: 'He told me he loved me'

2025-05-13
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fake photos and messages impersonating George Clooney, which were used to scam a woman out of her savings. The harm (financial loss and emotional manipulation) has already occurred, and the AI system's role was pivotal in enabling the fraud. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to a person.
Naive Argentine woman is scammed by an "actor George Clooney" created with artificial intelligence

2025-05-12
La Nueva Radio YA
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create deepfake videos and audios impersonating George Clooney. The AI-generated content was used maliciously to deceive the victim, leading to a direct financial loss of $15,000. This constitutes harm to a person through fraudulent use of AI-generated content, fitting the definition of an AI Incident due to the realized harm caused by the AI system's use in the scam.
The ordeal of an innocent Argentine fan: she believed she was talking to George Clooney and was scammed with AI

2025-05-12
Tiempo de San Juan
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate synthetic voice and messages impersonating George Clooney, which was used maliciously to defraud a person. The AI system's use directly led to harm (financial loss and emotional distress) to the victim, fulfilling the criteria for an AI Incident. The AI system's role is pivotal as it enabled the sophisticated deception that caused the harm.
George Clooney asked her for money and she lost more than €13,000: "He told me he loved me"

2025-05-13
Diario de Pontevedra
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the perpetrator used AI technology to simulate the voice and appearance of George Clooney in manipulated video calls, which directly facilitated the scam and the victim's financial loss. This constitutes an AI Incident because the AI system's use directly led to harm (economic loss and emotional harm) to the victim. The harm is realized, not just potential, and involves violation of trust and financial harm, fitting the definition of an AI Incident.
What we feared has come true: an AI-generated George Clooney ruined a woman's life

2025-05-14
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate fake images and messages impersonating a famous person, which directly led to financial and emotional harm to the victim. This fits the definition of an AI Incident because the AI system's use directly caused harm to a person. The harm includes economic loss, emotional distress, and reputational damage, all of which are recognized harms under the framework. Therefore, this event qualifies as an AI Incident.
Another AI scam! Woman loses $15,000 after being deceived by a fake 'George Clooney'

2025-05-15
TVN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos to impersonate George Clooney, which was central to the scam that caused financial and emotional harm to the victim. The AI system's outputs (deepfake videos) were used maliciously to deceive and manipulate the victim, leading to direct harm. This fits the definition of an AI Incident because the AI system's use directly led to injury (financial loss and emotional harm) and harm to the community (through fraudulent activity).