Georgia-Based AI Deepfake Scam Defrauds €33 Million

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A criminal network in Georgia used AI-generated deepfakes of celebrities and financial experts to create fraudulent ads on Facebook and Google. This scam deceived thousands of savers in Europe and Canada, including pensioners and small business owners, resulting in nearly €33 million in losses.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of deepfake technology, an AI system capable of generating realistic fake videos, to impersonate famous individuals and deceive victims into fraudulent investments. This use of AI directly led to financial harm amounting to millions of euros, fulfilling the criteria for an AI Incident due to realized harm caused by the AI system's malicious use.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing; Democracy & human autonomy

Industries
Financial and insurance services; Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers; Business

Harm types
Economic/Property; Reputational; Psychological; Public interest; Human or fundamental rights

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation

In other databases

Articles about this incident or hazard

Online deepfakes: €33 million scam targets pensioners and business owners

2025-03-06
tg24.sky.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system capable of generating realistic fake videos, to impersonate famous individuals and deceive victims into fraudulent investments. This use of AI directly led to financial harm amounting to millions of euros, fulfilling the criteria for an AI Incident due to realized harm caused by the AI system's malicious use.

Thousands of savers lose €33 million to a deepfake scam

2025-03-07
Tiscali Innovazione
Why's our monitor labelling this an incident or hazard?
The use of AI-generated deepfake videos to deceive people into fraudulent investments directly caused financial harm to thousands of savers, amounting to millions of euros lost. The AI system's role in creating realistic fake content was pivotal in enabling the scam. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in the fraud.

Deepfakes: massive €33,000,000 online scam, were you affected?

2025-03-06
HTML.it
Why's our monitor labelling this an incident or hazard?
The use of deepfake technology, an AI system capable of generating realistic synthetic video and audio content, was central to the execution of this fraud. The AI-generated content directly facilitated the deception and caused significant financial harm to savers, fulfilling the criteria for an AI Incident due to realized harm from the AI system's use in the commission of a crime.

Beware of deepfakes! The new scams are now generated with AI. The massive fraud uncovered...

2025-03-05
dagospia.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos and fake news content that directly led to financial harm to thousands of individuals through a criminal fraud scheme. The AI system's use in generating realistic fake advertisements and news was pivotal in enabling the scam, causing harm to property (financial assets) and communities. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

€33 million in scams via celebrity ads on Facebook - News - Ansa.it

2025-03-05
Agenzia ANSA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake videos of celebrities for fraudulent advertisements, which directly caused financial harm to victims. The AI system's involvement in generating deceptive content that facilitated the scam meets the criteria for an AI Incident, as it directly led to harm to people (financial injury).

Georgia: criminal network dismantled after pulling off a €33 million mega-scam using deepfakes

2025-03-05
tgcom24.mediaset.it
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create fraudulent content that directly caused significant financial harm to many victims. This fits the definition of an AI Incident because the AI system's use directly led to harm to people (financial loss), which is a form of harm to communities and individuals. Therefore, this is classified as an AI Incident.

Fake TV news and fake crypto: a €33 million mega-scam. What the "skameri" con is and how to avoid it

2025-03-05
La Stampa
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-generated deepfake videos as a core component of the scam, which directly caused financial harm to victims. The harm is realized and significant (33 million euros lost). The AI system's role is pivotal in enabling the fraud through realistic fake content. This meets the criteria for an AI Incident because the AI system's use directly led to harm to people (financial injury).

€33 million scam using fake celebrity ads

2025-03-05
Tiscali Notizie
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems, specifically deepfake technology, to create fraudulent content that directly led to significant financial harm to thousands of people. The AI system's use in generating fake celebrity endorsements was pivotal in deceiving victims and causing the harm. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm to people (financial injury) through deception and fraud.

Revealed: the scammers who conned savers out of $35m using fake celebrity ads

2025-03-05
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The scammers used AI-generated deepfake videos and fictional news reports to deceive victims into fraudulent investment schemes, directly causing financial harm amounting to $35 million. The AI system's role in creating convincing fake content was pivotal in enabling the scam, leading to realized harm to property and individuals. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in the scam.

Revealed: the scammers who conned savers out of $35m using fake celebrity ads

2025-03-05
the Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake videos, which are AI-generated synthetic media, to create fake celebrity adverts that were used to defraud thousands of people out of $35 million. This is a direct use of AI systems in the commission of harm. The harm is realized and significant, including financial loss and emotional distress, with some victims suffering severe consequences. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to persons and communities through fraudulent activities.

Deepfakes, cash and crypto: how call centre scammers duped 6,000 people

2025-03-05
the Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to create fake videos of celebrities promoting fraudulent cryptocurrency investment platforms. This AI-generated content directly led to widespread financial harm to over 6,000 victims, with losses totaling tens of millions of dollars. The AI system's role was central to the scam's success, as victims were deceived by the realistic AI-generated videos. Therefore, this qualifies as an AI Incident due to the direct and large-scale harm caused by the AI system's use in the scam.

Deepfakes, cash and crypto: how call centre scammers duped 6,000 people

2025-03-05
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to create realistic fake images and videos of celebrities, which were used to lure victims. Additionally, the scammers used specially built software that simulated live trading platforms, purportedly using AI technology for crypto trading, to convince victims of fake profits. These AI systems were instrumental in deceiving victims, leading to direct financial harm (loss of money) to thousands of people. This constitutes an AI Incident because the AI system's use directly led to significant harm to individuals (financial loss and psychological distress).

The Guardian: Georgia scammers con thousands of savers out of $35M

2025-03-06
NEWS.am
Why's our monitor labelling this an incident or hazard?
The scammers used deepfake videos, which are AI-generated synthetic media, to create fake celebrity adverts promoting fraudulent investment schemes. This use of AI directly led to financial harm to thousands of victims, fulfilling the criteria for an AI Incident under harm to persons (financial injury) and harm to communities. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in enabling the scam.