AI-Generated Deepfakes Used in Fraudulent Fundraising Scams in Russia

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Russia, scammers used AI-generated deepfake videos and voice recordings of celebrities and military figures to solicit fraudulent donations ahead of Defender of the Fatherland Day. These AI-enabled schemes deceived individuals into giving money under false pretenses, resulting in financial harm and psychological manipulation.[AI generated]

Why is our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI (neural networks) to generate deepfake videos and voices for scams. This constitutes the use of an AI system in a harmful way, causing direct harm to people through financial fraud. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (financial loss and deception).[AI generated]
AI principles
Accountability
Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Economic/Property
Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Russians warned about deepfakes ahead of Defender of the Fatherland Day

2026-02-21
Lenta.ru
Why is our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (neural networks) to generate deepfake videos and voices for scams. This constitutes the use of an AI system in a harmful way, causing direct harm to people through financial fraud. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (financial loss and deception).
Scammers are collecting "donations" for February 23 using deepfakes

2026-02-21
RIA Novosti
Why is our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (neural networks generating deepfake video and audio) being used maliciously to create fraudulent content. The harm is realized: people are deceived into donating under false pretenses and suffer financial loss. Therefore, this is an AI Incident because the AI system's use has directly led to harm (fraud and financial loss) to individuals and communities.
Russians told about a new AI-enabled fraud scheme ahead of February 23 - News on Vesti.ru

2026-02-21
vesti.ru
Why is our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI and deepfake technology to create fake video and voice messages that are used to deceive people into giving money. This constitutes direct harm to individuals (financial harm) caused by the use of an AI system. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (fraud and financial loss).
Russia warned about deepfakes ahead of Defender of the Fatherland Day

2026-02-21
rossaprimavera.ru
Why is our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos and voices to impersonate known individuals for scams, directly harming victims through fraud and financial loss. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial harm and psychological manipulation).
RIA Novosti: scammers use AI-generated recordings of celebrities ahead of February 23

2026-02-21
Rambler/Finance
Why is our monitor labelling this an incident or hazard?
The event involves the malicious use of AI systems (neural networks generating deepfake voice and video) to create fraudulent appeals. This use has directly led to harm by enabling scams that deceive people into giving money under false pretenses, which constitutes harm to individuals and communities. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content used in fraud.
Scammers are collecting "donations" for February 23 using deepfakes

2026-02-21
Mail.Ru News
Why is our monitor labelling this an incident or hazard?
The use of AI-generated deepfakes to impersonate trusted figures in fraudulent fundraising directly harms individuals through financial scams, and the AI system's outputs are pivotal in enabling the deception. Therefore, this qualifies as an AI Incident due to realized harm caused by the malicious use of AI systems.