Deepfake Video Targets Ukrainian First Lady with False Claims


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A deepfake video falsely accusing Ukrainian First Lady Olena Zelenska of misusing US aid to buy a luxury car was circulated on social media. The video, created using AI technology, was quickly debunked by cybersecurity experts. The incident was linked to a former US police officer residing in Moscow.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves malicious use of deepfake AI technology to fabricate a video in support of conspiratorial claims. The AI system directly produced harmful misinformation, causing reputational and informational harm; this fits the definition of an AI Incident.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Transparency & explainability; Respect of human rights; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
Women; Government; General public

Harm types
Reputational; Public interest; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation

In other databases


Articles about this incident or hazard


Conspiracy against Ukraine's First Lady: deepfake video used

2024-07-03
NTV
Why's our monitor labelling this an incident or hazard?
The event involves malicious use of deepfake AI technology to fabricate a video in support of conspiratorial claims. The AI system directly produced harmful misinformation, causing reputational and informational harm; this fits the definition of an AI Incident.

The Bugatti claim levelled at Ukraine's First Lady has stirred public debate

2024-07-03
Haber 7
Why's our monitor labelling this an incident or hazard?
Deepfake technology (an AI system) was used to generate and spread false videos and invoices, resulting in defamation and the spread of misinformation. The AI’s malicious use directly produced harm to Zelenska’s reputation and public trust, meeting the criteria for an AI Incident.

Conspiracy against Ukraine's First Lady! Bugatti claim causes a stir

2024-07-03
Haber7.com
Why's our monitor labelling this an incident or hazard?
The event involves the malicious use of AI (deepfake technology) to create and disseminate false videos and supporting documents. This misuse directly led to reputational and informational harm through widespread disinformation about a public figure, meeting the criteria for an AI Incident.

Conspiracy against Ukraine's First Lady Olena

2024-07-03
En Son Haber
Why's our monitor labelling this an incident or hazard?
The event describes an actual misuse of generative AI (deepfake technology) to produce and disseminate false content targeting a political figure. This disinformation has been shared widely, causing real reputational and societal harm, and is part of an ongoing propaganda campaign. Thus, it meets the criteria for an AI Incident.

Deepfake conspiracy against Ukraine's First Lady

2024-07-04
aksam.com.tr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology (an AI system) to create fake videos and documents falsely accusing the First Lady of Ukraine of corruption. This use of AI directly led to reputational harm and misinformation, which falls under harm to communities and violations of rights. The harm is realized as the disinformation campaign has already spread and caused disruption. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Deepfake conspiracy in the First Lady's name!

2024-07-03
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system, to produce and spread false videos that misrepresent the First Lady's actions. This use of AI-generated content has directly caused harm by spreading misinformation and manipulating public perception, which fits the definition of an AI Incident involving harm to communities and violation of rights. The involvement of AI in the creation and dissemination of harmful content is clear and the harm is realized, not just potential.

How Disinformation From a Russian AI Spam Farm Ended up on Top of Google Search Results

2024-07-09
Wired
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI) to create and manipulate content for a coordinated propaganda network. This use has directly led to the spread of false narratives, undermining public trust and constituting harm to communities (misinformation). Therefore, it meets the criteria for an AI Incident.

How disinformation from a Russian AI spam farm ended up on top of Google search results

2024-07-10
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event describes realized harm from an AI system’s use—generative AI created false content and automated bots amplified it—leading to widespread disinformation and erosion of trust. This meets the definition of an AI Incident (harm to communities through false narratives).

Deepfake targets Ukraine's first lady Olena Zelenska with false claim she bought Bugatti

2024-07-09
CBS News
Why's our monitor labelling this an incident or hazard?
An AI system was used to create and disseminate a deepfake video that has already caused reputational and political harm by eroding support for Ukraine’s leaders. This is a realized harm—an AI Incident—rather than a potential hazard or merely contextual update.

How Russian AI spam farms brought fake news to the top of Google search results - ExBulletin

2024-07-10
ExBulletin
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI was used to create and manipulate content for thousands of fake articles produced by a network linked to Russian disinformation efforts. This AI-generated content directly drove the widespread dissemination of false narratives about a public figure, harming communities by spreading misinformation and undermining trust in information sources. Because the AI system was central to generating and amplifying the fake news, and the harm is realized, this qualifies as an AI Incident.