White House Uses AI-Generated Deepfake of Ukrainian Actress in Political Ad

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The White House published an Instagram video promoting a tax bill that featured an AI-generated likeness of Ukrainian actress Antonina Khizhnyak, used without her consent. The deepfake, likely created from publicly available images, raises concerns about the unauthorized use of personal likenesses in political communication and potential violations of personal and intellectual property rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to create a video featuring a digital representation of a real person without permission, which can be considered a violation of personal rights and possibly intellectual property rights. The lack of disclosure of AI use and the unauthorized use of the actress's likeness constitute a rights-related harm. This therefore qualifies as an AI Incident: the harm of unauthorized use of a personal likeness through AI-generated content was realized.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Other, General public

Harm types
Human or fundamental rights

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard

"Motria, is that you?" The White House used a video featuring the star of "Spiymaty Kaidasha" to present Trump's bill

2025-06-26
OBOZREVATEL
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic video of a real person without consent, implicating AI use in a potential violation of personal rights (a form of human rights). Although no direct harm is reported to have occurred, the unauthorized use of AI-generated likenesses in political communication is a recognized risk to individual rights. Since the article focuses on the use of AI-generated content and the associated risks rather than on a realized or ongoing harm, it fits best as Complementary Information about AI-related ethical and legal concerns.
The White House accidentally used the face of a Ukrainian actress

2025-06-26
Украинская сеть новостей
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a video featuring a digital representation of a real person without permission, which can be considered a violation of personal rights and possibly intellectual property rights. The lack of disclosure of AI use and the unauthorized use of the actress's likeness constitute a rights-related harm. This therefore qualifies as an AI Incident: the harm of unauthorized use of a personal likeness through AI-generated content was realized.
A digital double of actress Antonina Khizhnyak appeared in a White House advertisement

2025-06-26
ZN.UA
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI creating a deepfake video) was used to produce a realistic digital double of a real person without consent, which constitutes a violation of personal rights and potentially intellectual property rights. The unauthorized use of the actress's likeness in political communication can be considered a violation of rights under applicable law, fulfilling the criteria for an AI Incident. The harm is realized as the deepfake was publicly disseminated and the actress's image was used without permission, impacting her rights and potentially misleading the public. Therefore, this event qualifies as an AI Incident.
The White House published a video featuring Ukrainian actress Antonina Khizhnyak

2025-06-26
ZAXID.NET
Why's our monitor labelling this an incident or hazard?
The video uses an AI model to create a digital copy of the actress, likely trained on publicly available images, which clearly involves an AI system (generative AI / deepfake technology). The use is without the actress's consent and in a political context, which could constitute a violation of personal rights and possibly intellectual property rights. However, since no actual harm or legal action has been reported, and the article mainly discusses the potential implications and calls for expert attention, this situation represents a plausible risk rather than a realized incident. It therefore fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
The White House "showed" a famous Ukrainian actress in its ad, but there is one nuance (video)

2025-06-26
ФОКУС
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate or manipulate video content featuring a likeness of a real person without explicit consent, which can be considered a violation of rights (intellectual property or personal rights). However, since no direct harm or legal consequences have been reported, and the actress has not publicly responded, the event currently represents a plausible risk of harm rather than a realized harm. Therefore, it fits the definition of an AI Hazard, as the AI-generated content could plausibly lead to harm such as reputational damage or rights violations if further consequences arise.
The White House released a video featuring Antonina Khizhnyak: the Ukrainian actress's reaction

2025-06-27
Afisha
Why's our monitor labelling this an incident or hazard?
Although the article mentions speculation about AI involvement in creating the video, the actress's statement clarifies that the video was sourced from stock footage, not generated or manipulated by AI. There is no indication of harm or potential harm caused by AI, nor any misuse or malfunction. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides context and clarifies misinformation about AI use in media content.
White House: Antonina Khizhnyak reacted to the video featuring her face

2025-06-27
Украинская сеть новостей
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating the video using the actress's face, which meets the definition of AI system use. However, the event does not describe any actual harm or violation resulting from this use, only public discussion and a humorous reaction. There is no evidence of injury, rights violation, or other harms as defined. The lack of AI usage labeling is noted but does not itself constitute an incident or hazard without further harm. This is therefore best classified as Complementary Information, providing context on the societal response to AI-generated content.
Motria from "Spiymaty Kaidasha" became the star of a White House video: Tonia Khizhnyak showed how she actually ended up in the footage

2025-06-27
News.Hochu.ua
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a video with a synthetic likeness of a real person without consent, which is a misuse of AI technology leading to a violation of rights. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of obligations intended to protect fundamental rights (image and personality rights). The harm is realized as the actress's image was used without permission and without disclosure, which is a clear violation of rights. Therefore, this is classified as an AI Incident.