Hungarian PM Orban Publishes AI-Generated Deepfake Targeting Zelensky


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Hungarian Prime Minister Viktor Orban released an AI-generated deepfake video depicting Ukrainian President Volodymyr Zelensky promoting a fake spy app linked to Hungary's opposition party. The video, intended to discredit Zelensky and the opposition, exemplifies the use of AI for political misinformation and manipulation in Hungary.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (generative AI for deepfake video creation) to produce and spread false political content that directly harms communities by undermining political processes and trust. The article describes the actual use of the AI-generated video in a political context, indicating realized harm rather than just potential harm. Therefore, it meets the criteria for an AI Incident due to violations of rights and harm to communities caused by AI-generated misinformation.[AI generated]
AI principles
Accountability; Transparency & explainability; Democracy & human autonomy; Safety

Industries
Government, security, and defence; Media, social platforms, and marketing

Affected stakeholders
Government; General public; Civil society

Harm types
Reputational; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


THEY MUST NOT SHOW THIS TO ZELENSKY! Orban demolished him with a fake video; you have to see it to believe it! - Alo.rs

2025-10-11
alo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI for deepfake video creation) to produce and spread false political content that directly harms communities by undermining political processes and trust. The article describes the actual use of the AI-generated video in a political context, indicating realized harm rather than just potential harm. Therefore, it meets the criteria for an AI Incident due to violations of rights and harm to communities caused by AI-generated misinformation.

Viktor Orban published a video in which Zelensky presents a phone with a spy app

2025-10-11
Tanjug News Agency
Why's our monitor labelling this an incident or hazard?
The video is explicitly stated to be AI-generated, involving an AI system capable of generating realistic fake video content (deepfake). The content is used to falsely accuse and manipulate public opinion, which constitutes harm to communities through misinformation and political manipulation. Since the AI-generated video is actively disseminated and used to influence political narratives, this is an AI Incident involving harm to communities through misinformation and potential violation of rights related to truthful information.

ORBAN HITS ZELENSKY HARDER THAN EVER! He published a spy video, then accused him of the worst thing

2025-10-11
Republika.rs | Srpski telegraf
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a synthetic video (deepfake) that falsely represents a political leader in a harmful context. The video is used to accuse and discredit political actors, potentially influencing public opinion and political outcomes. This constitutes harm to communities through misinformation and political destabilization, fitting the definition of an AI Incident as the AI system's use has directly led to harm in the form of misinformation and political manipulation.

Orban sends a message to Zelensky (VIDEO)

2025-10-12
Glas javnosti
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is reasonably inferred from the mention of a "generated video" featuring a political figure speaking in an unusual manner, which suggests deepfake or generative AI technology. While the video could plausibly be used to influence political opinions or elections, the article does not describe any actual harm or incident caused by this AI-generated content. Therefore, it fits the category of Complementary Information, as it provides context on AI-generated political content and related political claims without reporting a specific AI Incident or AI Hazard.

Magyar: Orbán has sunk to the level of a cheap, petty fraud | 24.hu

2025-10-11
24.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create falsified videos (deepfakes) that are used to deceive the public about political matters. This involves the use of an AI system (generative AI for video synthesis) whose outputs directly lead to harm by spreading misinformation and violating the public's right to truthful information. The harm to communities and violation of rights are realized, not just potential, making this an AI Incident rather than a hazard or complementary information.

Péter Magyar: Orbán has sunk to the level of a cheap, petty fraud

2025-10-11
hvg.hu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating manipulated video content (deepfake) used in political communication. The AI-generated video has directly led to reputational harm and misinformation, which can be classified as harm to communities and a violation of rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and political manipulation.

Orbán's AI video made Magyar blow a fuse

2025-10-11
Blikk
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated video posted on a political figure's page that falsely depicts another political figure (Volodymyr Zelensky) endorsing a political app, which is part of a propaganda campaign. The AI system's use in creating manipulated video content directly leads to misinformation and reputational harm, which falls under harm to communities and violations of rights. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated misleading content.

Magyar is furious: "a cheap, petty fraud...

2025-10-11
Ingatlanbazár Blog
Why's our monitor labelling this an incident or hazard?
The article mentions the use of AI to create a video, which qualifies as the involvement of an AI system. However, there is no indication that the AI-generated video caused any injury, rights violations, disruption, or other harms as defined in the AI Incident criteria. The main focus is on the political commentary and reactions to the video, not on harm or potential harm caused by the AI. Therefore, this event is best classified as Complementary Information, as it provides context and societal response related to AI-generated content without describing an AI Incident or AI Hazard.

Péter Magyar on the Tiszaphone video: Orbán has sunk to the level of a cheap, petty fraud | szmo.hu

2025-10-11
szeretlekmagyarorszag.hu
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of artificial intelligence to create a fake video (deepfake) that misrepresents public figures and is used by a political leader to influence public perception. The AI system's use here directly leads to harm by spreading false information and manipulating citizens, which fits the definition of an AI Incident due to violation of rights and harm to communities. Therefore, this is classified as an AI Incident.

Péter Magyar: Orbán has sunk to the level of a cheap, petty fraud

2025-10-11
telex
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fabricated video (deepfake) that misrepresents people and spreads false information. This AI-generated content was deployed in a political campaign context to defame and discredit opponents, which is a violation of rights and causes harm to communities by spreading misinformation. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident due to the direct use of AI-generated misleading content causing harm.

Péter Magyar: With his falsified videos, Orbán has sunk to the level of a cheap, petty fraud

2025-10-11
Magyar Hang
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the video is described as generated by artificial intelligence. The AI-generated deepfake video is used in a political campaign to spread falsehoods, which directly harms the community by misleading the public and violating rights to truthful information. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and political manipulation.

According to Péter Magyar, Orbán has sunk to the level of a cheap, petty fraud

2025-10-12
klubradio.hu
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a manipulated video (deepfake) that falsely depicts political figures in misleading scenarios. The video was publicly disseminated by a political leader, causing reputational damage and spreading misinformation. This constitutes an AI Incident because the AI-generated content directly led to harm in the form of misinformation and reputational harm, impacting public discourse and potentially violating rights to truthful information.