AI-Generated Deepfake Video of German Chancellor Sparks Government Alarm

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The satirical art group Zentrum für Politische Schönheit used AI to create a deepfake video of Chancellor Olaf Scholz falsely announcing a ban on the AfD party. The realistic video spread online, prompting government concern over manipulation and misinformation, and leading to the formation of a task force to address AI-driven disinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to create a deepfake video that spreads false political information, which could plausibly cause harm by manipulating public opinion and amplifying misinformation. Although no direct harm has been reported, the government's warning and the wider context indicate a credible risk to communities and to democratic integrity. The event therefore fits the definition of an AI Hazard: the AI-generated content could plausibly lead to an AI Incident involving harm to communities through misinformation and manipulation.[AI generated]
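The distinction the monitor draws between an AI Incident, an AI Hazard, and Complementary Information can be sketched as a simple decision rule. This is an illustrative sketch only: the function and parameter names are hypothetical, and the AIM's actual classifier is not public.

```python
# Hypothetical sketch of the labelling rule applied in the explanations below.
# The real OECD AIM classification pipeline is not public; all names here are
# illustrative assumptions, not the monitor's actual code.
def classify_event(ai_involved: bool, harm_realized: bool, harm_plausible: bool) -> str:
    """Label an event using the definitions cited in the monitor's rationales."""
    if not ai_involved:
        return "Not AI-related"
    if harm_realized:
        # Harm to people, communities, or rights has already occurred.
        return "AI Incident"
    if harm_plausible:
        # Harm has not occurred but could plausibly follow from the AI system's use.
        return "AI Hazard"
    # Neither realized nor plausible harm: context about responses or governance.
    return "Complementary Information"

# The Scholz deepfake case: AI involved, no realized harm, harm plausible.
print(classify_event(True, False, True))  # → AI Hazard
```

The divergent labels in the article list below (some rationales conclude "Incident", others "Hazard") correspond to disagreement over the single `harm_realized` judgment, not over AI involvement.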
AI principles
Transparency & explainability
Accountability
Democracy & human autonomy
Robustness & digital security
Safety
Privacy & data governance
Respect of human rights

Industries
Media, social platforms, and marketing
Government, security, and defence
Digital security

Affected stakeholders
Government
General public

Harm types
Reputational
Public interest
Psychological

Severity
AI hazard

AI system task:
Content generation


Articles about this incident or hazard

Satirical campaign with faked Scholz video angers German government

2023-11-27
Yahoo!
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a deepfake video that spreads false political information, which could plausibly cause harm by manipulating public opinion and amplifying misinformation. Although no direct harm has been reported, the government's warning and the wider context indicate a credible risk to communities and to democratic integrity. The event therefore fits the definition of an AI Hazard: the AI-generated content could plausibly lead to an AI Incident involving harm to communities through misinformation and manipulation.

German government angered by satirical campaign

2023-11-27
tagesschau.de
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a deepfake video whose dissemination spread false political information. This manipulation of public opinion harms communities by spreading disinformation. Since that harm (manipulation and misinformation) is already occurring, the event qualifies as an AI Incident: the use of AI (deepfake generation) caused realized harm through misinformation and public confusion, not merely a potential risk.

Criticism of satire campaign: Deepfake video of Scholz enrages German government

2023-11-27
N-tv
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a deepfake video that falsely portrays a political figure, which has been publicly disseminated and criticized by government officials for its manipulative nature and potential to mislead the public. The harm is realized as the video spreads misinformation, undermining public trust and potentially influencing political discourse, which aligns with harm to communities and violations of rights. The involvement of AI in creating the video and the direct impact on public perception and political stability meet the criteria for an AI Incident.

Zentrum für politische Schönheit: German government criticizes faked video address by Olaf Scholz

2023-11-27
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a form of AI-generated content. The video is used to spread false information, which can plausibly lead to harm by manipulating public opinion and causing social disruption. However, the article does not report actual realized harm such as injury, rights violations, or operational disruption, but rather the potential for such harm through misinformation. Therefore, this qualifies as an AI Hazard because the AI-generated deepfake could plausibly lead to an AI Incident (harm to communities through misinformation), but no direct harm has yet been reported. The government's response and concern further support the classification as a hazard rather than an incident or complementary information.

Faked Scholz video causes anger in Berlin

2023-11-27
newsORF.at
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate a fake video that misrepresents a political figure and spreads false information. This use of AI has directly led to harm by manipulating public opinion and creating confusion, which fits the definition of an AI Incident under harm to communities. The harm is realized, not just potential, as the video was publicly disseminated and caused governmental concern. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Olaf Scholz: Government spokesperson considers fake chancellor video a serious matter

2023-11-27
DIE WELT
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake technology) is clearly involved, as the video uses AI-generated synthetic media to impersonate the Chancellor. The event stems from the use of AI (deepfake creation) and raises concerns about misinformation and public confusion, which could plausibly lead to harm to communities or rights if such disinformation spreads widely. However, the article focuses on the government's response and the potential risks rather than any realized harm. Therefore, this event is best classified as an AI Hazard, reflecting the plausible future harm from AI-generated deepfakes and misinformation, rather than an AI Incident or Complementary Information.

Zentrum für politische Schönheit uses Scholz fake against the AfD

2023-11-27
Frankfurter Allgemeine
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create a manipulated video that falsely attributes statements to a political figure. The video is actively circulating and causing misinformation, which harms public discourse and trust, thus meeting the criteria for harm to communities. The involvement of AI in generating the deepfake is direct and central to the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Uproar over AI fake video featuring a false Olaf Scholz

2023-11-28
Kronen Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake generation) used to create a fake video of a public figure. While the video is manipulative and could plausibly lead to harm such as misinformation and public confusion (harm to communities), the article does not confirm that such harm has materialized yet. Therefore, this situation represents a plausible risk of harm from AI misuse rather than a realized incident. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

"They stoke uncertainty and are manipulative": German government spokesperson voices anger over faked Scholz video

2023-11-27
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video that is highly realistic and manipulative, which is explicitly acknowledged by the government spokesperson. The event involves the use of AI-generated content to influence public opinion through misinformation, which can harm communities and democratic discourse. While the harm is not explicitly stated as having occurred, the event clearly illustrates a credible risk of such harm. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to significant harm through manipulation and misinformation, but no direct harm has yet been confirmed or reported in the article.

Initiative seeks to advance AfD ban with installation and chancellor deepfake - government reacts with irritation

2023-11-27
stern.de
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake generation using AI) is explicitly involved in creating a realistic but fake video of a political figure. The use of this AI-generated content is intended to influence public opinion and could plausibly lead to harm by spreading misinformation and undermining trust in public information. However, the article does not describe any realized harm such as injury, rights violations, or disruption, only the potential for such harm. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving misinformation and manipulation of public opinion.

Faked Scholz video: Satirical campaign angers German government

2023-11-27
RP Online
Why's our monitor labelling this an incident or hazard?
The event involves a manipulated video (likely AI-generated or AI-assisted deepfake) of a political figure, which is causing misinformation and public confusion. This constitutes harm to communities by spreading false information and undermining trust. Since the manipulated video is actively circulating and causing concern, this is a realized harm linked to the use of AI technology for video manipulation, qualifying it as an AI Incident.

Deceptively realistic video: German Chancellor Scholz bans the AfD

2023-11-27
Die Presse
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to create a deepfake video, which is a clear AI system application. The use of this AI-generated content has directly led to harm by spreading false political information, causing public confusion and potential destabilization of democratic discourse, which qualifies as harm to communities. The government's reaction confirms the seriousness of the impact. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Fake chancellor video - government spokesperson speaks of a serious matter

2023-11-27
Deutschlandfunk
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used to create realistic fake video and audio content. Although the video is satirical and the government acknowledges it as such, the use of such realistic AI-generated disinformation poses a credible risk of harm to communities by spreading false information. Since no direct harm is reported as having occurred yet, but the government is taking preventive actions, this fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm. It is not Complementary Information because the main focus is on the disinformation event itself, not just a response or update. It is not an AI Incident because no actual harm has been reported as realized.

Deepfake video of Olaf Scholz: German government is alarmed

2023-11-28
Berliner Zeitung
Why's our monitor labelling this an incident or hazard?
The deepfake video is generated using AI technology (deep learning for video and audio synthesis), which qualifies as an AI system. The event involves the use of AI-generated content to spread false information, which could plausibly lead to harm by misleading the public and undermining trust in government communications. Although the video is currently a satirical art piece and no direct harm is reported, the government's concern and formation of a task force indicate recognition of the plausible future harm from such AI-generated disinformation. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to communities through misinformation.

Satire for an AfD ban: Faked Scholz video angers the German government

2023-11-27
swp.de
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to create a deepfake video, which is a form of AI-generated manipulated content. The video is fake and is spreading misinformation, which can harm public trust and political stability, thus harming communities indirectly. However, the article does not report actual realized harm such as injury, rights violations, or operational disruption, but rather the potential for manipulation and misinformation that could lead to harm. Therefore, this event is best classified as an AI Hazard, as the AI-generated deepfake could plausibly lead to significant harm if widely believed or used maliciously, but no direct harm has yet been confirmed.

Satirical campaign with faked Scholz video annoys German government

2023-11-27
DER STANDARD
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a deepfake video, which is a form of AI-generated content. The video spreads false information that could influence public opinion, which is a recognized harm related to misinformation. However, the article does not report actual harm occurring yet, only government concern and the potential for harm. The government's response and the formation of a working group to address disinformation classify this as complementary information about societal and governance responses to AI-related misinformation. Since no direct or indirect harm has materialized, and the main focus is on the reaction and potential future risks, this event is best classified as Complementary Information rather than an AI Incident or AI Hazard.

It concerns an AfD ban: Anger over manipulated Scholz video

2023-11-27
Donaukurier
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic fake video (deepfake) of a political figure, which is a clear AI involvement. The event stems from the use of AI to create manipulated content that spreads false information, which could plausibly lead to harm by influencing public opinion and undermining trust. However, the article does not describe any direct or indirect realized harm yet, only the potential for harm and governmental concern. Therefore, this qualifies as an AI Hazard, as the AI-generated video could plausibly lead to an AI Incident involving misinformation and manipulation, but no actual harm has been reported so far.

Deepfake video of Chancellor Scholz sparks outrage

2023-11-27
Westfalenpost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create a realistic fake video of a political figure. The AI-generated content has been disseminated online, causing public and governmental reaction. The harm here is indirect but materialized: the video spreads false information that can undermine political stability and public trust, which qualifies as harm to communities. The article states the video is already circulating and causing concern, so harm is occurring, not just potential. Therefore, this is an AI Incident rather than a hazard or complementary information. The satirical intent does not negate the harm caused by the AI-generated misinformation.

Art installation: Deepfake Scholz announces AfD ban

2023-11-27
Zweites Deutsches Fernsehen
Why's our monitor labelling this an incident or hazard?
The use of deepfake technology clearly involves an AI system generating synthetic audio-visual content. While the deepfake could plausibly lead to harm by misleading the public or causing political confusion, the article frames it as an art installation without evidence of actual harm occurring. Therefore, it represents a plausible risk of harm rather than a realized incident. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (misinformation, political disruption), but no direct or indirect harm is reported as having occurred yet.

Manipulated Scholz video: "Such deepfakes are no joke"

2023-11-27
Bayerischer Rundfunk
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of a deepfake: AI-generated, manipulated video content. The deepfake is used to spread false information, which can mislead the public and undermine trust in political figures and institutions, constituting harm to communities through misinformation and manipulation. Since the manipulated video is actively disseminated and has prompted governmental concern about its manipulative nature, this qualifies as an AI Incident: the harm to societal trust and political discourse from misinformation is realized.

Fake video angers government: Olaf Scholz (does not) announce a ban on the AfD

2023-11-27
MOPO.de
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake technology) was used to create manipulated video content that falsely represents a public figure. This use of AI-generated misinformation can cause harm by spreading false information, undermining trust, and potentially disrupting political discourse, which constitutes harm to communities. Since the manipulated video is actively disseminated and has provoked official criticism, the harm is realized rather than merely potential. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing harm through misinformation and manipulation.

Does Scholz really want to ban the AfD? Puzzling "address" by the chancellor circulating online

2023-11-28
inFranken.de
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake generation software) was used to create a fabricated video of a political figure, which is a clear example of AI involvement. Although the video is fake and no direct harm is reported as having occurred, the use of AI to produce misleading political content poses a credible risk of harm to communities through misinformation and political manipulation. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident if the misinformation causes significant societal harm. It is not an AI Incident yet because no actual harm is reported as having occurred at this stage, nor is it merely complementary information or unrelated news.