AI-Generated Deepfakes Spread Misinformation in Politics and Conflict Zones

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos and images have been used to spread misinformation about political leaders, including Japan's Prime Minister and global figures, and in conflict zones like Gaza and Ukraine. These incidents have caused public confusion and reputational harm, prompting government discussions on countermeasures and detection technologies.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses AI-generated deepfake videos and images that have been widely disseminated and have caused misinformation, confusion, and hate speech during an active conflict. The harms include misleading the public, inciting hatred, and exacerbating social tensions, which constitute harm to communities. The use of AI systems to generate and spread this fake media is central to the incident. The harm is realized and ongoing, not merely potential, so this is classified as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Transparency & explainability; Accountability; Robustness & digital security; Democracy & human autonomy; Safety; Respect of human rights

Industries
Government, security, and defence; Media, social platforms, and marketing; Digital security

Affected stakeholders
Government; General public

Harm types
Reputational; Public interest; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Fake videos proliferate as the situation between Israel and Hamas grows more complex

2023-11-12
マイナビニュース
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake videos and images that have been widely disseminated and have caused misinformation, confusion, and hate speech during an active conflict. The harms include misleading the public, inciting hatred, and exacerbating social tensions, which constitute harm to communities. The use of AI systems to generate and spread this fake media is central to the incident. The harm is realized and ongoing, not merely potential, so this is classified as an AI Incident rather than a hazard or complementary information.

Generative AI fake videos of Abe and Suga as well... man who learned the technique from the creator of the "fake Prime Minister Kishida video" made and posted them

2023-11-10
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of generative AI to create fake videos of political figures, which are then spread on social media. The harm includes misinformation, reputational damage, and potential social disruption, which fall under harm to communities and violations of rights. The AI system's role is pivotal as it enables the creation of realistic fake content that would be difficult to produce otherwise. Therefore, this qualifies as an AI Incident.

Fake videos and images made with generative AI misused in the Palestinian conflict and the war in Ukraine

2023-11-10
産経ニュース
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI) used to produce fake multimedia content that has been actively spread in conflict zones and globally. This has directly led to harm to communities by spreading misinformation, potentially influencing public opinion and escalating tensions. The AI's role is pivotal in generating realistic fake content that misleads viewers. Therefore, this qualifies as an AI Incident due to realized harm from AI-generated disinformation affecting communities and social stability.

Fake videos: Chief Cabinet Secretary says "relevant ministries will coordinate" on countermeasures over posts concerning the Prime Minister

2023-11-13
神戸新聞
Why's our monitor labelling this an incident or hazard?
The article centers on governmental responses and planned countermeasures to AI-generated deepfake misinformation, which is a recognized risk. Since no specific harm has been reported as having occurred, and the focus is on coordination and future prevention, this qualifies as Complementary Information. It provides context and updates on societal and governance responses to AI-related risks without describing a concrete AI Incident or an immediate AI Hazard.

Fake video of the Prime Minister created in one hour: man in his 20s used AI and spread it on social media

2023-11-12
沖縄タイムス+プラス
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to create a fake video and audio of a public figure, which was then disseminated on social media. This directly leads to harm to communities by spreading misinformation and potentially damaging reputations, fulfilling the criteria for an AI Incident. The AI system's use in generating and spreading false content is central to the harm caused.

"Ministries to coordinate" on fake-video countermeasures after the Prime Minister is targeted

2023-11-13
琉球新報
Why's our monitor labelling this an incident or hazard?
The article mentions the use of AI-generated fake videos (deepfakes) targeting the Prime Minister and other politicians, which can harm democratic processes and public trust. However, the article does not report that harm has already occurred or that the AI system's use has directly caused harm yet; rather, it discusses plans and intentions to counteract such threats. Therefore, this is a plausible future risk scenario involving AI-generated misinformation, qualifying as an AI Hazard. It is not an AI Incident because no realized harm or incident is described, nor is it Complementary Information since it is not an update on a past incident or a governance response already implemented.

Fake videos: Chief Cabinet Secretary says "relevant ministries will coordinate" on countermeasures over posts concerning the Prime Minister

2023-11-13
福島民友新聞社
Why's our monitor labelling this an incident or hazard?
The article discusses the potential threat posed by AI-generated deepfake videos and the government's intention to coordinate responses and develop detection technologies. However, it does not describe a specific AI incident where harm has already occurred, nor does it report a near-miss or imminent risk event. Instead, it focuses on planned or ongoing efforts to address the issue, which fits the definition of Complementary Information as it provides context and governance response to AI-related risks without detailing a concrete incident or hazard.