Deepfake AI Video of President Jokowi Causes Public Misinformation in Indonesia


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A deepfake video created with AI technology falsely depicted President Joko Widodo delivering a speech in Mandarin, misleading the public and spreading misinformation. Indonesian officials confirmed the video was manipulated and warned about the broader risks of deepfake AI in spreading disinformation and undermining trust in information sources. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article centers on the threat posed by generative AI deepfake technology to the integrity of information, which can plausibly lead to harm to communities through misinformation and social disruption. Although no concrete harm is reported as having occurred yet, the credible risk of such harm is emphasized, fitting the definition of an AI Hazard. The mention of upcoming ethical guidelines is complementary but does not override the primary focus on the plausible future harm from deepfakes. Therefore, the event is best classified as an AI Hazard. [AI generated]
AI principles
Accountability, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy, Respect of human rights

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
General public, Government

Harm types
Reputational, Public interest

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard


Kecanggihan Deepfake Jadi Ancaman Serius Arus Informasi ("Deepfake Sophistication Poses a Serious Threat to the Flow of Information")

2023-12-07
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article centers on the threat posed by generative AI deepfake technology to the integrity of information, which can plausibly lead to harm to communities through misinformation and social disruption. Although no concrete harm is reported as having occurred yet, the credible risk of such harm is emphasized, fitting the definition of an AI Hazard. The mention of upcoming ethical guidelines is complementary but does not override the primary focus on the plausible future harm from deepfakes. Therefore, the event is best classified as an AI Hazard.

Kominfo Toleransi Deepfake untuk Konten Hiburan: Tidak Ada Niat Buruk ("Kominfo Tolerates Deepfakes for Entertainment Content: No Malicious Intent")

2023-12-07
CNN Indonesia
Why's our monitor labelling this an incident or hazard?
The article centers on the government's approach to AI deepfake content, particularly entertainment deepfakes without malicious intent, and the potential for blocking harmful content. There is no indication that any AI system's use has directly or indirectly caused harm as defined (injury, rights violations, disruption, or harm to communities). Instead, it provides complementary information about governance and societal responses to AI deepfake content. Therefore, it fits the category of Complementary Information rather than an Incident or Hazard.

Kominfo Ungkap SE Pedoman AI Untuk Mencegah Penyalahgunaan ("Kominfo Unveils AI Guidelines Circular to Prevent Misuse")

2023-12-09
CNN Indonesia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI and deepfake technology) and addresses concerns about their misuse and potential harms such as misinformation. However, the article focuses on the preparation of guidelines and ethical frameworks to prevent such misuse, which is a governance and policy response. There is no indication that an AI incident or harm has occurred yet, nor that a specific AI hazard event has materialized. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI-related risks.

Hati-hati Bahaya 'Deepfake', Si Penyebar Kekacauan Informasi ("Beware the Danger of 'Deepfakes', the Spreader of Information Chaos")

2023-12-05
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generative AI models) to create manipulated content that can mislead the public and disrupt social and political order. Although the article does not report a specific incident of harm occurring, it highlights the serious potential for such harm, including misinformation and manipulation of public perception. Therefore, this qualifies as an AI Hazard because the development and use of deepfake AI systems could plausibly lead to significant harm to communities by spreading false information and undermining trust in information sources.

Kominfo Ingatkan Bahaya Deepfake AI, Sang Pengacau Informasi ("Kominfo Warns of the Dangers of Deepfake AI, the Information Disruptor")

2023-12-06
jatim.viva.co.id
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI and deepfake technology) and their potential to cause harm by spreading misinformation and creating confusion. However, it only warns about plausible future harms without describing any realized harm or specific incident. Therefore, this qualifies as an AI Hazard because it plausibly could lead to harm (information disorder) but no actual incident is reported.

Video Pidato Mandarin Jokowi Telan Korban, Kominfo Bongkar ("Jokowi's Mandarin Speech Video Claims Victims, Kominfo Exposes It")

2023-12-08
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as a deepfake created using AI technology, which has misled the public and caused misinformation, a form of harm to communities. The AI system's use (deepfake generation) directly led to this harm. The article also includes official responses and warnings, but the main event is the AI Incident of the deepfake video causing misinformation. Hence, the classification is AI Incident.