Deepfake Video Falsely Portrays Indonesian Finance Minister, Spreads Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A deepfake video fabricated with generative AI falsely showed Indonesia's Finance Minister Sri Mulyani calling teachers a burden on the state. The viral video caused public outrage and reputational harm before officials debunked it, highlighting the real-world dangers of AI-driven misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI deepfake technology to create manipulated videos that have been used to deceive and defraud people, causing direct harm. The involvement of AI in generating these videos is clear, and the resulting harms include financial fraud and misinformation affecting public trust and individuals' rights. This meets the criteria for an AI Incident as the AI system's use directly led to violations of rights and harm to communities.[AI generated]
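The rationale above applies a three-way framework: realized harm from an AI system's use is an AI Incident, a concrete circumstance of plausible future harm is an AI Hazard, and background or detection-advice pieces are Complementary Information. A minimal sketch of that triage logic (illustrative only; `classify_report` and its parameters are hypothetical, not the monitor's actual implementation):

```python
# Illustrative sketch of the three-way triage the monitor's rationales
# describe: AI Incident vs. AI Hazard vs. Complementary Information.
# This is NOT the monitor's real code; names and inputs are assumptions.

def classify_report(ai_involved: bool, harm_realized: bool,
                    concrete_event: bool) -> str:
    """Label a report using the framework cited in the rationales.

    ai_involved    -- an AI system's use is central to the event
    harm_realized  -- harm (rights violations, fraud, reputational
                      damage) has actually occurred
    concrete_event -- the report describes a specific event, not general
                      background or detection advice
    """
    if not concrete_event:
        # Explainers and detection-tip articles support understanding
        # of AI risks without reporting a new event.
        return "Complementary Information"
    if ai_involved and harm_realized:
        return "AI Incident"
    if ai_involved:
        # A concrete circumstance where harm is plausible but not yet
        # realized.
        return "AI Hazard"
    return "Not AI-related"

# The deepfake video of the Finance Minister: AI-generated, harm realized.
print(classify_report(True, True, True))    # AI Incident
# A how-to-spot-deepfakes explainer: no new event reported.
print(classify_report(True, False, False))  # Complementary Information
```

Under this reading, the articles below that report the fabricated video itself are labelled incidents, while the explainers on deepfake types and detection are labelled complementary information.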
AI principles
Accountability, Robustness & digital security, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Government, General public

Harm types
Reputational, Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

List of Officials Who Have Fallen Victim to Deepfakes, Including One Made to 'Speak' Fluent Mandarin

2025-08-21
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create manipulated videos that have been used to deceive and defraud people, causing direct harm. The involvement of AI in generating these videos is clear, and the resulting harms include financial fraud and misinformation affecting public trust and individuals' rights. This meets the criteria for an AI Incident as the AI system's use directly led to violations of rights and harm to communities.
Viral Video of Minister Sri Mulyani Calling Teachers a Burden on the State Is a Deepfake; What Is That?

2025-08-20
kontan.co.id
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deepfake technology) to create a manipulated video that falsely attributes a harmful statement to a public official. This has led to misinformation spreading among the public, which is a harm to communities and potentially a violation of rights. The AI system's use directly caused this harm. The article confirms the video is a deepfake and a hoax, indicating the harm has occurred. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
Viral Hoax Video of Sri Mulyani Calling Teachers a Burden on the State; What Is a Deepfake?

2025-08-20
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create manipulated video content that falsely portrays a public figure making harmful statements. This has directly led to reputational harm and potential social harm through misinformation dissemination, which qualifies as harm to communities. Therefore, this is an AI Incident because the AI system's use has directly led to harm. The article does not merely discuss the technology or potential risks but reports on an actual harmful event caused by AI-generated content.
8 Tips for Detecting Deepfake Content

2025-08-20
Media Indonesia - News & Views -
Why's our monitor labelling this an incident or hazard?
The article discusses AI-generated deepfake technology and its potential to cause harm but focuses on detection tips and raising public awareness. It does not describe a concrete AI Incident or AI Hazard event but rather provides complementary information to help society understand and respond to AI-related risks. Therefore, it fits the definition of Complementary Information, as it supports understanding and mitigation without reporting a new harm or imminent risk.
Who Is the Mastermind Behind Deepfakes, and How It All Began

2025-08-20
Media Indonesia - News & Views -
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based deepfake technology to create a misleading video that caused public harm. The harm is realized as the misinformation affected the reputation of a public figure and caused public concern. The AI system's use in generating the deepfake is central to the incident. Hence, it meets the criteria for an AI Incident due to direct harm caused by the AI system's outputs.
The Dangers of Using Deepfakes: Deepfake Cases in Indonesia

2025-08-20
Media Indonesia - News & Views -
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI deepfake technology to create manipulated videos that were disseminated to commit fraud and spread false information. These actions have directly led to financial harm to victims and misinformation affecting public trust, which are harms to individuals and communities. The involvement of AI in generating these deepfakes is clear and central to the incidents. The legal actions against perpetrators further confirm the realized harm. Hence, the events meet the criteria for AI Incidents as defined by the framework.
Video of Sri Mulyani Calling Teachers' Salaries a Burden on the State Turns Out to Be a "Deepfake"; What Is That?

2025-08-20
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create a manipulated video that falsely attributes statements to a public official. This AI-generated content has been disseminated, leading to misinformation and potential reputational damage, which qualifies as harm to communities and a violation of rights. Since the harm is realized and directly linked to the AI system's use, this qualifies as an AI Incident under the framework.
What Is a Deepfake? Cited in the Finance Minister's Denial of Calling Teachers a Burden on the State

2025-08-20
detiksumbagsel
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI in creating deepfake videos and the potential harms of misinformation caused by such technology. However, it does not describe a specific new incident where harm has occurred or a new hazard event. Instead, it provides background information and context about deepfakes, their AI basis, and their societal risks. Therefore, it fits the definition of Complementary Information, as it enhances understanding of AI-related risks without reporting a new incident or hazard.
Sri Mulyani Video Shows Deepfake Traps Are Becoming Ever More Convincing; Experts Reveal How to Recognize Them

2025-08-20
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create a fabricated video that falsely portrays a public figure making offensive statements. This has directly caused harm by misleading the public, damaging the reputation of the individual, and causing social unrest. The harm is realized and not merely potential. Hence, it meets the criteria for an AI Incident due to violations of rights and harm to communities resulting from the AI system's use.
AI Scams Claim Many Victims; Know These 4 Deepfake Schemes

2025-08-22
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos and audio used by criminals to impersonate individuals and deceive victims, resulting in significant financial fraud and extortion. These are direct harms caused by the use of AI systems in malicious ways, fulfilling the criteria for an AI Incident. The harms include financial loss and violation of rights, and the AI's role is pivotal in enabling these sophisticated scams.
Types of Deepfakes You Should Know About

2025-08-21
Tempo Media
Why's our monitor labelling this an incident or hazard?
The article describes the general risks and potential harms of deepfake AI technology but does not report a specific event in which harm materialized, nor a particular hazard event. Because it discusses the potential for misuse and the societal risks of deepfakes without describing a concrete event or circumstance where harm has occurred or is imminent, it is best classified as Complementary Information, providing context and awareness about AI-related risks rather than reporting a new incident or hazard.
6 Ways to Recognize a Deepfake

2025-08-22
Tempo Media
Why's our monitor labelling this an incident or hazard?
The article describes the nature and risks of AI-generated deepfake content and how to identify it, which is informative and educational. It does not report a specific AI Incident (harm realized) or AI Hazard (plausible future harm event) but rather provides complementary information to help understand and mitigate potential harms from AI deepfakes. Therefore, it fits the category of Complementary Information as it enhances understanding of AI-related risks without describing a concrete incident or hazard.
Finance Minister Sri Mulyani Falls Victim: What Deepfakes Are and Why They Are Dangerous

2025-08-20
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated synthetic media (deepfake) that has been deployed to create and spread false information about a public official, causing reputational harm and misinformation to the community. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (misinformation and reputational damage). The article also discusses the broader risks and financial harms caused by deepfake technology, reinforcing the realized harm aspect. Hence, the classification as AI Incident is appropriate.
"Teachers Are a Burden on the State" Sri Mulyani Hoax; Mafindo: How Easily Fake Content Sows Division

2025-08-27
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to create manipulated video content that has been disseminated widely, causing social harm and public outrage. This fits the definition of an AI Incident because the AI-generated content directly led to harm to communities by spreading false information and social discord. The article explicitly mentions the use of AI-generated deepfake content and the resulting harm, meeting the criteria for an AI Incident.
MAFINDO: Finance Minister "Deepfake" Video Proves Fake Content Easily Sows Division

2025-08-27
ANTARA News Gorontalo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI creating deepfake video content) that has directly led to harm to communities by spreading false and divisive information, causing social disruption and public anger. The deepfake video is a clear example of AI-generated misinformation causing real-world harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to communities and social order.
MAFINDO Says Finance Minister "Deepfake" Video Proves Fake Content Easily Sows Division

2025-08-27
Antara News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake video generation) that has been used to create and spread false content, leading to harm to communities by inciting anger and social division. This fits the definition of an AI Incident because the AI-generated deepfake video has directly led to harm (social disruption and misinformation). The article also discusses responses and detection tools, but the primary focus is on the harm caused by the AI-generated deepfake content.
MAFINDO: Finance Minister "Deepfake" Video Proves Fake Content Easily Sows Division

2025-08-27
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a deepfake video created using AI technology that falsely portrays a public figure making inflammatory statements. This AI-generated content has been widely disseminated, causing social harm by misleading the public and fostering division. The harm to communities through misinformation and social disruption is direct and materialized. The involvement of AI in generating the deepfake video is clear, and the resulting harm fits the definition of an AI Incident under harm to communities. The article also discusses the challenges of AI misuse in spreading disinformation, reinforcing the classification as an AI Incident rather than a hazard or complementary information.
The Threat of Disintegration in the Digital Era: Fighting Back with Collective Reason

2025-08-27
SINDOnews
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to produce manipulated content that has directly led to harm to communities by spreading disinformation and causing social disruption. This fits the definition of an AI Incident because the AI system's use has directly led to harm (harm to communities) through misinformation and manipulation. The article also mentions responses to this harm but the primary focus is on the harm caused by AI-generated disinformation.
Finance Minister Sri Mulyani Deepfake Proves How Easily Hoax Content Becomes a Tool for Sowing Division

2025-08-27
Republika Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI creating deepfake video content) whose use has directly led to harm by spreading false information that damages social cohesion and public trust. This fits the definition of an AI Incident because the AI-generated deepfake video has caused harm to communities through misinformation and social disruption. The article also highlights the use of AI detection tools and societal efforts to mitigate these harms, but the primary focus is on the realized harm caused by the AI-generated deepfake content.