AI Deepfake of Deceased Dictator Used to Influence Indonesian Election

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Indonesian political party Golkar used AI to create a deepfake video of the deceased dictator Suharto, urging voters to support its candidates in the 2024 election. The video, widely circulated on social media, drew criticism for manipulating voters and undermining democratic integrity through AI-generated misinformation. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems used to generate deepfake videos and audio, which were deployed in political campaigns to influence voters. The harm is realized in the form of manipulation of the electorate, misinformation, and the potential undermining of democratic processes, which qualifies as harm to communities. Because the AI system's use directly led to this harm, the event qualifies as an AI Incident under the framework. [AI generated]
AI principles
Transparency & explainability
Democracy & human autonomy
Respect of human rights
Accountability
Robustness & digital security
Safety

Industries
Media, social platforms, and marketing
Government, security, and defence

Affected stakeholders
General public

Harm types
Public interest
Human or fundamental rights
Reputational

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation

Articles about this incident or hazard

Surprise comeback: Indonesian dictator Suharto 'resurrected' ahead of elections. Watch!

2024-02-12
WION
Why's our monitor labelling this an incident or hazard?
The video of Suharto 'resurrected' strongly suggests the use of AI-generated deepfake technology to simulate his appearance and speech. This involves an AI system generating synthetic content that could influence public opinion during elections. However, the article does not mention any direct harm or incidents resulting from this video, nor does it indicate any realized harm such as misinformation causing social disruption or legal violations. Therefore, this event represents a plausible risk of harm through potential misinformation or manipulation in the electoral process, qualifying it as an AI Hazard rather than an Incident.
AI 'resurrects' long dead dictator in murky new era of deepfake electioneering | CNN

2024-02-12
CNN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake videos and audio, which are deployed in political campaigns to influence voters. The harm is realized in the form of manipulation of the electorate, misinformation, and potential undermining of democratic processes, which qualifies as harm to communities. The AI system's use directly leads to this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm to communities through political manipulation and misinformation.
Indonesia Polls: AI 'Resurrects' Long Dead Dictator, Sparks Ethical Concerns | World News - Times of India

2024-02-12
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video that is being used to influence voters, which directly relates to the use of AI leading to harm in the form of potential voter manipulation and undermining democratic processes. The harm is realized as the video has been widely viewed and has prompted official warnings and public concern. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights related to electoral integrity. The ethical concerns and official responses further support the classification as an incident rather than a hazard or complementary information.
AI Used to Resurrect Dead Dictator to Sway Election

2024-02-13
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI (an AI system) to create a deepfake video that misleads voters by falsely representing a deceased political figure's endorsement. This use of AI directly leads to harm to communities by spreading misinformation and manipulating electoral outcomes, which can undermine democratic rights and processes. Therefore, it qualifies as an AI Incident due to realized harm caused by the AI system's use in political misinformation.
AI 'resurrects' long dead leader in murky new era of deepfake electioneering

2024-02-12
Saudi Gazette
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems to generate deepfake videos and audio that have been widely distributed and used for political campaigning, directly influencing voters and public opinion. The harm is realized in the form of misinformation, manipulation of voters, and potential undermining of democratic processes, which are harms to communities. The AI system's use in creating and spreading these deepfakes is central to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.
AI Deepfake Brings Back Indonesia's Dead Dictator for Upcoming Elections

2024-02-12
Tech Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video used for political propaganda, which has already been disseminated widely and influenced public discourse. The harm is realized as the deepfake manipulates voters and potentially distorts the democratic election process, which is a harm to communities and a violation of rights. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Deepfake electioneering sparks AI 'return' of controversial dictator

2024-02-14
ReadWrite
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video used in political communication to influence an election. The use of AI-generated deepfakes to manipulate voters is a direct cause of harm to communities by distorting democratic processes and spreading misinformation. The harm is realized as the video has been widely viewed and is part of active electioneering, meeting the criteria for an AI Incident under violations of rights and harm to communities.
AI Has Been Used To Resurrect A Dead Politician To Sway Election Results In Indonesia - Wonderful Engineering

2024-02-14
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of an AI system to generate a deepfake video of a deceased political figure to sway election results. This manipulation of political content can mislead voters and distort democratic processes, which is a clear violation of rights and harm to communities. The harm is occurring as the video is already viral and influencing public opinion. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through misinformation and political manipulation.
AI 'resurrects' long dead dictator in murky new era of deepfake electioneering - KION546

2024-02-12
KION546
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems to create deepfake videos of a deceased political figure to influence an election. This use of AI has directly led to harm by manipulating voters and spreading misleading political propaganda, which affects the integrity of the electoral process and can harm societal trust and democratic rights. The harm is realized as the videos have already been widely viewed and criticized, indicating actual impact rather than just potential risk. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated misinformation in a political context.
Indonesia CSOs fight "deep fake" elections

2024-02-15
thaipbsworld.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deep fake content being used in political campaigns to manipulate voters and spread misinformation, which constitutes harm to communities and a violation of democratic rights. The harm is realized as these clips have gone viral and influenced public perception during an ongoing election. The involvement of AI systems in generating these deep fakes is clear, and the resulting misinformation and manipulation meet the criteria for an AI Incident under the framework, as the AI use has directly led to harm. The article also describes responses and frameworks to mitigate these harms, but the primary focus is on the realized harm caused by AI deep fakes in elections.