Deepfake Concerns Rise Amidst Elections and Social Issues

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Deepfake technology, a form of AI, is causing significant concern in Southeast Asia, particularly Indonesia, due to its potential for misinformation during elections. The Indonesian government has removed thousands of deepfake accounts and content. Meanwhile, South Korean citizens protest against deepfake pornography, urging government action to protect victims' rights and privacy.[AI generated]

Why's our monitor labelling this an incident or hazard?

Deepfake pornography is generated via AI deep-learning algorithms and has directly led to sexual and privacy violations of real individuals, including minors. The reported cases (297 incidents, arrests of teen perpetrators) constitute realized harm. Therefore, this is an AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability; Robustness & digital security; Safety; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights; Reputational; Psychological; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


South Korean Citizens Protest, Demanding the Government Crack Down on Deepfake Porn Cases

2024-09-10
CNNindonesia
Why's our monitor labelling this an incident or hazard?
While the article references ongoing harms from AI-generated deepfake porn (the background of an AI incident), its primary focus is on the public protest and the responses of the government and platforms (investigations, apologies, policy enforcement). This constitutes complementary information about societal and governance actions taken in response to previously reported deepfake abuses, rather than a report of a new incident or a forecast of new hazards.

What Is the Deepfake Pornography That Has Shaken South Korea?

2024-09-11
CNNindonesia
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is generated via AI deep-learning algorithms and has directly led to sexual and privacy violations of real individuals, including minors. The reported cases (297 incidents, arrests of teen perpetrators) constitute realized harm. Therefore, this is an AI Incident.

Increasingly Alarming: How to Avoid Becoming a Victim of Deepfake Pornography

2024-09-14
CNNindonesia
Why's our monitor labelling this an incident or hazard?
Deepfake pornography involves AI systems (deep-learning algorithms for image manipulation) whose use has directly harmed individuals’ rights and wellbeing. The article focuses on actual incidents of sexual harm and provides statistics on reported cases and arrests, meeting the criteria for an AI Incident.

Kominfo Claims to Have Taken Down Thousands of Deepfake Accounts and Content

2024-09-13
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
This piece focuses on regulatory and moderation actions taken in response to existing deepfake content, describing mitigation measures rather than a new incident or the emergence of a novel hazard. It is an update on government and platform efforts to combat AI-driven misinformation, fitting the definition of Complementary Information.

Ahead of the Regional Elections (Pilkada), the Public Is Urged to Watch Out for Deepfakes

2024-09-12
Media Indonesia
Why's our monitor labelling this an incident or hazard?
The piece is centered on advising institutions and the public about the plausible risks and continuing developments in deepfake-based cybercrime ahead of elections, rather than reporting on a specific, fully detailed incident response or remediation. It represents a credible advisory about future or ongoing AI-enabled disinformation and fraud threats, which fits the definition of an AI Hazard.

South Korea Faces a Deepfake Pornography Crisis Targeting Teenagers and School Students

2024-09-10
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article describes actual harms caused by the use of AI to generate explicit deepfake content without consent—sexual harassment of children and adults, a violation of rights—leading to reported crimes, investigations, and arrests. This meets the definition of an AI Incident, as the AI system’s use has directly led to harm.

South Korea's Deepfake Pornography Crisis: An Early Warning for Indonesia

2024-09-10
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article describes realized harm from deepfake AI—non-consensual pornographic content generated and shared via an AI-enabled system (deepfake tools) on Telegram. Victims include students, teachers, and journalists, indicating violations of personal and human rights. This is not hypothetical but an active incident of AI misuse.

Kemenkominfo Takes Down Thousands of Pieces of Deepfake Content

2024-09-13
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating deepfake content that was used to deceive and commit fraud, harming individuals and businesses. The takedown of thousands of such items indicates the harm is ongoing and significant. Because AI-generated deepfake videos are directly linked to fraudulent activities causing financial and reputational harm, the event fits the definition of an AI Incident: the AI system's use directly led to violations of rights and harm to communities and property. The presence of realized harm (fraud, deception, financial loss) confirms this classification over AI Hazard or Complementary Information.

Getting to Know Deepfake Technology: Its Benefits and Potential Negative Impacts

2024-09-13
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article describes deepfake as an AI technology capable of generating synthetic media and outlines its potential harms, including misinformation and harmful content creation. However, it does not describe a particular event where these harms have materialized or a specific incident involving AI misuse or malfunction. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It serves as informative background and context about AI risks, which aligns with Complementary Information as it enhances understanding of AI's societal implications without reporting a new incident or hazard.

AI-Generated Fake Content Threatens the Scientific World, Scientists Reveal

2024-09-10
detikjogja
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (deepfake technology) that generate manipulated content, which has directly led to harm by undermining scientific integrity and spreading misinformation. The harms include damage to the trustworthiness of scientific research and potential societal harm from misinformation on important issues. These harms fall under harm to communities and violations of fundamental rights to accurate information. The article also references concrete instances of harm, such as thousands of retracted scientific papers due to fake content. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.