AI-Generated Deepfakes Fuel Scams, Extortion, and Misinformation in Indonesia


The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

AI-powered deepfake technology is increasingly used in Indonesia for financial fraud, extortion via pornographic content, political manipulation, and spreading misinformation. Demand for deepfake services on darknet forums is high, leading to significant harm to individuals’ finances, privacy, and reputations, and threatening democratic processes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (deepfake generation technology) being used maliciously to create harmful content that directly causes harm to individuals' privacy, reputation, and finances. The article reports that these harms are occurring, with active demand and supply of such services on the darknet. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harms including financial loss and violations of privacy and rights. The involvement is through malicious use of AI-generated deepfakes, causing realized harm.[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Robustness & digital security
Safety
Transparency & explainability
Democracy & human autonomy
Human wellbeing

Industries
Media, social platforms, and marketing
Digital security
Financial and insurance services
Government, security, and defence

Affected stakeholders
General public

Harm types
Economic/Property
Reputational
Psychological
Human or fundamental rights
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Motif Balas Dendam dan Cuan, Bikin Industri Deepfake Marak di Darknet (Revenge and profit motives drive a thriving deepfake industry on the darknet)

2023-05-14
suara.com

Satu Menit Video Deepfake Bisa Dijual Rp 297 Juta di Darknet (One minute of deepfake video can sell for Rp 297 million on the darknet)

2023-05-15
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (deepfake technology) being used maliciously to create manipulated videos that have directly caused harm, including financial fraud and emotional distress. The harms are realized and significant, involving violations of privacy, financial losses, and potential political manipulation. The involvement of AI in the development and use of these deepfakes is clear and central to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

Jasa Pembuatan Video Deepfake Terbongkar, Tarif Mulai Rp4,4 Juta (Deepfake video-making services exposed, with rates starting at Rp 4.4 million)

2023-05-12
CNN Indonesia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake video generation) and their use has directly led to harms such as financial fraud, emotional harm from revenge porn, and potential political manipulation. The presence of realized harms (e.g., crypto scams, extortion via pornographic deepfakes) meets the criteria for an AI Incident. The article does not merely warn of potential harm but reports ongoing exploitation causing actual damage, thus it is not an AI Hazard or Complementary Information.

Kaspersky Menjelajahi Industri Deepfake di Darknet, Hasilnya Mengejutkan (Kaspersky explores the deepfake industry on the darknet, with surprising results)

2023-05-14
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (deepfake generation technology) being used maliciously on darknet platforms to produce manipulated videos and images for harmful purposes, including crypto asset fraud and security breaches. These activities have directly led to realized harms such as financial scams and potential political manipulation. The presence of AI systems is clear, their use is malicious, and harm is occurring. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Makin Ngeri, Permintaan Jasa Deepfake Meningkat (Increasingly alarming: demand for deepfake services is rising)

2023-05-12
beritasatu.com
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that create realistic manipulated content. The article details how these AI-generated deepfakes are actively used in scams causing financial losses and in creating pornographic content for extortion, which constitutes direct harm to individuals and communities. Therefore, this event qualifies as an AI Incident due to realized harms caused by the use of AI systems.

Ancaman Teknologi Deepfake: Penyebaran Hoaks terhadap Masyarakat (The threat of deepfake technology: the spread of hoaxes targeting the public)

2023-05-12
retizen.id
Why's our monitor labelling this an incident or hazard?
The article clearly identifies deepfake technology as an AI system and describes its use to spread false information and hoaxes that harm communities and democratic processes. These harms are occurring or ongoing: the article cites real societal impacts and the need for mitigation. This qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to communities and to violations of rights through misinformation and manipulation. Because the article discusses actual harms and responses rather than merely warning of potential harm, it is not an AI Hazard or Complementary Information; nor is it unrelated, since it centres on AI deepfake technology and its societal impacts.