AI-Driven Voice Cloning and Deepfake Scams Cause Major Financial Losses in Indonesia


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Indonesia, financial fraud using AI technologies such as voice cloning and deepfake videos has surged, enabling scammers to convincingly impersonate victims' acquaintances. Authorities report over 343,000 scam cases and losses totaling Rp 7.8 trillion, with only a small fraction of stolen funds recovered.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI in fraudulent activities causing realized financial harm to victims, with large sums lost and many reports received. This fits the definition of an AI Incident, as the AI systems' use has directly led to harm (financial injury to persons).[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Financial and insurance services, Digital security

Affected stakeholders
General public

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


OJK: Penipuan di Sektor Keuangan Pakai AI Makin Marak (OJK: Fraud Using AI in the Financial Sector Is Increasingly Rampant)

2025-11-16
Palopo Pos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in fraudulent activities causing realized financial harm to victims, with large sums lost and many reports received. This fits the definition of an AI Incident, as the AI systems' use has directly led to harm (financial injury to persons).

Begini 3 Cara Cegah Penipuan Pakai AI (Here Are 3 Ways to Prevent AI-Enabled Fraud)

2025-11-15
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (voice cloning and deepfake generation) and their potential misuse for fraudulent purposes. No actual harm or incident is described; rather, the article warns about plausible risks of harm from AI misuse, so it fits the definition of an AI Hazard. It is not Complementary Information because it does not provide updates or responses to a past incident, nor is it unrelated, as it directly concerns AI misuse risks.

Waspada! Penipuan Pakai Suara dan Wajah Palsu dari AI (Beware! Fraud Using AI-Faked Voices and Faces)

2025-11-16
detik Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (voice cloning and deepfake generation) that could plausibly lead to harm (financial fraud and deception). No actual harm or incident is reported as having occurred yet, but the risk is credible, so this warning about potential misuse qualifies as an AI Hazard. The article's main focus is the plausible future harm from AI misuse rather than a realized incident or a response to a past incident.

Ngeri! Modus Penipuan Makin Banyak, Bisa Ubah Suara & Wajah Pakai AI (Frightening! Fraud Schemes Are Multiplying, and AI Can Alter Voices & Faces)

2025-11-17
detik Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (voice cloning and deepfake generation) being used to perpetrate fraud, causing direct harm to victims by deception and financial loss. This meets the definition of an AI Incident because the AI system's use has directly led to harm to persons (victims of scams). The harm is realized, not just potential, and the AI system's role is pivotal in enabling the deception. Hence, the classification is AI Incident.

Ini Modus Penipuan AI yang Lagi Ganas, Awas Rekening Ludes! (These AI Fraud Schemes Are Running Rampant; Watch Out for Drained Accounts!)

2025-11-16
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice cloning and deepfake technologies) in active fraudulent schemes that have already caused harm to victims by stealing money and personal information. The harm is direct and realized, fitting the definition of an AI Incident. The article warns about ongoing scams using AI-generated content, indicating that the AI system's use has directly led to harm (financial loss and deception). Therefore, this is classified as an AI Incident.

OJK Ingatkan Penipuan Keuangan Pakai AI Makin Marak, Deepfake hingga Suara Tiruan (OJK Warns Financial Fraud Using AI Is Increasingly Rampant, from Deepfakes to Cloned Voices)

2025-11-15
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice cloning and deepfake generation) in the commission of financial fraud, which has directly led to significant harm to individuals and communities in the form of financial losses. The AI systems are used maliciously to create convincing fake identities, facilitating scams that have caused real damage. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to harm (financial loss) to people.

OJK Ungkap 2 Modus Penipuan Pakai AI, Kerugian Capai Rp 7,8 Triliun (OJK Reveals 2 AI-Enabled Fraud Schemes, Losses Reach Rp 7.8 Trillion)

2025-11-17
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake audio and video to commit fraud, causing direct financial harm to victims. The harm is realized and significant, with documented losses and numerous reports. The AI's role is pivotal in enabling the impersonation and deception. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to people and communities.

Awas! Penipuan AI Makin Canggih, Suara dan Wajah Bisa Dipalsukan (Watch Out! AI Fraud Is Getting More Sophisticated; Voices and Faces Can Be Faked)

2025-11-15
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (voice cloning and deepfake technologies) being used maliciously to impersonate people and deceive victims, resulting in realized harm through fraud. This fits the definition of an AI Incident because the AI system's use directly leads to harm (financial and psychological) to people. The harm is not hypothetical but ongoing and actual, as victims are being manipulated and defrauded. Therefore, the event qualifies as an AI Incident.

Satgas PASTI Ingatkan Modus Penipuan Berbasis AI, Masyarakat Diminta Waspada (Satgas PASTI Warns of AI-Based Fraud Schemes, Urges Public Vigilance)

2025-11-17
kontan.co.id
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as being used for voice cloning and deepfake generation. The warning concerns the potential misuse of these systems to commit fraud, which could plausibly lead to harm such as financial loss and violations of privacy. Since the article focuses on the risk and advisories to the public rather than describing realized harm, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Waspada, Penipuan Online Modus Baru Makin Canggih, Usai Suara, Kini Muncul Wajah Palsu? (Beware: New Online Fraud Schemes Grow More Sophisticated; After Fake Voices, Now Fake Faces?)

2025-11-18
siap.viva.co.id
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technologies (voice cloning and deepfake) being used by criminals to impersonate others and commit fraud, which directly harms victims by causing financial and personal information loss. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (fraud victims).

OJK Kaltimtara Ingatkan Penipuan Berbasis AI, Modus Deepfake dan Voice Cloning Mengkhawatirkan (OJK Kaltimtara Warns of AI-Based Fraud; Deepfake and Voice-Cloning Schemes Are Alarming)

2025-12-01
Tribun Kaltim
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice cloning and deepfake) in active fraudulent schemes that have already caused harm to individuals (financial losses). The AI systems' use directly leads to violations of rights and harm to communities through deception and financial fraud. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm. The article also mentions ongoing enforcement actions against illegal financial activities, but the primary focus is on the AI-enabled fraud causing harm.

OJK Jabar Peringatkan Maraknya Penipuan Berbasis AI, Banyak Korban Terjerat (OJK Jabar Warns of Rampant AI-Based Fraud, with Many Victims Ensnared)

2025-12-05
Jawa Pos National Network
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice cloning and deepfake technologies) in fraudulent activities that have directly led to harm to people (financial and personal data losses). The article describes realized harm caused by AI-enabled impersonation scams, fitting the definition of an AI Incident due to violations of rights and harm to individuals. Therefore, this is classified as an AI Incident.

OJK Jabar Warning Soal Penipuan Berbasis AI, dari Deepfake hingga Voice Cloning (OJK Jabar Issues a Warning on AI-Based Fraud, from Deepfakes to Voice Cloning)

2025-12-05
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake and voice cloning) in fraudulent activities that have directly led to financial harm to people and communities. The AI systems' use in impersonation and scam operations constitutes a violation of rights and causes harm to communities. Since the harm is occurring and the AI systems are central to the fraudulent modus operandi, this qualifies as an AI Incident rather than a hazard or complementary information.

Satgas PASTI Daerah Jawa Barat Imbau Masyarakat Waspadai Modus Penipuan Menggunakan Artificial Intelligence (West Java's Satgas PASTI Urges the Public to Beware of Fraud Schemes Using Artificial Intelligence)

2025-12-04
jabarekspres.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as being used for voice cloning and deepfake video generation to commit fraud, which has caused actual harm to victims through financial loss and deception. The task force's warnings and blocking of illegal entities indicate that these AI-enabled scams are ongoing and have materialized harm. Therefore, this is an AI Incident due to the direct involvement of AI in causing violations of rights and harm to communities through fraudulent activities.

Satgas PASTI Jabar: Waspadai Penipuan Gunakan Modus AI (Satgas PASTI Jabar: Beware of Fraud Using AI Schemes)

2025-12-04
Suara Merdeka Jakarta
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice cloning and deepfake generation) in active fraud schemes that have caused harm to people by tricking them into financial losses. The article describes actual incidents of harm resulting from AI-enabled impersonation and fraud, as well as enforcement actions against illegal entities exploiting these technologies. Therefore, this qualifies as an AI Incident because the AI systems' use has directly led to harm to individuals (financial harm and deception).