AI Voice Cloning Used in Silent Call Fraud Scheme in Indonesia

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Authorities in Tangerang, Indonesia, are warning the public about a new scam in which fraudsters record victims' voices during silent phone calls and use AI to clone them. The cloned voices are then used to deceive victims' acquaintances, enabling identity theft and financial fraud. Residents are urged to be vigilant against such AI-driven scams.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI technology to clone voices from recorded silent phone calls, which are then used to commit fraud against victims' acquaintances. This directly causes harm to people (financial and emotional) through deception, fulfilling the criteria for an AI Incident. The article also provides preventive advice, but the core event is the active use of AI in a fraud scheme causing harm.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Digital security

Affected stakeholders
General public

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Modus Telepon Hening Kuras Rekening, Kenali Ciri Modus Penipuan Baru [Silent-call scheme drains bank accounts: recognize the signs of this new fraud tactic]

2026-04-17
CNBC Indonesia
Warga diimbau waspadai penipuan melalui pencurian identitas suara [Residents urged to beware of fraud via voice identity theft]

2026-04-18
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology for voice cloning to commit fraud, which directly leads to harm to individuals by enabling identity theft and financial scams. The article explicitly mentions the use of AI for this purpose and provides concrete advice to the public to mitigate harm, indicating that the harm is occurring or has occurred. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to harm to people and communities.
Warga diimbau waspadai penipuan lewat pencurian identitas suara [Residents urged to beware of fraud via voice identity theft] - ANTARA News Banten

2026-04-18
Antara News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning technology) in a fraudulent scheme that could lead to harm (identity theft and financial scams). Since the article focuses on warning the public about this threat and advising preventive measures, and does not describe actual realized harm, it fits the definition of an AI Hazard. The AI system's use could plausibly lead to an AI Incident involving harm to persons or communities, but no incident is reported as having occurred yet.
Warga diimbau waspadai penipuan melalui pencurian identitas suara [Residents urged to beware of fraud via voice identity theft]

2026-04-19
Antara News Yogyakarta
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI technology used to clone voices from samples recorded during silent phone calls, a misuse of an AI system that enables identity theft and fraud. The article describes the resulting scams as already occurring and causing financial and psychological harm to individuals, while warning the public to be cautious. Hence, this fits the definition of an AI Incident rather than a hazard or complementary information.
Apa Itu Silent First, Waspada Modus Penipuan Baru Mencuri Identitas Suara [What is "silent first"? Beware of a new scam that steals voice identities]

2026-04-17
jabarekspres.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to clone voices from samples recorded by tricking victims into speaking during silent phone calls. The cloned voice is then used to commit fraud, a clear harm to individuals (harm to persons) and to communities (erosion of social trust and security). The article describes the modus operandi as already occurring, not merely a potential threat, so the harm is realized rather than plausible future harm. Hence, the classification is AI Incident.