Allo Bank Partners with ADVANCE.AI to Combat Deepfake Fraud in Banking

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Allo Bank has partnered with ADVANCE.AI to strengthen digital security against deepfake-based fraud, which has caused significant financial losses in Indonesia's banking sector. The collaboration deploys AI-driven biometric verification and fraud detection to counter deepfake attacks targeting digital onboarding and eKYC processes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use and misuse of AI systems (deepfake technology) that have directly caused financial harm to the banking sector, fulfilling the criteria for an AI Incident. The AI system's outputs (synthetic faces, voices, documents) are used maliciously to commit fraud, leading to significant monetary losses. This constitutes harm to property and financial assets, which is a recognized harm category. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance, Robustness & digital security, Safety, Accountability, Respect of human rights, Transparency & explainability

Industries
Financial and insurance services, Digital security

Harm types
Economic/Property, Reputational, Human or fundamental rights

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard

Indonesia's Financial Sector Loses More Than Rp 700 Billion to Deepfakes

2025-07-17
detikinet
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (deepfake technology) that have directly caused financial harm to the banking sector, fulfilling the criteria for an AI Incident. The AI system's outputs (synthetic faces, voices, documents) are used maliciously to commit fraud, leading to significant monetary losses. This constitutes harm to property and financial assets, which is a recognized harm category. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.

Beware: These Types of Deepfakes Are Targeting Your Bank Account

2025-07-17
detikinet
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (deepfake technology including voice cloning and video manipulation) being used to commit fraud that results in financial loss and identity theft. The harms are realized and ongoing, affecting individuals and the banking community. The AI systems' use is central to the incident, as the fraud depends on the AI-generated fake voices and videos to deceive victims. This meets the definition of an AI Incident because the AI system's use has directly led to harm to persons and property (financial loss).

Amid Rampant Deepfake Fraud Schemes, Allo Bank (BBHI) Partners with ADVANCE.AI

2025-07-17
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: deepfake technology (AI-generated synthetic media) used maliciously to commit fraud, causing direct financial harm to banks and customers. The article documents realized harm (financial losses) and the use of AI-based detection systems as a response. The fraud is ongoing and has caused significant damage, meeting the criteria for an AI Incident. The AI system's use in fraud is a direct cause of harm, and the article does not merely discuss potential or future risks but actual losses and attacks. Hence, it is not a hazard or complementary information but an incident.

Allo Bank Partners with ADVANCE.AI to Counter Deepfake Threats in the Banking Sector

2025-07-17
Tempo Media
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (biometric verification with face comparison, liveness detection, behavior analysis) to counteract deepfake threats, which are a form of AI-generated digital fraud. However, the article describes the deployment of these AI systems as a preventive security measure rather than reporting an actual harm or incident caused by AI malfunction or misuse. There is no indication that an AI system has caused harm or that a harm event has occurred due to AI. Instead, the article focuses on the development and use of AI to prevent potential harms from deepfake attacks. Therefore, this is best classified as Complementary Information, providing context on societal and technical responses to AI-related threats in banking.

Allo Bank Partners with ADVANCE.AI to Anticipate Deepfake Attacks

2025-07-17
Antara News Mataram
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (deepfake technology) causing real harm (financial losses due to fraud) in the banking sector, which qualifies as an AI Incident. However, the article's main focus is on the partnership and the deployment of AI-based security measures to counteract these harms, representing a governance and technical response. There is no description of a new or specific AI Incident event occurring within this report, nor is it solely about potential future harm. Hence, it fits the definition of Complementary Information, providing updates on responses to an existing AI Incident context.

Allo Bank Partners with ADVANCE.AI to Anticipate Deepfake Attacks

2025-07-17
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically deepfake technology used maliciously for fraud and AI-based detection systems for security. However, it does not describe an actual AI Incident where harm has occurred due to AI system malfunction or misuse. Instead, it highlights the plausible risk of harm from deepfake technology and the mitigation strategies being implemented. Therefore, it fits the definition of Complementary Information as it provides context on AI-related threats and responses in the banking sector without reporting a new incident or hazard.

Allo Bank and ADVANCE.AI Collaboration Strengthens Digital Bank Security Against Deepfake Threats

2025-07-17
VOI
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (deepfake generation and detection AI) directly linked to financial fraud causing significant monetary harm to individuals and institutions. The harm is realized (financial losses due to deepfake fraud), and the AI system's role is pivotal both in the perpetration of fraud and in the mitigation efforts. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (financial losses) and the event focuses on addressing this harm.

The Dangers of Deepfakes for Banking: These Are the Parties Certain to Be Affected

2025-07-17
VOI
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that manipulates visual and audio data to create realistic but fake content. The harms described—identity theft, financial fraud, reputational damage, and misinformation—are direct consequences of the use and misuse of this AI system. Since these harms are occurring and affecting individuals, banks, and communities, this qualifies as an AI Incident. The mention of detection models is a complementary detail but does not negate the presence of realized harm.

Allo Bank Partners with Advance.ai to Anticipate Deepfake Attacks - Beritaja

2025-07-17
Beritaja.com
Why's our monitor labelling this an incident or hazard?
Deepfake technology, which uses AI to create realistic fake videos and audio, is explicitly mentioned as causing significant financial fraud harm in the banking sector, fulfilling the criteria for harm to property and communities. The article states that these harms have already occurred, with losses exceeding Rp700 billion. The partnership with ADVANCE.AI involves AI systems for biometric verification and fraud detection to mitigate these harms. Therefore, the event involves AI system use leading directly or indirectly to harm (fraud losses) and responses to it. This fits the definition of an AI Incident, as the AI system's development and use have directly led to harm, and the article centers on this harm and mitigation efforts. It is not merely a hazard or complementary information, as the harm is realized and the AI system's role is pivotal.