AI-Manipulated Images Used to Bypass Facial Recognition in Bank Fraud Scheme in Japan


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A group in Japan used AI-powered apps to create manipulated or 3D images that bypassed facial recognition systems for online banking. This allowed them to fraudulently open bank accounts and secure loans, resulting in financial losses. Police arrested suspects and are investigating the broader criminal network.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system to generate a fake facial image that was used to deceive a bank's identity verification process, resulting in fraudulent account opening. This constitutes direct harm through fraud and violation of legal protections. Therefore, it meets the criteria of an AI Incident because the AI system's use directly led to harm (fraud and legal violations).[AI generated]
AI principles
Accountability
Robustness & digital security

Industries
Financial and insurance services

Affected stakeholders
Business

Harm types
Economic/Property

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


Fake photos slip through megabank identity verification; suspected "tokuryū" crime group abusing generative AI

2026-03-04
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a fake facial image that was used to deceive a bank's identity verification process, resulting in fraudulent account opening. This constitutes direct harm through fraud and violation of legal protections. Therefore, it meets the criteria of an AI Incident because the AI system's use directly led to harm (fraud and legal violations).

Facial recognition allegedly defeated through impersonation; man arrested on suspicion of fraudulently opening bank accounts

2026-03-04
神戸新聞
Why's our monitor labelling this an incident or hazard?
The event describes the use of manipulated images to bypass facial recognition, which is an AI system used for identity verification. The fraudulent use of AI-generated or AI-manipulated images directly led to illegal bank account openings and financial fraud, causing harm to property and violating legal protections. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's misuse in facial recognition.

Facial recognition allegedly defeated through impersonation; man arrested on suspicion of fraudulently opening bank accounts

2026-03-04
東京新聞 TOKYO Web
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of manipulated facial images to bypass an AI facial recognition system used for identity verification in online banking. The AI system's malfunction or circumvention directly led to financial fraud and identity theft, which constitute harm to property and violation of legal protections. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system in causing harm through fraudulent use.

Facial recognition allegedly defeated through impersonation; man arrested on suspicion of fraudulently opening bank accounts

2026-03-04
山陽新聞デジタル
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of manipulated facial images to defeat an AI-based facial recognition system used for identity verification in online banking. This manipulation directly led to fraudulent bank account openings and financial crimes, constituting realized harm. The AI system's malfunction or circumvention is central to the incident, fulfilling the criteria for an AI Incident due to direct harm (financial crime and identity fraud) caused by the AI system's use and its exploitation.

Facial recognition allegedly defeated through impersonation

2026-03-04
埼玉新聞
Why's our monitor labelling this an incident or hazard?
The article describes a case where a suspect used fake images to fraudulently pass facial recognition identity checks, enabling illegal financial activities. Facial recognition is an AI system, and its misuse here directly caused harm (fraud, violation of rights). Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through misuse.

Biometric authentication allegedly defeated by using an app to render images in 3D; man arrested on suspicion of fraudulently opening bank accounts (published March 5, 2026)

2026-03-05
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
The use of an app that converts images into 3D to defeat biometric authentication constitutes the use of an AI system or AI-enabled technology in committing fraud. This use directly led to harm, specifically financial harm through theft and violation of property rights. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through illegal activity.