AI Misuse Leads to Biometric Data Leaks and Identity Fraud in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

On a Chinese TV show, experts demonstrated how AI can extract fingerprint data from close-range photos and use facial and voice information for deepfake identity fraud. Victims' biometric data was exploited by criminals for impersonation and financial scams, highlighting significant privacy and security risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI being used to illegally capture facial information and to perform AI face swapping and voice synthesis for identity forgery, which constitutes a violation of personal rights and privacy. The extraction of fingerprint data from photos also implies misuse of AI-enabled image processing. These harms have occurred or are occurring, so the event qualifies as an AI Incident involving rights violations and privacy harm caused by AI misuse.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security
Financial and insurance services

Affected stakeholders
General public

Harm types
Human or fundamental rights
Economic/Property

Severity
AI incident

AI system task
Recognition/object detection
Content generation


Articles about this incident or hazard


People who like making the "V sign" in photos, take note: your fingerprint data may be leaked

2026-04-29
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to illegally capture facial information and to perform AI face swapping and voice synthesis for identity forgery, which constitutes a violation of personal rights and privacy. The extraction of fingerprint data from photos also implies misuse of AI-enabled image processing. These harms have occurred or are occurring, so the event qualifies as an AI Incident involving rights violations and privacy harm caused by AI misuse.

People who like making the "V sign" in photos, take note: your fingerprint data may be leaked

2026-04-30
千龙网 (Qianlong.com)
Why's our monitor labelling this an incident or hazard?
While AI systems are involved in the misuse of facial and voice data for identity fraud, the article does not describe a concrete AI Incident where harm has already occurred. Instead, it warns about potential privacy risks and advises caution, which aligns with raising awareness of plausible future harms. Therefore, this is best classified as Complementary Information, providing context and expert advice on AI-related privacy risks without reporting a specific AI Incident or Hazard.

Caution! Be careful with the "V sign" in photos. Experts warn: fingerprints can be extracted from photos taken within 3 metres, and high-resolution photos of hands must be blurred before posting

2026-04-30
金羊网 (Jinyang Net)
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-based face swapping and voice synthesis used by criminals to impersonate individuals, a direct violation of personal rights and privacy that fits the definition of an AI Incident. The extraction of fingerprints from close-range photos also involves AI or advanced algorithms processing biometric data, leading to privacy harm. Both harms have occurred or are ongoing, not merely potential, so this is an AI Incident rather than a hazard or complementary information.

Careful! Making the "V sign" in photos taken within 1.5 metres can leak fingerprint data

2026-04-29
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos that convincingly impersonate a person to request money fraudulently, a direct harm to property and personal security. The use of an AI system to create deepfake videos from leaked biometric data (fingerprints, face, voice) is central to the harm described. This therefore qualifies as an AI Incident because the AI system's use directly led to harm through fraudulent impersonation and potential financial loss.

People who like making the "V sign" in photos, take note! Your fingerprint data may be leaked. Experts: fingerprint data can be extracted from photos taken within 1.5 metres if the lens is aimed directly at the fingers

2026-04-29
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled extraction of biometric data (fingerprints) from photos and AI-based face and voice forgery by malicious actors, which have directly led to privacy violations and identity fraud. These constitute violations of human rights and privacy, fitting the definition of an AI Incident. The harms are realized or ongoing, not merely potential, and the use of AI systems is central to them.