AI Face-Swapping Used to Bypass Facial Recognition and Commit Financial Fraud in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A man in Nanjing illegally purchased 1.95 million records of citizens' personal information and used AI face-swapping software to bypass facial recognition on a financial platform, accessing 23 victims' accounts and stealing 15,996 yuan. He was sentenced to four years and six months in prison, highlighting security risks in AI-driven authentication systems.[AI generated]
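The incident shows why a face-match score alone is a weak authentication signal: a convincing face swap can satisfy the matcher while the capture itself is fake. The sketch below is a minimal illustration of the class of defense discussed in the follow-up coverage, gating the match decision behind a separate liveness (presentation-attack-detection) score. All names and thresholds here are hypothetical, not taken from any real platform.

```python
from dataclasses import dataclass

# Hypothetical thresholds for illustration only; real systems tune
# these against presentation-attack evaluation datasets.
MATCH_THRESHOLD = 0.80
LIVENESS_THRESHOLD = 0.90

@dataclass
class AuthSample:
    face_match_score: float  # similarity between probe and enrolled face
    liveness_score: float    # confidence the probe is a live capture

def authenticate(sample: AuthSample) -> bool:
    """Accept a probe only if it matches the enrolled face AND passes
    a liveness check. A face-swapped video can push face_match_score
    past the threshold, so the liveness gate is checked first."""
    if sample.liveness_score < LIVENESS_THRESHOLD:
        return False  # likely a replayed or injected (e.g. face-swapped) feed
    return sample.face_match_score >= MATCH_THRESHOLD

# A convincing face swap: high match score, low liveness confidence.
swap_attack = AuthSample(face_match_score=0.95, liveness_score=0.30)
genuine = AuthSample(face_match_score=0.92, liveness_score=0.97)

print(authenticate(swap_attack))  # False: rejected by the liveness gate
print(authenticate(genuine))      # True: passes both checks
```

The point of the two-gate design is that the attacker in this case never had to defeat a liveness check at all; a matcher-only pipeline treats any sufficiently similar face image as the account holder.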

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (AI face-swapping software) used to defeat facial recognition security, leading to unauthorized access to victims' financial accounts and financial losses. This constitutes direct harm to persons and violations of their rights. The case was prosecuted and resulted in criminal penalties, confirming the harm occurred. The AI system's use was central to the incident, fulfilling the criteria for an AI Incident. The article also discusses responses and improvements, but the primary focus is on the realized harm caused by the AI misuse.[AI generated]
AI principles
Privacy & data governance; Robustness & digital security; Respect of human rights; Accountability

Industries
Financial and insurance services; Digital security

Affected stakeholders
Consumers; General public

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

Business function
ICT management and information security

AI system task
Content generation; Recognition/object detection

In other databases

Articles about this incident or hazard

Can "AI face-swapping" bypass the facial recognition defense line?

2025-07-20
big5.cctv.com
Man buys 1.95 million personal information records and steals over 10,000 yuan; sentenced for using AI face-swapping to raid others' accounts

2025-07-18
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI face-swapping software to impersonate victims and access their financial accounts, leading to theft and privacy violations. This constitutes direct harm to individuals' property and rights. The AI system's use in this criminal activity directly caused the harm, fulfilling the criteria for an AI Incident.
Man uses "AI face-swapping" to log into 23 people's accounts and fraudulently charge their bank cards, sentenced to 4.5 years!

2025-07-18
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI face-swapping software) in the commission of fraud, directly causing harm to individuals through unauthorized access and theft from their financial accounts. This constitutes a violation of personal rights and financial harm, fitting the definition of an AI Incident. The AI system's use was central to the harm, as it enabled bypassing facial recognition security measures. Therefore, this is classified as an AI Incident.
Man uses "AI face-swapping" to log into 23 people's accounts and fraudulently charge their bank cards, sentenced to 4.5 years

2025-07-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI face-swapping software) to commit fraud by impersonating victims through facial recognition systems, leading to direct financial harm and violation of personal information rights. The AI system's use was pivotal in enabling unauthorized access and theft. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in criminal activity.
Man sentenced to four and a half years for raiding others' accounts with AI face-swapping

2025-07-19
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (AI face-swapping software) used to circumvent biometric security measures, resulting in unauthorized account access and fraudulent financial transactions. This caused direct harm to individuals' property and violated their personal information rights. The involvement of AI in the commission of credit card fraud and identity theft meets the criteria for an AI Incident, as the AI system's use directly led to harm (financial loss and privacy violations).
Can "AI face-swapping" bypass the facial recognition defense line?

2025-07-19
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI face-swapping software to impersonate victims and bypass facial recognition authentication, resulting in unauthorized access to financial accounts and theft. This constitutes direct harm to individuals' property and violation of their rights. The exploitation of the AI system was pivotal in enabling the fraud. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm (financial theft and privacy breaches).
Man buys over 1.95 million personal information records, uses AI face-swapping to log into 23 people's accounts and fraudulently charge bank cards; sentenced to four years and six months

2025-07-20
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI face-swapping software) to commit fraud by impersonating individuals through biometric authentication methods (face recognition). The AI system's use directly led to harm, specifically financial theft and violation of personal data rights. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm to individuals' property and violated their rights.
CCTV exposes new AI face-swapping fraud case: bank cards of 23 account holders fraudulently charged

2025-07-21
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The use of AI face-swapping software to impersonate victims and bypass facial recognition authentication directly caused financial theft and harm to the victims. This constitutes an AI Incident because the AI system's use led directly to violations of property rights and financial harm. The event involves the use and misuse of an AI system, resulting in realized harm to people (financial loss).
CCTV exposes new AI face-swapping fraud case: bank cards of 23 account holders fraudulently charged

2025-07-21
police.news.sohu.com
Why's our monitor labelling this an incident or hazard?
The use of AI face-swapping software to impersonate victims and bypass facial recognition authentication directly caused financial theft and harm to the victims. This constitutes a violation of rights and harm to persons through the malicious use of an AI system. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harm (financial fraud and theft).
Man buys over 950,000 citizens' information records online for AI face-swapping

2025-07-20
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI face-swapping software) to commit fraud by bypassing facial recognition security on financial platforms. The AI system's use directly led to harm, specifically financial theft and violation of individuals' rights to privacy and property. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm to persons and property through fraudulent activities.
Inner Mongolia strengthens management of facial recognition technology use

2025-07-22
光明网
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) and addresses problems related to its misuse and potential harms to individuals' privacy and rights. However, the article describes a regulatory response and preventive action rather than a specific incident where harm has already occurred. There is no direct report of realized harm or injury, but rather a focus on mitigating risks and enforcing compliance to prevent violations. Therefore, this event is best classified as Complementary Information, as it provides important context on governance and societal responses to AI-related risks without describing a concrete AI Incident or an imminent AI Hazard.
Scenic area's mandatory face scans target VIP members: to enter with an annual pass, first hand over your face

2025-07-21
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used for identity verification. The use is mandatory for VIP annual card holders, with refusal leading to denial of service, constituting coercion. The collection and processing of sensitive biometric data without proper informed consent and transparency breaches legal protections under personal information and biometric data laws. This misuse directly harms individuals' rights to privacy and data protection, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations. The event also highlights systemic issues and regulatory responses, but the primary focus is on the realized harm from forced AI use and data mishandling, not just potential or complementary information.