AI Deepfake Fraud and Military Facial Recognition Payment Risks in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-powered deepfake technology has enabled criminals in China to bypass facial recognition systems, resulting in significant financial fraud and theft. Citing these security risks, a Chinese military unit halted the use of facial recognition payment systems to prevent potential leaks of sensitive information and exposure of military operations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of AI deepfake systems to create fake facial videos that have been used to successfully commit financial fraud, leading to theft of funds from victims' accounts. This constitutes direct harm to property and financial security (harm category d). The AI system's use in these crimes is central to the incidents described, fulfilling the criteria for an AI Incident. The article also references law enforcement actions and court rulings confirming these harms have occurred. Therefore, this event is classified as an AI Incident.[AI generated]
AI principles
Privacy & data governance · Robustness & digital security · Safety · Accountability · Respect of human rights · Transparency & explainability

Industries
Financial and insurance services · Government, security, and defence · Digital security

Affected stakeholders
Consumers · Government

Harm types
Economic/Property · Human or fundamental rights · Public interest · Reputational

Severity
AI incident

Business function
Citizen/customer service · ICT management and information security

AI system task
Content generation · Recognition/object detection


Articles about this incident or hazard


Halted! Military facial recognition payments could very likely leak secrets and even expose troop movements (叫停!军人刷脸支付极有可能造成失泄密,甚至暴露部队行动)

2022-10-30
81.cn
Why's our monitor labelling this an incident or hazard?
The facial recognition payment system is an AI system that processes biometric data to authenticate users. Its use in a military environment poses a credible risk of leaking sensitive information, such as soldiers' identities and operational details visible in the biometric scans. The article reports that the military has proactively stopped using this AI system to prevent such risks, indicating recognition of plausible future harm. Since no actual harm has occurred yet but the risk is significant and credible, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update but a concrete preventive measure against a plausible AI-related security risk.

Facial recognition payments for military personnel halted (军人刷脸支付,叫停)

2022-10-31
People's Daily Online – Military Channel (军事-人民网)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition payment) whose use posed a plausible risk of harm to military security and confidentiality. Although no direct harm occurred, the risk of sensitive information leakage through AI-enabled facial recognition was significant enough to warrant stopping the system's use. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving violations of security and confidentiality (a form of harm to communities and possibly human rights). The article focuses on the preventive action taken to avoid harm rather than describing an actual incident of harm.

If facial recognition is this easy to fool, could my money be stolen? (人脸识别这么好糊弄,我的钱会不会被盗刷?)

2022-10-27
36Kr (36氪)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI deepfake systems to create fake facial videos that have been used to successfully commit financial fraud, leading to theft of funds from victims' accounts. This constitutes direct harm to property and financial security (harm category d). The AI system's use in these crimes is central to the incidents described, fulfilling the criteria for an AI Incident. The article also references law enforcement actions and court rulings confirming these harms have occurred. Therefore, this event is classified as an AI Incident.

A nightmare for beauty-filter influencers? Tencent's new patent published (开美颜网红的噩梦?腾讯新专利公布)

2022-10-27
ifeng.com (Phoenix New Media) (凤凰网(凤凰新媒体))
Why's our monitor labelling this an incident or hazard?
The event involves the development of an AI system designed to detect synthetic mouth movements in images or videos. However, the article only reports the patent publication and the intended use of the system to reduce misinformation and fraud. There is no indication that the system has caused or directly led to any harm, nor that harm has occurred or is imminent. Instead, this is a development that could help mitigate AI-related harms in the future. Therefore, this event is best classified as Complementary Information, as it provides context and updates on AI system development aimed at addressing AI-related harms, without reporting an actual incident or hazard.

If facial recognition is this easy to fool, could my money be stolen? (人脸识别这么好糊弄,我的钱会不会被盗刷?)

2022-10-27
ifeng.com (Phoenix New Media) (凤凰网(凤凰新媒体))
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (deep learning-based deepfake technology and AI facial recognition systems) being used maliciously to impersonate individuals and bypass security measures, resulting in direct financial theft and fraud. These are clear harms to property and personal security, fulfilling the criteria for an AI Incident. The article also references actual arrests and court rulings, confirming that these harms have materialized. Hence, this is not a hypothetical risk or complementary information but a concrete AI Incident involving AI misuse causing harm.

Tencent's new patent can identify facial images with synthetic mouth movements, reducing fraud rates (腾讯新专利可识别合成嘴型人脸图像,降低诈骗率)

2022-10-27
Hexun (和讯网)
Why's our monitor labelling this an incident or hazard?
The patent involves an AI system that processes facial and audio data to identify synthetic mouth movements, which are often used in deepfake videos for scams or misinformation. The technology aims to reduce fraud and rumour-spreading by detecting such synthetic content. Although the patent itself is a development rather than an incident of harm, it addresses a significant AI-related harm (fraud, misinformation) and its mitigation. Since no actual harm or incident is reported and the technology is intended to prevent harm, this qualifies as Complementary Information about AI system development and societal responses to AI-related harms.