AI-Generated Deepfake Videos Used for Large-Scale Identity Fraud in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Qingdao, China, police dismantled a criminal network that used AI to create over 50,000 dynamic deepfake face videos, enabling fraudsters to bypass facial recognition for fake account registrations and scams. The operation involved illegal trading of personal data and resulted in significant financial and privacy harm to individuals.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems used to generate dynamic face videos that deceive facial recognition systems, facilitating identity fraud and scams. This use of AI directly caused harm by enabling criminal fraud and violating personal data rights. It therefore meets the definition of an AI incident: the AI system's use directly led to significant harm to individuals and communities through fraud and privacy violations.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security
Financial and insurance services

Affected stakeholders
Consumers

Harm types
Economic/Property
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


A complete set of personal information for 20–30 yuan?! Cracking down on the "AI face-swap" black-and-grey market

2026-03-17
China News

Police seize more than 50,000 synthetic dynamic face videos: who is "copying" your face?

2026-03-16
华龙网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate synthetic dynamic face videos that deceive facial recognition systems, which constitutes clear AI system involvement. The AI-generated videos directly enabled fraudulent account creation and subsequent scams, violating individuals' rights and causing financial and privacy harm. This therefore qualifies as an AI incident: the AI system's use directly led to significant harm (fraud, privacy violations, and harm to property).

More than 50,000 synthetic dynamic face videos seized, priced at 20 yuan each

2026-03-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology to create dynamic synthetic face videos that deceive facial recognition systems, facilitating fraudulent activities and scams. This directly caused harm to individuals through identity theft and financial fraud, fulfilling the criteria for an AI Incident. The involvement of AI in the synthesis of videos that enabled the crime is central to the harm described.

More than 50,000 synthetic dynamic faces seized: the telecom-fraud black-and-grey industry chain revealed

2026-03-16
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to synthesize dynamic face videos that bypass facial recognition, enabling fraud and identity theft. This directly leads to violations of personal rights and facilitates criminal scams, which are harms under the AI Incident definition. The involvement of AI in the creation of these synthetic videos and their use in illegal activities confirms the presence of an AI system causing direct harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

More than 50,000 synthetic dynamic face videos seized: telecom-fraud black-and-grey industry chain exposed

2026-03-16
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to synthesize dynamic face videos that deceive facial recognition systems, enabling fraudulent activities. This use of AI directly led to harm by facilitating scams and identity theft, which are violations of personal rights and cause financial and reputational damage to victims. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing realized harm through criminal misuse.

How can the facial-recognition defense line be breached by "one photo + AI"?

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated synthetic facial videos to commit fraud, which directly led to harm to individuals' financial assets and privacy rights. The AI system's use in generating fake biometric data to bypass security is a clear example of AI misuse causing violations of rights and harm to property. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to realized harm.

A complete set of personal information for 20–30 yuan?! Cracking down on the "AI face-swap" black-and-grey market

2026-03-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to synthesize dynamic face videos that can deceive facial recognition systems, enabling fraudulent real-name authentication. This AI-enabled fraud leads to violations of personal data rights and facilitates scams, which are harms to individuals and communities. The AI system's use is a direct contributing factor to these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Police crack "face-swap case"! Criminals bought and sold citizens' ID photos, registered accounts with payment functions, then resold them at high prices

2026-03-17
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to synthesize dynamic face videos from stolen photos, which are then used to circumvent identity verification systems on social platforms. This AI-enabled forgery facilitates illegal account creation that leads to financial fraud and privacy violations, harming individuals and communities. The AI system's role is pivotal in enabling these harms, meeting the criteria for an AI Incident as the AI use directly leads to realized harm (privacy breaches, fraud, and illegal activities).