AI Misuse and Fraud Prevention in China's Financial and Social Platforms


The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

In China, AI technologies have been misused for deepfake scams, including impersonating analysts and bypassing biometric authentication, causing financial losses. Conversely, platforms like MiLian Technology and Yiren Zhike deploy AI-driven risk control systems to prevent fraud, significantly reducing scam cases and protecting users' property and rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI-based data modeling and an AI intelligent pre-warning platform that analyzes data to identify potential victims of fraud and automatically blocks malicious network traffic. The AI system's use has directly led to a significant decrease in telecom fraud cases and has protected critical infrastructure from cyberattacks, which constitutes harm prevention and protection of property and communities. Since the AI system's use has directly led to realized harm reduction and protection, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but reports on concrete outcomes from AI deployment.[AI generated]
AI principles
Robustness & digital security
Privacy & data governance

Industries
Financial and insurance services
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


Ya'an, Sichuan: "Yazhou Cloud Police" builds a strong data security barrier for western Sichuan (四川雅安:"雅州云警"筑牢川西数据安全屏障)

2026-03-20
China News

Retail investors who don't want to be wiped out should join the "Li Xunlei Institutional Investment Group"? Sell-side heavyweight issues urgent statement (散户不想被消灭,就加入"李迅雷机构投资团"?卖方大佬紧急发声)

2026-03-20
每日经济新闻 (National Business Daily)
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to create fake videos of a known economist, which are then used to induce investors into scams involving large financial losses. The AI system's use in generating deceptive content directly leads to harm (financial loss) to individuals, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in enabling the scam's credibility and success. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

Since the start of this year, several well-known chief analysts have successively become "traffic-drawing" tools for fraud rings to amass money (今年以来 多名知名首席分析师接连成为诈骗团伙敛财的"引流"工具)

2026-03-20
东方财富网 (Eastmoney.com)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to manipulate videos of known analysts to deceive investors, leading to actual financial losses. This constitutes direct harm caused by the malicious use of AI systems (deepfake or synthetic media generation). Therefore, this event qualifies as an AI Incident due to realized harm resulting from AI misuse.

MiLian Technology explores innovative paths in intelligent risk control, precisely cracking down on "pig-butchering" scams and safeguarding users' property (米连科技探索智能风控创新路径,精准打击杀猪盘,守护用户财产安全)

2026-03-20
hea.china.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment of AI-powered risk control systems that have led to a significant reduction (80%) in scam cases on the platform, indicating realized harm prevention and mitigation. The AI system's role is pivotal in identifying and blocking fraudulent activities that would otherwise cause financial harm to users. This fits the definition of an AI Incident because the AI system's use directly leads to preventing injury to users' property and harm to communities by combating scams. The article does not merely discuss potential risks or future hazards but reports on actual AI system use resulting in harm reduction, thus qualifying as an AI Incident.

28 departments jointly issue document! Shanghai's action guide for productive ageing (28部门联合发文!上海老有所为行动指南)

2026-03-21
hot.online.sh.cn
Why's our monitor labelling this an incident or hazard?
The AI system mentioned is involved in providing anti-fraud guidance through an interactive interface using AI and big data models. There is no indication that the AI system malfunctioned or caused harm; rather, it is part of a preventative service. The article focuses on the deployment and features of this AI system as part of a broader social support framework for the elderly, without reporting any incident or hazard related to AI misuse or failure. Therefore, this is best classified as Complementary Information, as it provides context and details about AI-enabled services supporting societal goals without describing an AI Incident or AI Hazard.

2026-03-20
证券之星 (Stockstar)
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in the use phase, actively monitoring and preventing fraud, which directly protects individuals' financial property and rights. The event reports realized harm prevention and ongoing protection of financial consumers through AI technology, which aligns with harm to property and protection of consumer rights. Since the AI system's use has directly contributed to preventing fraud and protecting residents, this qualifies as an AI Incident rather than a hazard or complementary information. The event is not merely about AI development or potential risks but about actual deployment and impact in preventing harm.

Determining the nature of AI face-swapping online payment fraud (AI换脸网络盗刷行为的认定)

2026-03-20
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (face-swapping software) used to bypass biometric authentication, which directly enabled unauthorized access and financial fraud. The harm is realized (financial loss to the victim), and the AI system's misuse is pivotal in causing this harm. This fits the definition of an AI Incident because the AI system's use directly led to a violation of property rights and financial harm. The detailed legal analysis further confirms the nature of the harm and the role of AI in the incident.