AI Deepfake Scams Exploit Face-Swapping to Commit Fraud in Taiwan and Hong Kong

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminals in Taiwan and Hong Kong have used AI deepfake face-swapping technology to impersonate individuals, deceive facial recognition systems, and commit financial fraud and identity theft. Police have uncovered multiple cases, arrested suspects, and issued public warnings about the risks and detection methods for AI-driven scams.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (deepfake technology) in a malicious way to commit fraud, causing harm to individuals (financial and psychological harm). This meets the definition of an AI Incident because the AI system's use has directly led to harm through deception and scams. The article also provides information on detection methods, but the primary focus is on the ongoing harm caused by AI deepfake scams.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Financial and insurance services; Digital security; IT infrastructure and hosting; Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Economic/Property; Human or fundamental rights; Psychological; Reputational; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

AI Deepfake Face-Swap Scams: Criminal Investigation Bureau Teaches How to Spot Them

2023-08-26
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) in a malicious way to commit fraud, causing harm to individuals (financial and psychological harm). This meets the definition of an AI Incident because the AI system's use has directly led to harm through deception and scams. The article also provides information on detection methods, but the primary focus is on the ongoing harm caused by AI deepfake scams.
"Anomalies" in the Side Profile and Neck: Spotting Deepfake Scams in One Second

2023-08-26
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake (deepfake face-swapping) technology by criminal groups to impersonate friends, colleagues, or celebrities to scam people, which has led to actual financial harm. The AI system's use in generating realistic fake videos and voices directly contributes to the harm (fraud and financial loss). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to people (financial harm and deception).
AI Face-Swapping Used in Loan Fraud: Ringleader and Five Others Arrested | UDN

2023-08-26
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake face-swapping technology to deceive facial recognition systems, enabling fraudulent online financial transactions. This use of AI directly led to realized harm in the form of financial fraud and identity theft. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm to persons and property (financial assets).
AI Deepfake Scams Are Coming: Criminal Investigation Bureau Policewoman Demonstrates Face-Swapping on Video | Current Affairs | UDN Video

2023-08-26
UDN Video
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically AI deepfake technology used to generate realistic fake videos for fraudulent purposes. While no specific harm event is described as having already happened, the article warns that such AI-enabled scams will become more frequent and could plausibly lead to harm such as financial fraud and social disruption. Therefore, this qualifies as an AI Hazard because it describes a credible potential for AI misuse leading to harm, but does not report a realized AI Incident.
Seeing Is Not Believing! Criminal Investigation Bureau Teaches "This One Trick" to Instantly Expose AI Face-Swap Scams

2023-08-26
China Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake technology used to create fake videos and audio that scammers use to impersonate others and commit fraud, which constitutes harm to people and communities. The harm is realized as scams and misinformation are occurring. The police's advice is a complementary response but does not negate the fact that the AI system's use has caused harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI-generated deepfakes in fraudulent activities causing harm.
Criminals Use Deepfake Face-Swapping to Defraud; Criminal Investigation Bureau Offers Detection Methods | Society | Central News Agency (CNA)

2023-08-26
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake AI technology) used maliciously to impersonate others and commit fraud, which constitutes harm to individuals and communities. The article describes ongoing fraudulent use of AI deepfake technology leading to harm, thus qualifying as an AI Incident. The police's advice and public awareness efforts are responses to this incident but do not change the classification. Therefore, this is an AI Incident due to realized harm caused by AI misuse.
Seeing Is Not Believing: One Criminal Investigation Bureau Trick to Avoid AI Face-Swap Scams | yam News

2023-08-28
yam News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake technology being used in scams to impersonate others, causing harm by deceiving victims and potentially leading to financial or emotional damage. This fits the definition of an AI Incident because the AI system's use has directly led to harm (fraud and deception). The article also discusses detection and prevention measures, but the primary focus is on the realized harm caused by AI deepfake scams, not just potential harm or general information. Therefore, it qualifies as an AI Incident.
Beware of AI Face-Swap Video Calls "Impersonating Friends and Family": These Tricks Expose the Scam | yam News

2023-08-29
yam News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake technology (an AI system) to create realistic fake videos that are used in scams, which directly leads to harm by deceiving victims and potentially causing financial loss and social harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (fraud and deception).
Preventing AI Deepfake Face-Swap Scams: Police Suggest Asking the Caller to Wave a Hand in Front of Their Face to Verify Authenticity

2023-08-27
Public Television Service (PTS)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake technology (an AI system) by criminals to impersonate others and commit fraud, which constitutes harm to individuals (harm to persons through deception and financial scams). Since the harm has already occurred (criminal cases have been uncovered), this qualifies as an AI Incident. The article also includes preventive advice, but the primary focus is on the realized harm caused by AI deepfake misuse.
Don't Answer Video Calls Carelessly! Seeing Isn't Always Believing: Scam Rings' New "AI Face-Swap Video Call" Tactic Revealed | Society | SETN.COM

2023-08-27
SETN
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI face-swapping technology by scammers to impersonate others in video calls, leading to fraud and financial harm to victims. This is a direct use of AI causing harm to people, fitting the definition of an AI Incident. The mention of law enforcement developing AI detection tools is complementary but does not change the primary classification. Therefore, the event is an AI Incident due to realized harm caused by malicious AI use.