AI-Generated Police Video Used in Major Fraud, Victim Loses NT$89 Million


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Scammers in Taipei used AI-generated video to convincingly simulate a police station during a video call, deceiving a bank VIP customer into surrendering account details and stealing NT$88.91 million. The AI technology was pivotal in making the fraudulent scheme believable, directly leading to significant financial harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The use of AI image generation technology to simulate a police station and create a convincing video call was pivotal in deceiving the victim, leading directly to significant financial harm. This meets the criteria of an AI Incident because the AI system's use in the scam directly caused harm to the victim's property (financial loss).[AI generated]
AI principles
Accountability, Transparency & explainability, Robustness & digital security, Safety, Respect of human rights

Industries
Financial and insurance services

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Old fake-prosecutor scam uses AI to simulate a "busy police station," swindling a bank VIP out of NT$88.91 million | Society | NOWnews今日新聞

2025-06-11
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The use of AI image generation technology to simulate a police station and create a convincing video call was pivotal in deceiving the victim, leading directly to significant financial harm. This meets the criteria of an AI Incident because the AI system's use in the scam directly caused harm to the victim's property (financial loss).

AI facial recognition: Yilan County guards against real-estate fraud | Life | Liberty Times Net

2025-06-10
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The AI facial recognition system is explicitly mentioned and is used in the process of identity verification to prevent real estate fraud, which is a form of financial crime harming property owners. However, the article does not report any incident where the AI system caused harm or malfunctioned. Instead, it is part of a government initiative to reduce fraud risks. Therefore, this event does not describe an AI Incident or AI Hazard but rather a deployment of AI technology aimed at harm prevention. It is not merely general AI news but a description of an AI system's use in a real-world context without reported harm or plausible future harm. Hence, it is best classified as Complementary Information.

Telecom operators spare no effort in using technology to help combat fraud | CNA News Platform

2025-06-11
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies in fraud detection and prevention by telecom operators, indicating the involvement of AI systems. The use of AI here is in the context of preventing harm (fraud crimes) rather than causing harm. There is no indication that the AI systems malfunctioned or caused any harm; instead, they have successfully blocked millions of fraudulent calls and messages. The article also discusses cross-industry cooperation and international collaboration to enhance these AI-driven measures. Since the article focuses on the use and effectiveness of AI systems in preventing fraud and does not describe any realized or potential harm caused by AI, it fits the definition of Complementary Information, providing context and updates on AI's role in societal harm prevention.

AI-synthesized police station footage! Woman defrauded of NT$90 million in video call with "fake prosecutors and police," retirement savings wiped out overnight | Society | SETN.COM

2025-06-10
三立新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the scammers used AI-generated technology to simulate a police station scene during a video call, which was pivotal in convincing the victim to comply with their demands. This AI involvement directly led to the loss of the victim's life savings, constituting harm to property and financial well-being. The use of AI-generated video content in a fraudulent context causing significant realized harm fits the definition of an AI Incident, as the AI system's use was integral to the scam's success and the resulting financial injury.

Anti-fraud measures working: Ministry of Digital Affairs uses AI to cut celebrity-impersonation scams to under 1,000 cases per week | UDN

2025-06-12
UDN
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in the active scanning and detection of fraudulent online advertisements impersonating public figures. The use of this AI system directly contributes to reducing the incidence of scams, which are a form of harm to individuals (harm to persons). Since the AI system's use has led to a reduction in realized harm (fraud attempts), this qualifies as an AI Incident where the AI system's use is part of harm prevention and mitigation. The article reports on the system's operational impact on reducing harm rather than just potential or future risk, so it is not a hazard or merely complementary information.

Ministry of Digital Affairs' Online Fraud Reporting and Inquiry website, launched at the end of last September, has confirmed 120,000 scam messages to date | Society | Newtalk新聞

2025-06-12
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it performs active scanning to detect suspected scam messages. The use of this AI system directly contributes to reducing harm to individuals by identifying and facilitating the removal of scam content, which constitutes harm to communities and individuals through fraud. Since the AI system's use has led to harm mitigation rather than harm occurrence, and the article focuses on the system's positive impact and operational details, this event is best classified as Complementary Information. It provides supporting context on AI's role in combating fraud but does not describe an AI Incident or AI Hazard itself.

Scam rings are now using AI! Taiwanese lost NT$2.26 billion in 7 days; experts reveal the scams' biggest giveaway: don't fall for it | Society | SETN.COM

2025-06-11
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves the malicious use of AI systems (deepfake technology) to impersonate authorities and commit fraud, which has directly led to significant financial harm to victims. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident as it causes harm to communities and individuals through deception and financial loss.

Ministry of Digital Affairs' AI catches fraud: celebrity-impersonation ads recently below 1,000 per week | UDN

2025-06-12
UDN
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used for active scanning of fraudulent ads impersonating public figures. The use of this AI system has directly led to the identification and removal of scam advertisements, preventing financial and reputational harm to individuals and the public. This fits the definition of an AI Incident because the AI system's use is directly linked to fraud, a form of harm to communities and individuals. The report focuses on the realized impact of the AI system in reducing scam ads, not just potential or future harm, nor is it merely complementary information about AI development or governance.

Ministry of Digital Affairs' AI catches fraud: celebrity-impersonation ads recently below 1,000 per week | Politics | Central News Agency (CNA)

2025-06-12
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI for active scanning of scam advertisements impersonating public figures, which are fraudulent and harmful to individuals and communities. The AI system's outputs have led to the identification and removal of scam ads, which are harmful content. Since the AI system's use is directly linked to the detection and reduction of realized harm from scams, this event meets the criteria for an AI Incident. It is not merely a potential risk (hazard) or a general update (complementary information), but an event where AI use has directly influenced harm mitigation.

Tzu Chi thanks Beigang police for their hard work on Police Day, pairing the visit with anti-fraud awareness outreach to protect property | yam News

2025-06-12
蕃新聞
Why's our monitor labelling this an incident or hazard?
The article mentions the use of AI technology by fraudsters to synthesize voices and images for scams, a misuse of AI systems that harms individuals' property and financial security. Since these scams are actively occurring and causing harm, this qualifies as an AI Incident under the definition of harm to property and communities due to AI misuse. The event reported is itself a preventive and educational response, but the core issue it describes is ongoing AI-enabled fraud causing harm.