Tainan Pilots AI Facial Recognition at ATMs to Deter Fraud


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Tainan City is launching a pilot program using AI facial recognition at ATMs to deter fraud by requiring users to remove masks and helmets before cash withdrawal. The system, adapted from COVID-19 tech, triggers warnings if a full face isn’t detected, though legal concerns about its enforcement remain under discussion.[AI generated]
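The reports describe only the trigger condition: a warning fires when the camera does not see a full, uncovered face. A minimal sketch of that decision step follows — purely illustrative, assuming a hypothetical detector output; the `FaceObservation` fields, thresholds, and function names are not the deployed system's API.

```python
from dataclasses import dataclass


@dataclass
class FaceObservation:
    """Hypothetical per-frame output of a face-analysis model."""
    face_detected: bool
    mask_likelihood: float    # 0.0-1.0, model's confidence the face is masked
    helmet_likelihood: float  # 0.0-1.0, model's confidence a helmet is worn


def should_trigger_warning(obs: FaceObservation,
                           mask_threshold: float = 0.5,
                           helmet_threshold: float = 0.5) -> bool:
    """Return True when the ATM should warn the user to uncover their face.

    A warning fires when no face is found at all, or when the model judges
    the face to be covered by a mask or helmet above a chosen threshold.
    """
    if not obs.face_detected:
        return True
    return (obs.mask_likelihood >= mask_threshold
            or obs.helmet_likelihood >= helmet_threshold)
```

In a sketch like this, the thresholds trade false alarms (a legitimate customer flagged) against missed detections (a covered face admitted) — the legal concerns the articles raise apply mostly to the false-alarm side.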

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved: the system uses AI facial recognition to identify potential fraudsters at ATMs. Its use is intended to prevent harm (financial fraud) by stopping fraudulent withdrawals. Because the article describes a planned trial and no actual harm or incident has yet occurred, this qualifies as an AI Hazard: the system's deployment could plausibly prevent an AI Incident (fraud-related harm). It is not Complementary Information, because the article covers a new AI system deployment rather than updates or responses to past incidents, and it is not an AI Incident, because no harm has yet occurred through the AI system's malfunction or use.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Fairness; Transparency & explainability; Accountability

Industries
Government, security, and defence; Financial and insurance services; Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights; Psychological; Economic/Property

Severity
AI hazard

Business function
ICT management and information security; Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard


Nation's First AI Facial Recognition to Block Fraud: Tainan ATM Withdrawals Require "Removing Masks" - Society - Liberty Times Net

2025-04-15
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the article details the use of AI facial recognition technology to identify individuals at ATMs and prevent fraud. The AI system's use is intended to directly prevent harm to people by stopping fraudulent financial transactions, which constitute harm to property and potentially to individuals' financial security. Since the system is actively used to prevent realized fraud incidents, this qualifies as an AI Incident due to the direct involvement of AI in preventing harm related to fraud. The article describes the system's deployment and intended use to block fraud, which is a realized harm scenario rather than a mere potential risk or complementary information.

Tainan Fights Fraud: From May, ATM Alarm to Sound in Trial When Full Face Not Shown

2025-04-15
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in the use of facial recognition technology to detect masked or obscured faces at ATMs. The system's use is intended to prevent fraud by identifying potential scam 'carriers' withdrawing money. While no actual harm is reported yet, the AI system's deployment aims to reduce fraud-related harm, which is a form of harm to property and communities. Since the system is being trialed and no harm has yet occurred, but the AI use could plausibly prevent or lead to harm, this qualifies as Complementary Information about an AI system deployment and its preventive role, rather than an AI Incident or Hazard. The article focuses on the trial and intended deterrent effect, not on an incident or a hazard event causing or risking harm.

Stop Fraud! ATM Alarm Sounds When "Full Face Not Shown"; Tainan Trial Starts Next Month

2025-04-15
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the ATM camera uses AI to detect whether the user's full face is visible. The system's use is aimed at preventing fraud, which is a form of harm to individuals (financial harm) caused by criminal activity. The AI system's deployment directly contributes to harm prevention by detecting suspicious behavior and triggering alarms. Since the system is being trialed and no harm from malfunction or misuse is reported, but the system's use is directly linked to preventing harm, this qualifies as an AI Incident because the AI system's use is directly related to preventing harm from fraud, which is a recognized harm to persons. The event is not merely a product launch or general AI news, but a concrete deployment of an AI system with direct implications for harm prevention.

Tainan Asks ATM Users to Remove Masks; Lawyer Cautions That a Legal Basis Is Needed | 聯合新聞網

2025-04-15
聯合新聞網
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition) actively used to detect masked faces at ATMs, triggering alarms to prevent fraud, which is a direct harm prevention measure. The AI system's use is linked to preventing financial harm to people, fulfilling the harm criterion. The article also discusses legal and privacy concerns, but these do not negate the fact that the AI system is currently deployed and influencing outcomes. Therefore, this is an AI Incident due to the direct use of AI causing or preventing harm, with legal and rights issues involved.

Nation's First AI Facial Recognition to Block Fraud: Tainan ATM Withdrawals Require "Removing Masks" (footage provided by the Tainan City Government) - 自由電子報影音頻道

2025-04-15
video.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: the system uses AI facial recognition to identify potential fraudsters at ATMs. Its use is intended to prevent harm (financial fraud) by stopping fraudulent withdrawals. Because the article describes a planned trial and no actual harm or incident has yet occurred, this qualifies as an AI Hazard: the system's deployment could plausibly prevent an AI Incident (fraud-related harm). It is not Complementary Information, because the article covers a new AI system deployment rather than updates or responses to past incidents, and it is not an AI Incident, because no harm has yet occurred through the AI system's malfunction or use.

Tainan Fights Fraud: From May, ATM Alarm to Sound in Trial When Full Face Not Shown | 聯合新聞網

2025-04-15
聯合新聞網
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, performing facial recognition to identify whether ATM users show their full face. The system's use is intended to prevent fraud, which is a form of harm to property and communities. However, the article describes a planned trial starting in May, with no actual harm or incident reported yet. The system's deployment is a preventive measure, and the harm is potential if fraud occurs without such measures. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to preventing or detecting harm, but no harm has yet occurred or been reported.

First AI Facial Recognition to Block Fraud: Tainan ATM Withdrawals Require Removing Masks - Society - Liberty Times Net

2025-04-15
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved (AI facial recognition). The system is intended to prevent fraud, which is a form of harm to individuals (harm to persons through financial fraud). Although the system is not yet deployed but planned to be trialed next month, the article implies a credible risk of fraud that the AI system aims to prevent. Since the harm is not yet realized but the AI system's use could plausibly prevent or lead to harm, this qualifies as an AI Hazard rather than an Incident or Complementary Information.

Fraud Prevention! From May, Tainan ATM Withdrawals "Require Removing Masks"; Alarm Sounds If Face Not Shown

2025-04-16
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system (facial recognition) is explicitly involved in monitoring ATM users. The system's use is aimed at preventing fraud, which is a form of financial harm to people. The article describes the system's active use to prevent ongoing fraud, which is a realized harm scenario. Therefore, this qualifies as an AI Incident because the AI system's use directly relates to preventing harm (fraud) that has already occurred at scale, and the system is deployed to mitigate this harm. The event is not merely a potential risk (hazard) or a general update (complementary information), but an active intervention addressing an existing harm problem.

ATM Alarm Blares When Full Face Not Shown During Withdrawal; This City to Trial First in May

2025-04-16
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used for facial recognition to identify users at ATMs. The system's use is intended to prevent fraud by identifying masked individuals who may be involved in criminal activity. Although no actual harm is reported yet, the system's deployment aims to reduce fraud-related harm by enabling police to identify suspects and deter criminal behavior. Since the article describes a planned trial and the system's use could plausibly lead to preventing or causing harm (e.g., privacy concerns or false alarms), but no harm has yet occurred, this qualifies as an AI Hazard rather than an Incident. The focus is on the potential for harm prevention and deterrence, not on realized harm or ongoing incidents.

ATM Alarm Blares When "Full Face Not Shown" During Withdrawal! This City to Trial from May

2025-04-16
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, used for facial recognition to detect masked or helmeted individuals at ATMs. The system's use is intended to prevent fraud by identifying and deterring money mules, which is a form of harm related to crime and financial fraud. However, the article describes a planned trial starting in the future, with no actual harm or incident reported yet. The AI system's deployment could plausibly lead to harm prevention but does not describe any realized harm or malfunction. Therefore, this event is best classified as an AI Hazard, as it involves the use of AI that could plausibly lead to preventing harm related to fraud, but no incident has yet occurred.

Fighting Fraud! ATM Alarm Blares When "Full Face Not Shown" During Withdrawal; Tainan to Trial from May

2025-04-16
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system (facial recognition) is explicitly mentioned as being used in ATM areas to detect incomplete facial exposure. The system's use is intended to prevent fraud, which is a form of crime and harm to individuals and communities. However, the article describes the deployment and intended use of the AI system as a preventive measure, with no indication that any harm has yet occurred due to malfunction or misuse of the AI system itself. Therefore, this event represents a plausible future risk mitigation measure rather than an incident or hazard. It is primarily an update on governance and societal response to AI use in crime prevention, enhancing understanding of AI deployment in public safety contexts.

Government Plans Warning System Banning Masked ATM Withdrawals

2025-04-17
on.cc東網
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as part of the facial recognition warning system designed to detect masked individuals at ATMs. The system's use is intended to prevent fraud-related harm by identifying potential criminal activity. Although the system is not yet deployed and no harm has occurred, the article indicates a credible potential for the AI system to prevent harm related to fraud. Since the system is still in the planning or trial phase and no actual harm or incident has been reported, this qualifies as an AI Hazard rather than an AI Incident. The event involves the use of AI with plausible future harm prevention, fitting the definition of an AI Hazard.

New ATM Rule Takes Effect in May! In This City, Skipping One Step When Withdrawing Cash Sets Off the Alarm; Netizens Split into Two Camps | Life | NOWnews今日新聞

2025-04-17
NOWnews今日新聞
Why's our monitor labelling this an incident or hazard?
An AI system (facial recognition) is explicitly involved in the use phase to prevent fraud at ATMs. The system's deployment aims to reduce harm (fraud losses) but no actual harm or incident caused by the AI system is reported. The concerns about privacy and freedom are societal reactions rather than direct harms caused by the AI. Since the system is newly implemented and no incident of harm or malfunction is described, this qualifies as an AI Hazard due to the plausible future impact on fraud prevention and privacy issues. It is not Complementary Information because the article focuses on the new AI system's deployment and its potential effects, not on updates or responses to past incidents.

Tainan Pushes Full-Face Rule for ATM Withdrawals; Lawyer Warns Police and Bank Tellers Could Face Lawsuits | 聯合新聞網

2025-04-17
聯合新聞網
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition at ATMs) used by the government to detect masked faces to prevent fraud. The article focuses on the legal and rights issues arising from this use, including potential lawsuits and privacy violations, but does not report any actual harm or incident caused by the AI system. The concerns are about possible future harms and legal challenges if the policy is implemented as described. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms such as rights violations and legal disputes, but no incident has yet occurred.

ATM Alarm Blares When "Full Face Not Shown" During Withdrawal; One City to Trial from May

2025-04-16
中時新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI facial recognition system to detect incomplete facial exposure at ATMs and trigger alerts. This involves AI system use but does not report any injury, rights violation, or other harm caused by the AI system itself. The AI system is used as a tool to prevent fraud and assist law enforcement, which is a governance response to an existing societal problem. There is no indication of malfunction, misuse, or plausible future harm caused by the AI system. Hence, the event is best classified as Complementary Information, describing a societal/governance response involving AI to address fraud prevention.

Tainan Goes All Out Against Fraud! ATM Withdrawals Without Showing One's Face Will Trigger an Alarm - 台視新聞網

2025-04-16
台視新聞網
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, performing facial recognition to detect whether users comply with the requirement to reveal their full face. The system's use is aimed at preventing fraud, which is a form of harm to individuals (financial harm). The AI system's operation directly contributes to reducing this harm by identifying potential fraudsters. Since the article describes the system's active use to prevent harm, this qualifies as an AI Incident under the definition of harm to persons or communities through fraud prevention.

Tainan City First to Require Removing Helmets and Masks for Withdrawals; Huang Wei-che: A Deterrent to Money Mules

2025-04-15
聯合影音
Why's our monitor labelling this an incident or hazard?
An AI system (facial recognition) is explicitly involved in the use phase to detect masked or helmeted individuals at ATMs. Its use directly contributes to harm prevention by deterring fraudsters and aiding law enforcement. Since the AI system's deployment is actively preventing harm (fraud), and the event describes the system's use to reduce crime, this qualifies as an AI Incident due to its direct role in preventing harm to individuals and communities from fraud. The harm here is the fraud the AI system helps to prevent, and the system's use is directly linked to reducing that harm.

Attention! ATM Alarm Blares When "Full Face Not Shown" During Withdrawal; This City to Trial from May | Life Focus | Top News | NOWnews今日新聞

2025-04-16
NOWnews今日新聞
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being trialed for facial recognition at ATMs to detect full face visibility. The system's use is intended to prevent fraud by identifying and deterring scam-related ATM withdrawals, which is a direct use of AI to reduce harm related to financial crime and fraud. Although no specific incident of harm is described as having occurred yet, the system is actively used to prevent fraud and protect property and communities from harm. Since the system is in trial and actively used to prevent harm, and the article describes the deployment and operational use of AI for this purpose, this qualifies as an AI Incident because the AI system's use is directly linked to preventing and addressing harm related to fraud, which is a violation of property rights and harms communities. The system's role is pivotal in this harm prevention context.

New ATM Rule Debuts in May! This City Takes the Lead: Skipping One Step When Withdrawing Cash Sets Off the Alarm

2025-04-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system (facial recognition) is explicitly mentioned and is being used to detect obscured faces at ATMs to prevent fraud, which is a form of crime harm to individuals and communities. However, the article describes the system's planned deployment and intended use to prevent fraud, with no indication that harm has yet occurred or that the system has malfunctioned. The event focuses on the potential to prevent harm rather than describing an incident where harm has already happened due to the AI system. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to preventing or causing harm, but no actual harm or incident is reported yet.

New ATM Rule Debuts in May! This City Takes the Lead: Skipping One Step When Withdrawing Cash Sets Off the Alarm | Life | 三立新聞網 SETN.COM

2025-04-18
setn.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition) in a real-world application aimed at preventing fraud, which is a harm to individuals and communities. Although no harm has yet been reported, the system is actively used to detect and prevent fraudulent withdrawals, which implies a direct use of AI to mitigate harm. Since the article focuses on the deployment and intended use rather than reporting any realized harm or malfunction, it does not qualify as an AI Incident. However, because the AI system is actively used to prevent harm, and the article describes its operational deployment, it is best classified as Complementary Information, providing context on societal and governance responses to AI in fraud prevention.