Naver and Korean Police Deploy AI Triple Defense to Block Phishing Scams

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Naver and the Korean National Police Agency have partnered to deploy an AI-powered 'triple prevention system' against telecommunication financial fraud such as voice phishing and investment scams. The system combines AI spam filtering, real-time account restrictions, and malicious app detection to proactively block scam attempts on online platforms in South Korea.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved: Naver uses AI-based spam filtering and malicious app detection to prevent fraud. The AI system's use is directly linked to preventing harm to people (financial fraud victims), which falls under harm to persons or groups. Because the article describes active deployment and cooperation to stop ongoing fraud attempts, the monitor classifies this as an AI Incident, in which the AI system's use directly contributes to preventing and addressing existing harms, rather than as a hazard or complementary information.[AI generated]
Industries
Financial and insurance services; Digital security

Severity
AI incident

Business function
ICT management and information security

AI system task
Event/anomaly detection


Articles about this incident or hazard

Naver and National Police Agency to root out voice phishing and investment 'leading room' scams... join hands to build a 'triple prevention network' - 매일경제

2026-02-24
mk.co.kr
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: Naver uses AI-based spam filtering and malicious app detection to prevent fraud. The AI system's use is directly linked to preventing harm to people (financial fraud victims), which falls under harm to persons or groups. Because the article describes active deployment and cooperation to stop ongoing fraud attempts, the monitor classifies this as an AI Incident, in which the AI system's use directly contributes to preventing and addressing existing harms, rather than as a hazard or complementary information.
Naver moves to prevent voice phishing and investment-leading scams with the National Police Agency... signs MOU

2026-02-24
아시아경제
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for spam filtering, malicious app detection, and account restriction based on police data to prevent fraud. The harms addressed include financial fraud and scams (harm to property and individuals). Since the AI systems are actively used to prevent and mitigate these harms, this qualifies as an AI Incident involving the use of AI systems leading to harm prevention. The event is not merely a future risk or a general update but describes concrete AI system deployment to counteract ongoing harms.
Blocking platform phishing before it happens... police activate 'triple blocking network' with Naver - 더팩트

2026-02-24
더팩트
Why's our monitor labelling this an incident or hazard?
An AI system (AI spam filtering) is explicitly mentioned as part of the phishing prevention mechanism. The AI system is used to detect and block suspicious content, which directly aims to prevent harm to people from financial scams (harm to people). Since the article describes the deployment and use of the AI system to prevent harm, and the harm is ongoing or anticipated, but no actual harm event is described as having occurred due to AI malfunction or misuse, this is a case of an AI system being used to prevent harm rather than causing harm. Therefore, this is not an AI Incident or AI Hazard. Instead, it is a governance and societal response involving AI to mitigate AI-related or online harms, which fits the definition of Complementary Information.
National Police Agency and Naver build 'triple blocking network' against platform phishing

2026-02-24
Head Topics
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI for spam filtering and malicious app detection to prevent phishing crimes. Although no harm has yet occurred due to this system, the AI system's use could plausibly lead to preventing AI-related harms such as financial fraud and social trust erosion. Since the event focuses on preventing harm before it happens, it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the deployment of AI systems to prevent harm, not on updates or responses to past incidents.
Naver signs MOU with the National Police Agency for 'prevention and eradication of telecommunication financial fraud damage' - 굿모닝경제

2026-02-24
굿모닝경제
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for detecting and preventing telecommunication financial fraud, which is a form of crime causing harm to individuals (harm to persons). The AI is used in active prevention and mitigation roles, indicating its use in harm prevention rather than causing harm. Since the event describes the deployment and use of AI systems to prevent harm rather than an incident where AI caused harm, it does not qualify as an AI Incident. However, because the AI system's use could plausibly lead to preventing significant harm and is a concrete application with potential impact, it is best classified as Complementary Information, providing context on societal and governance responses to AI-related crime prevention.
Naver signs MOU with the National Police Agency to eradicate telecommunication financial fraud... activates AI 'triple prevention network'

2026-02-24
아시아투데이
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for detecting and preventing telecommunication financial fraud, which directly addresses harm to individuals (harm to health or property through financial loss). The AI systems are actively used in operation to block scam attempts and malicious apps, thus preventing harm. This constitutes an AI Incident because the AI's use is directly linked to preventing realized or ongoing harm from scams, which are a form of harm to persons and communities. The article does not merely describe potential future harm or general AI developments but details active AI deployment to counteract ongoing criminal harm.
Naver signs MOU with the National Police Agency for 'prevention and eradication of telecommunication financial fraud damage'

2026-02-24
비즈니스포스트
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for spam filtering, detection of suspicious content, and malicious app detection to prevent telecommunication financial fraud. The AI system's use is intended to prevent harm to users from scams and fraud, which are forms of harm to persons and communities. Since the AI system's involvement is in preventing harm and no actual harm or incident is reported as having occurred, this event represents a plausible risk mitigation effort rather than a realized harm. Therefore, it is best classified as Complementary Information, as it provides information about societal and technical responses to AI-related fraud risks, enhancing understanding of AI's role in harm prevention.
Naver moves to block 'voice phishing' with the National Police Agency... activates AI-based 'triple prevention network'

2026-02-24
쿠키뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based spam filtering and detection systems to identify and block fraudulent posts and accounts related to voice phishing and investment scams. The AI system's role is to prevent harm by stopping scams before they reach users, which is a clear use of AI to mitigate potential harm. There is no indication of harm caused by the AI system itself, nor is there a plausible risk of harm from the AI system's malfunction. Instead, the AI system is part of a harm prevention strategy. Therefore, this event is best classified as Complementary Information, as it provides an update on societal and technical responses to AI-related fraud risks rather than describing an AI Incident or AI Hazard.
"Gotcha"... Naver and the National Police Agency join hands to eradicate 'voice phishing and investment leading room scams'

2026-02-24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for spam filtering, detection of suspicious accounts, and malicious app detection to prevent telecommunication financial fraud. The AI systems are used in the operational phase to prevent harm by detecting and blocking fraudulent activities before they cause damage. Since the AI systems are actively used to prevent harm and no harm is reported as having occurred due to AI malfunction or misuse, this event is best classified as Complementary Information. It describes a governance and technical response to an existing AI-related risk, enhancing understanding of AI's role in combating fraud, rather than reporting a new AI Incident or AI Hazard.
Naver joins hands with the National Police Agency for 'prevention and eradication of telecommunication financial fraud damage'

2026-02-24
이뉴스투데이
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (spam filtering AI, security modules detecting malicious apps) in the active prevention of telecommunication financial fraud, which directly relates to harm to persons (financial harm from scams). Since the AI systems are used to prevent and mitigate ongoing or potential harm, and the article describes concrete measures to detect and block fraudulent activities, this constitutes an AI Incident where AI use has directly contributed to harm prevention. The event is not merely about potential harm but about active use of AI to address existing fraud risks, which are a form of harm to persons and communities. Therefore, it qualifies as an AI Incident.
Naver and the National Police Agency sign 'telecommunication financial fraud prevention and eradication' MOU

2026-02-24
뉴스프리존
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (spam filtering AI, automated detection systems) in the prevention and mitigation of telecommunication financial fraud, which is a form of harm to persons (financial harm). The AI systems are used in their operation to detect and block fraudulent activities, thus directly contributing to harm prevention. Since the event describes the deployment and use of AI systems to prevent realized or ongoing harm, it is related to the use of AI systems to address an AI Incident context (fraud harms). However, the article focuses on the cooperation and preventive measures rather than describing a specific incident of harm caused by AI malfunction or misuse. Therefore, this event is best classified as Complementary Information, as it provides supporting information about societal and technical responses to AI-related fraud risks and mitigation efforts.
Naver joins hands with the National Police Agency to prevent and eradicate telecommunication financial fraud damage

2026-02-24
포인트데일리
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as Naver uses AI-based spam filtering and detection systems trained on police data to identify and prevent fraudulent activities. The AI system's use directly contributes to preventing harm to individuals' financial assets (harm to persons/groups). Since the AI system is used to prevent and mitigate ongoing or potential fraud harm, this event relates to the use of AI to address an existing or ongoing harm scenario. However, the article does not report an actual harm caused by AI malfunction or misuse but rather the deployment of AI to prevent harm. Therefore, this is not an AI Incident but rather complementary information about societal and governance responses involving AI to reduce harm.
Naver joins hands with the National Police Agency... blocking platform crime with an AI 'triple prevention network'

2026-02-24
핀포인트뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for spam filtering, detection of suspicious posts, and automated detection of malicious apps to prevent crimes like voice phishing and investment fraud. These AI systems are actively used to prevent harm to users, which constitutes harm to communities and individuals' property (financial assets). Since the AI systems' use directly leads to preventing harm, and the event involves realized harm prevention rather than just potential risk, this qualifies as an AI Incident under the framework.
Naver joins hands with the National Police Agency to prevent and eradicate telecommunication financial fraud damage

2026-02-24
이투데이
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for fraud detection and prevention, which directly addresses harms related to financial fraud and protection of users' property and assets. Although no specific incident of harm is reported, the AI systems are actively used to prevent ongoing fraud attempts, which implies realized harm prevention. Therefore, this qualifies as an AI Incident because the AI system's use is directly linked to preventing harm from telecommunication financial fraud, a form of harm to property and communities.
Naver and the National Police Agency to catch voice phishing with AI technology... fraud eradication MOU

2026-02-24
매일일보
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for filtering scam keywords, detecting malicious apps, and restricting fraudulent accounts, all aimed at preventing telecommunication financial fraud (voice phishing and investment scams). Since these frauds cause harm to individuals' property and financial well-being, the AI system's deployment to counteract these harms is directly linked to an AI Incident. The event is not merely a future risk or a general update but describes active use of AI to address ongoing harm, qualifying it as an AI Incident.
Naver cooperates with the National Police Agency on preventing voice phishing and other crimes... activates 'AI triple prevention network'

2026-02-24
브릿지경제
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for spam filtering, detection of suspicious accounts, and malicious app detection to prevent voice phishing and other telecommunication financial frauds. These AI systems are directly involved in preventing harm to individuals from financial crime, which constitutes harm to persons. Since the AI systems are actively used to prevent and mitigate ongoing criminal activities, this qualifies as an AI Incident where the AI system's use directly leads to harm prevention. The event is not merely a future risk or a general update but describes active deployment of AI to address realized harms, thus fitting the AI Incident classification.
Naver and the National Police Agency join hands to eradicate phishing crime... activate 'triple prevention network' with AI technology

2026-02-24
포인트경제
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems trained on crime data to filter spam and detect malicious apps, and real-time data sharing to suspend fraudulent accounts. These AI systems are actively used to prevent phishing and financial fraud, which are harms to individuals and communities. The AI involvement is in the use phase, directly contributing to harm prevention. Therefore, this event qualifies as an AI Incident because the AI systems' deployment directly addresses and mitigates harm from criminal activities.
Naver and the National Police Agency build a joint front to 'eradicate voice phishing and investment leading rooms'

2026-02-24
데일리안
Why's our monitor labelling this an incident or hazard?
The AI systems are actively used to prevent harm from phishing and fraud, which are crimes causing financial and personal harm to individuals. Since the AI is employed to detect and block fraudulent activities proactively, the event involves the use of AI systems to prevent harm. However, the article does not describe any realized harm caused by AI malfunction or misuse; rather, it focuses on AI-enabled prevention efforts. Therefore, this event is best classified as Complementary Information, as it provides details on societal and technical responses to AI-related crime prevention, enhancing understanding of AI's role in mitigating harm but not describing a new AI Incident or Hazard.
"Scam numbers suspended immediately": Naver and the National Police Agency 'join hands' to eradicate voice phishing

2026-02-24
디지털데일리
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used by Naver to detect and prevent voice phishing and telecommunication fraud, which are ongoing harms affecting individuals and communities. The AI systems are part of the operational response to these harms, directly linked to the use of AI in mitigating and preventing further harm. Since the AI systems are actively involved in addressing realized harms (fraud and scams), this qualifies as an AI Incident rather than a hazard or complementary information. The collaboration and deployment of AI for harm prevention in this context is a direct response to existing AI-related harms, fulfilling the criteria for an AI Incident.
Naver and the National Police Agency join hands to eradicate telecommunication financial fraud... activate 'triple prevention network'

2026-02-24
마이데일리
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for spam filtering and malicious app detection to prevent telecommunication financial fraud, which is a form of harm to people (financial harm). The AI systems are actively used to detect and prevent fraud attempts, thus directly contributing to harm prevention. Since the event describes the deployment and use of AI systems to prevent harm rather than an incident of harm caused by AI, it does not qualify as an AI Incident. It also does not describe a plausible future harm scenario but rather an ongoing protective measure. Therefore, it fits best as Complementary Information, providing context on societal and governance responses to AI-related fraud risks.
Naver and the National Police Agency sign MOU

2026-02-24
bikorea.net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (spam filtering AI, automatic detection systems) in active use to prevent and mitigate telecommunication financial fraud, which is a form of harm to persons (financial harm). Since the AI systems are being used to prevent and reduce harm, and the event describes ongoing use rather than a malfunction or a potential future risk, it does not describe an incident where harm has already occurred due to AI malfunction or misuse. Instead, it describes a proactive use of AI to prevent harm. Therefore, this event is best classified as Complementary Information, as it provides information about societal and technical responses to AI in combating fraud, enhancing understanding of AI's role in harm prevention.
Naver signs MOU with the National Police Agency to eradicate telecommunication financial fraud... activates AI-based 'triple prevention network'

2026-02-24
아주경제
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for spam filtering, real-time detection, and blocking of fraudulent activities, which directly contribute to preventing financial fraud and protecting users from harm. Since the AI systems are actively used to prevent and mitigate ongoing fraud incidents that cause harm to individuals' financial security, this qualifies as an AI Incident under the definition of harm to persons or groups (financial harm) caused directly or indirectly by AI system use.
Responding to phishing crime... Naver to cooperate with the National Police Agency on preventing and eradicating telecommunication financial fraud damage

2026-02-24
inews24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for filtering scam keywords, detecting malicious apps, and restricting accounts linked to criminal phone numbers to prevent phishing crimes. The AI systems' use directly addresses and mitigates harm to individuals' financial security and personal data, which constitutes injury or harm to groups of people. Therefore, this event involves the use of AI systems leading to harm prevention, qualifying it as an AI Incident rather than a hazard or complementary information.