SK Telecom Launches AI-Driven Family Alert System to Prevent Voice Phishing

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

SK Telecom has enhanced its AI-powered call app, A.Dot, with a 'Family Care' feature that detects suspected voice phishing calls and immediately alerts up to 10 registered guardians via SMS or push notifications. The feature aims to prevent the financial and psychological harm that scams cause in South Korea by enabling rapid family intervention.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as detecting suspicious voice phishing calls and alerting family members to prevent harm. The AI system is actively used in operation, and its role is pivotal in preventing financial and psychological harm to users. Since the AI system's use directly addresses and prevents harm to people, this qualifies as an AI Incident under the framework's criteria for harm to persons (a).[AI generated]
Industries
Consumer services, Digital security

Severity
AI incident

Business function
ICT management and information security

AI system task
Event/anomaly detection


Articles about this incident or hazard

SK Telecom Strengthens Voice Phishing Prevention with 'Family Care'

2026-04-27
이코노뉴스
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as detecting suspicious voice phishing calls and alerting family members to prevent harm. The AI system is actively used in operation, and its role is pivotal in preventing financial and psychological harm to users. Since the AI system's use directly addresses and prevents harm to people, this qualifies as an AI Incident under the framework's criteria for harm to persons (a).
SKT: AI Alerts Family When Voice Phishing Is Suspected - 전파신문

2026-04-27
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in detecting voice phishing calls and alerting others, which directly aims to prevent harm to individuals from fraud. The AI's use in identifying suspicious calls and notifying family members is a use of AI that directly relates to preventing injury or harm to persons (harm category a). Since the system is actively deployed and functioning to reduce harm from voice phishing, this qualifies as an AI Incident rather than a hazard or complementary information. The harm is financial and psychological harm from scams, which is a recognized form of injury or harm to persons. Therefore, this event is classified as an AI Incident.
SKT A.Dot Phone Alerts Family When It Detects Voice Phishing

2026-04-27
핀포인트뉴스
Why's our monitor labelling this an incident or hazard?
The AI system is actively used to detect and prevent voice phishing calls, which are a form of fraud causing harm to individuals. The AI's detection and alerting function directly contributes to reducing injury or harm to persons by preventing financial and psychological harm from scams. Therefore, this event involves the use of an AI system that has directly led to harm prevention, qualifying it as an AI Incident under the definition of harm to persons or groups through AI use.
AI Catches Voice Phishing and Notifies Family... SKT Launches A.Dot 'Family Care'

2026-04-27
포인트경제
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as detecting voice phishing calls and alerting family members to prevent harm. Voice phishing causes financial and psychological harm to individuals, which fits the definition of harm to persons. The AI system's use here is to detect and block such calls, directly impacting the prevention of harm. Since the article describes the deployment and use of this AI system to address an existing harm (voice phishing scams) and prevent further incidents, it qualifies as an AI Incident. The AI system's role is pivotal in detecting suspicious calls and enabling protective actions, thus directly linked to harm prevention.
SKT: AI Immediately Alerts Family upon Detecting Voice Phishing

2026-04-27
이뉴스투데이
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as detecting voice phishing calls and triggering immediate alerts to protect users from fraud-related harm. Voice phishing causes psychological and financial harm to individuals, and the AI's role in early detection and alerting is central to preventing such harm. Since the AI system's use directly relates to preventing injury or harm to persons, this qualifies as an AI Incident under the definition of harm to persons. The article does not merely describe a potential risk or a general AI feature update but details an AI system actively used to mitigate a real and increasing harm, thus meeting the criteria for an AI Incident.
SKT A.Dot Sends 'Family Alert' upon Detecting Voice Phishing - 월요신문

2026-04-27
월요신문
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as detecting voice phishing during calls and triggering alerts to family members. The harm involved is financial fraud (voice phishing), a form of harm to persons and groups. Because the article reports a protective function rather than any actual harm caused by the AI system or its malfunction, the event is not an incident in the strict sense. The AI system's detection and alerting plausibly prevents harm, and the article focuses on the deployment of this detection feature as a protective measure. It is therefore best classified as Complementary Information, since it provides context on societal and technical responses that mitigate AI-relevant harm rather than reporting an incident or hazard.
SKT 'A.Dot Phone' Notifies Family of Voice Phishing Risk via 'Family Care'

2026-04-27
데일리한국
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in detecting voice phishing calls and sending alerts to prevent harm. The harm in question is financial fraud, which constitutes injury or harm to persons. Since the AI system's use directly aims to prevent such harm and is actively deployed in this context, this qualifies as an AI Incident involving the use of AI leading to harm prevention. The event is not merely a product announcement but describes a concrete AI application addressing a real and increasing harm (voice phishing).
SKT Adds 'Family Care' Feature to A.Dot Phone... Immediately Notifies Family upon Voice Phishing Detection

2026-04-27
이투데이
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it detects voice phishing calls in real-time. The use of this AI system directly aims to prevent harm to individuals by alerting family members to potential scams, which addresses harm to persons (psychological and financial harm from fraud). Since the AI system's use is directly linked to preventing or mitigating harm, this qualifies as an AI Incident under the definition of an event where AI use has directly or indirectly led to harm or its prevention. The article describes the system's active role in harm prevention, not just potential risk, so it is an AI Incident rather than a hazard or complementary information.
"AI로 보이스피싱 위협 알림"...SKT, 에이닷 전화 '가족 케어' 탑재

2026-04-27
디지털투데이 (DigitalToday)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in detecting voice phishing during calls and sending alerts to protect users and their families. The system's use directly aims to prevent harm to people by mitigating voice phishing scams, which are a form of financial injury. Since the AI system's use is directly linked to preventing harm, and the article describes its active deployment and function, this qualifies as an AI Incident under the definition of harm to persons through the use of AI.
SKT A.Dot Immediately Notifies Guardians upon Detecting Voice Phishing

2026-04-27
뉴스핌
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as detecting voice phishing calls and triggering alerts to protect users and their guardians. The event involves the real-time use of AI to prevent or reduce harm from voice phishing scams, which have caused significant financial losses. Since the AI system is actively deployed and its use directly aims to prevent harm, this qualifies as an AI Incident involving the prevention of harm to persons and property through fraud detection.
"보이스피싱 의심시 가족에 알림"⋯SKT, 에이닷 전화 '가족 케어' 도입

2026-04-27
아이뉴스24
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as detecting voice phishing calls and triggering alerts to family members to prevent harm. The harm involved is financial and psychological harm from voice phishing scams, which is a clear harm to persons. Since the AI system's use directly contributes to preventing this harm, the event involves the use of an AI system with a direct link to harm prevention. Therefore, this is not a hazard or complementary information but an AI Incident because the AI system's operation is directly linked to harm mitigation in a context where harm is a known and ongoing issue.
"검찰입니다" 보이스피싱, AI가 가족한테 알려준다

2026-04-27
미디어오늘
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems deployed by SK Telecom, KT, and LG Uplus to detect voice phishing calls and alert family members or block malicious numbers and servers. Voice phishing causes significant financial harm and psychological distress to victims, which qualifies as harm to people and communities. The AI systems' real-time detection and notification functions directly reduce this harm by enabling timely intervention and blocking scams. Hence, this is an AI Incident because the AI system's use is directly linked to preventing and mitigating actual harm from voice phishing crimes, which are ongoing and have caused substantial damage.
SKT Adds 'Family Care' Feature to A.Dot Phone... Flags Suspected Voice Phishing Calls

2026-04-27
와이드경제
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in detecting voice phishing calls and triggering alerts to protect users and their families from potential financial and psychological harm. Since voice phishing causes direct harm to individuals and communities, and the AI system's use here is to prevent such harm, this event involves the use of an AI system directly linked to harm prevention. Therefore, it qualifies as an AI Incident because the AI system's use is directly related to preventing harm from a known and increasing threat (voice phishing).
[Telecom HOT News] SK Telecom Fights 'Voice Phishing' with AI... "Unveils 'Family Care' Feature for A.Dot Phone"

2026-04-27
비즈월드
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it detects suspicious voice phishing activity during calls. The AI's use directly aims to prevent harm to individuals by alerting protectors before or during a scam attempt, thus addressing injury or harm to persons (financial and psychological harm from voice phishing). Since the AI system's use is directly linked to preventing or mitigating harm, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm prevention related to health and financial safety of persons.
SKT A.Dot Phone Launches 'Family Care' Voice Phishing Detection Feature

2026-04-27
포인트데일리
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in detecting voice phishing during calls and triggering alerts to prevent harm. Voice phishing causes financial harm and emotional distress, which qualifies as harm to persons or groups. The article reports the system's active deployment and measurable impact (blocking millions of scam calls), so the AI system's role in detecting and preventing harm is central rather than incidental to a product launch. This constitutes an AI Incident involving the use of AI that leads directly to harm prevention, and the harms addressed are real and significant.
SKT Tops the National Customer Satisfaction Index for 29 Consecutive Years... the 'Only' Company across All Industries - 월요신문

2026-04-27
월요신문
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to detect voice phishing calls in real-time and to send alerts to protect users and their families from potential financial fraud. This use of AI directly aims to prevent harm to individuals by identifying and mitigating fraudulent calls, which constitutes harm to persons. Therefore, the event involves the use of an AI system that directly contributes to preventing injury or harm, qualifying it as an AI Incident.
'A Dragnet Woven with AI'... Three Mobile Carriers Move to Block Voice Phishing at the Source

2026-04-28
브릿지경제
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in real-time detection and blocking of voice phishing scams, which are a form of fraud causing financial harm to individuals. The AI systems' deployment and operation directly contribute to preventing or mitigating harm to people, fulfilling the criteria for an AI Incident. The article reports on actual use and impact, not just potential risk or future harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the AI systems' active role in harm prevention, which is a direct response to an existing harm problem. Therefore, this qualifies as an AI Incident.