KT Upgrades AI Call Manager to End Abusive Calls

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

KT has enhanced its AI-powered call manager service to safeguard public officials and customer service staff by automatically issuing warnings and terminating calls when abusive language or prolonged conversations are detected. The system also records calls and converts them to text, offering improved safety for public institutions and businesses. [AI generated]
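
Functionally, the upgrade described above amounts to a monitoring loop over live call audio: transcribe the speech, check each utterance for abusive language or excessive call length, issue an automated warning, and end the call if the behaviour continues. The Python sketch below illustrates that flow only in outline; the keyword list, thresholds, and function names are hypothetical placeholders, not details of KT's actual implementation.

```python
# Minimal sketch of an abusive-call handling flow, based only on the behaviour
# described above (record/transcribe, warn, then terminate on abusive language
# or an overlong call). All names, thresholds, and keywords are illustrative.
from dataclasses import dataclass, field
import time

ABUSIVE_KEYWORDS = {"abusive_phrase_1", "abusive_phrase_2"}  # placeholder list
MAX_CALL_SECONDS = 30 * 60   # assumed cut-off for a "prolonged" call
MAX_WARNINGS = 1             # assumed: one warning before hanging up

@dataclass
class CallSession:
    started_at: float = field(default_factory=time.time)
    transcript: list[str] = field(default_factory=list)
    warnings: int = 0
    active: bool = True

def transcribe_chunk(audio_chunk: bytes) -> str:
    """Stand-in for a speech-to-text step; returns recognised text."""
    return audio_chunk.decode("utf-8", errors="ignore")  # placeholder only

def is_abusive(text: str) -> bool:
    """Naive keyword check standing in for an abusive-language classifier."""
    return any(kw in text.lower() for kw in ABUSIVE_KEYWORDS)

def handle_chunk(session: CallSession, audio_chunk: bytes) -> str:
    """Process one audio chunk: transcribe, log, then warn or end the call."""
    text = transcribe_chunk(audio_chunk)
    session.transcript.append(text)           # call recording / text conversion

    too_long = time.time() - session.started_at > MAX_CALL_SECONDS
    if is_abusive(text) or too_long:
        if session.warnings < MAX_WARNINGS:
            session.warnings += 1
            return "play_warning_message"      # automated warning to the caller
        session.active = False
        return "terminate_call"                # automatic call termination
    return "continue"

# Example: a first abusive chunk triggers a warning, a second ends the call.
session = CallSession()
print(handle_chunk(session, b"abusive_phrase_1 ..."))  # -> "play_warning_message"
print(handle_chunk(session, b"abusive_phrase_1 ..."))  # -> "terminate_call"
```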

Why's our monitor labelling this an incident or hazard?

The AI system is explicitly described as detecting abusive language and automatically ending calls, directly protecting users from verbal abuse, a form of harm to persons. This is a direct use of AI to prevent harm, and the event describes realized harm (verbal abuse) and the AI system's role in mitigating it, not merely a potential future risk or a general update without harm. It is therefore classified as an AI Incident. [AI generated]
AI principles
Privacy & data governance; Transparency & explainability; Robustness & digital security; Fairness; Accountability; Safety; Respect of human rights; Democracy & human autonomy

Industries
Consumer services; Government, security, and defence; IT infrastructure and hosting; Digital security

Harm types
Human or fundamental rights; Psychological; Reputational; Economic/Property

Severity
AI incident

Business function
Citizen/customer service; Monitoring and quality control

AI system task
Event/anomaly detection; Interaction support/chatbots; Other


Articles about this incident or hazard

"폭언하면 전화 끊는다" KT통화매니저, 상담원 보호 기능 강화

2025-03-09
아시아경제
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved: it uses AI technologies such as speech-to-text and automated detection of abusive language to manage calls. Its use directly prevents harm by protecting employees from verbal abuse, a form of harm to health and well-being. Since the AI system is actively used to prevent harm rather than to cause it, and the article focuses on the deployment and features of the system rather than on any incident of harm or malfunction, this event does not describe an AI Incident or AI Hazard. Instead, it provides complementary information about AI deployment and the societal response to protect workers, enhancing understanding of AI's positive role in this context.

Calls ended in cases of verbal abuse... KT Call Manager strengthens employee protection features

2025-03-09
inews24
Why's our monitor labelling this an incident or hazard?
The system is explicitly described as using AI to detect abusive language and manage calls, which qualifies it as an AI system. However, the article does not report any actual harm caused by the AI system or any malfunction; instead, it describes new protective features designed to prevent harm to employees from abusive callers. This fits the definition of Complementary Information, as it provides supporting information about AI system use and societal responses to protect workers rather than reporting an AI Incident or AI Hazard.

"폭언하면 통화종료"...KT 통화매니저 업그레이드

2025-03-09
비즈니스워치
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as detecting abusive language and automatically ending calls, directly protecting users from verbal abuse, a form of harm to persons. This is a direct use of AI to prevent harm, and the event describes realized harm (verbal abuse) and the AI system's role in mitigating it, not merely a potential future risk or a general update without harm. It is therefore classified as an AI Incident.

"폭언하면 통화 종료" KT, '통화매니저' 서비스 이용 보호 기능↑ - 네이트뷰

2025-03-09
네이트뷰
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('KT Call Manager') that processes and analyzes voice calls to detect abusive language and manage call termination. The AI system is actively used to protect employees from harm (verbal abuse), a form of harm to persons. Since its use directly prevents or mitigates harm during calls, this qualifies as an AI Incident under the definition of harm to persons through the use of AI systems. The article reports the deployment and use of this AI system with realized protective effects, not just potential risks or general information, so it is not a hazard or complementary information.