KBS AI Subtitle Error Broadcasts Profanity During Artemis II Launch

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

During a live YouTube broadcast of NASA's Artemis II launch, KBS used an AI system for real-time translation subtitles. The AI mistranslated technical terms into Korean profanity, resulting in offensive language being aired. KBS apologized, implemented immediate corrective actions, and pledged to strengthen AI filtering to prevent recurrence.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was explicitly involved in the live translation process, and its malfunction (misinterpretation of words) directly led to the harm of exposing offensive language to the public during a national broadcast. While the harm is reputational and social rather than physical, it constitutes harm to communities and public trust. The broadcaster's response and mitigation efforts are noted but do not negate the incident. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction during use.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Media, social platforms, and marketing

Affected stakeholders
General public
Business

Harm types
Reputational
Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

"Roll, you X"... KBS's live-broadcast AI profanity-subtitle disaster - Maeil Business Newspaper

2026-04-03
mk.co.kr
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in the live translation process, and its malfunction (misinterpretation of words) directly led to the harm of exposing offensive language to the public during a national broadcast. While the harm is reputational and social rather than physical, it constitutes harm to communities and public trust. The broadcaster's response and mitigation efforts are noted but do not negate the incident. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction during use.
"Roll, you X"... KBS airs profanity subtitle: "We apologize for the translation error" - Star Today

2026-04-03
mk.co.kr
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating real-time subtitles during a live broadcast. The AI's mistranslation led to the display of offensive language, which caused reputational harm and public criticism. Although the harm is neither physical nor a legal-rights violation, the incident damaged the community's trust and the broadcaster's credibility, which qualifies as harm to communities. The AI system's malfunction directly led to this harm. Therefore, this qualifies as an AI Incident.
"Roger, roll"... KBS "sincerely apologizes" for Artemis profanity subtitle | Yonhap News

2026-04-03
Yonhap News
Why's our monitor labelling this an incident or hazard?
An AI system was used for automatic real-time translation and subtitle generation, which malfunctioned by producing offensive language instead of correct technical terms. This caused harm in the form of inappropriate content exposure to the audience, which can be considered harm to communities or reputational harm. The incident stems from the AI system's malfunction during use. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to harm (exposure to offensive content) and KBS is responding to mitigate the issue.
KBS apologizes for profanity subtitle shown due to AI error... "Seeking improvements" - Entertainment | Article - The Fact

2026-04-03
The Fact
Why's our monitor labelling this an incident or hazard?
An AI system was involved in the automatic translation and subtitle generation, and it malfunctioned by producing inappropriate subtitles. The broadcaster acknowledged the issue and is taking steps to prevent recurrence. The incident caused viewer criticism and reputational damage but no direct or indirect harm fitting the AI Incident criteria (such as injury, rights violations, or significant community harm). The main focus is on the broadcaster's response and improvement efforts, making this a Complementary Information case rather than an AI Incident or AI Hazard.
KBS's Artemis II AI profanity-subtitle accident; follow-up response also inadequate

2026-04-02
Asia Today
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used for real-time translation subtitles, and its malfunction directly led to reputational damage and public offense by airing profanity during a live broadcast. Although the harm is non-physical, it affects community trust and broadcasting professionalism, which can be considered harm to communities or a violation of broadcasting standards. Therefore, this qualifies as an AI Incident. The incident also includes a response and mitigation effort, but the primary event is the AI malfunction causing harm.
"Roll, you X?" during KBS's Artemis II live broadcast... "AI translation error, we sincerely apologize" - Ilgan Sports

2026-04-02
Ilgan Sports
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in the live translation process, and its malfunction directly led to the display of offensive content during a public broadcast. While the harm is primarily reputational and social (viewer offense and trust erosion), it does not meet the threshold for physical injury, critical infrastructure disruption, or legal rights violations. The incident is a clear case of AI malfunction causing harm (offensive content dissemination), thus qualifying as an AI Incident rather than a hazard or complementary information.
"Roll, you X"... Profanity from an 'AI translation error' during KBS's Artemis II live broadcast? - Donghaeng Media Sidae

2026-04-02
Donghaeng Media Sidae
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in the automatic translation and subtitle generation process. The malfunction led to the direct display of offensive language to the public, which constitutes harm to communities through exposure to inappropriate content. Although the harm is non-physical, it is a clear and significant harm caused by the AI system's malfunction during its use. Therefore, this qualifies as an AI Incident.
"A 'disaster' from trusting AI at a public broadcaster"... KBS airs 'profanity subtitles' unchecked during live broadcast

2026-04-03
iNews24
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating real-time subtitles during a live broadcast. The AI malfunctioned by incorrectly translating words into offensive language, which was directly broadcast to the public, causing harm to the community through exposure to inappropriate content and reputational damage to the public broadcaster. This constitutes harm to communities and a breach of expected content standards. Therefore, this event qualifies as an AI Incident because the AI system's malfunction directly led to harm.
KBS apologizes for NASA subtitle error... "We will improve AI filtering"

2026-04-03
Dispatch
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used for automatic translation subtitles, and its malfunction (misinterpretation of technical terms leading to offensive language) directly caused reputational harm and potential viewer distress. Although no physical harm or legal violation is reported, the incident constitutes a clear harm related to the AI system's malfunction during its use. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's erroneous output during live broadcasting.
"Roll, you X"... KBS apologizes for airing AI profanity subtitle: "Strengthening profanity filtering" [full statement] | Star News

2026-04-03
Star News
Why's our monitor labelling this an incident or hazard?
An AI system was used for live translation, and its malfunction directly led to the harm of broadcasting offensive language, which can be considered harm to communities due to exposure to inappropriate content. Although the harm is non-physical, it is a clear negative impact caused by the AI system's erroneous output. Therefore, this qualifies as an AI Incident.
Is this a public broadcaster?... The 'AI profanity' translation that struck KBS's live broadcast [This News Now]

2026-04-03
YTN
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in the real-time translation and subtitle generation. The malfunction (misinterpretation of words leading to offensive subtitles) directly caused harm by exposing the audience to inappropriate content, which harms community trust and violates broadcasting norms. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction during use.
"Roll, you X": KBS "sincerely apologizes" for 'AI translation' disaster after airing profanity subtitle

2026-04-03
Seoul Shinmun
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used for real-time translation, and its malfunction directly led to the harm of displaying offensive language to viewers, including children, which can be considered harm to communities and a violation of broadcasting standards. The incident is a clear case where the AI system's malfunction caused harm, qualifying it as an AI Incident rather than a hazard or complementary information.