
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
During a live YouTube broadcast of NASA's Artemis II launch, KBS used an AI system to generate real-time translated subtitles. The AI mistranslated technical terms into Korean profanity, and the offensive language was aired to viewers. KBS apologized, took immediate corrective action, and pledged to strengthen its AI filtering to prevent a recurrence.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in the live translation process, and its malfunction (misinterpreting technical terms) directly exposed the public to offensive language during a national broadcast. While the harm is reputational and social rather than physical, it constitutes harm to communities and to public trust. The broadcaster's response and mitigation efforts are noted but do not negate the incident. This therefore qualifies as an AI Incident because of the realized harm caused by the AI system's malfunction during use.[AI generated]