NHK Halts AI-Driven Multilingual Subtitles Over Translation Error



NHK discontinued its AI-based multilingual subtitles service after its Google Translate-powered system mistakenly rendered 'Senkaku Islands' as 'Diaoyu Islands' during a live broadcast. The error raised diplomatic and accuracy concerns, prompting the broadcaster to end the service and consider developing its own AI translation system.[AI generated]

Why's our monitor labelling this an incident or hazard?

NHK’s multi-language subtitle service relied on an AI translation system that directly produced incorrect and politically inappropriate output, triggering regulatory intervention and the service’s shutdown. This is a realized harm caused by an AI system malfunction, fitting the definition of an AI Incident.[AI generated]
AI principles
Accountability
Robustness & digital security
Safety
Transparency & explainability

Industries
Media, social platforms, and marketing
Government, security, and defence

Affected stakeholders
Government
Business
General public

Harm types
Reputational
Public interest

Severity
AI incident

Business function:
Other

AI system task:
Content generation

In other databases

Articles about this incident or hazard


NHK ends AI multilingual subtitle service after displaying the Senkaku Islands under their Chinese name (Jiji.com)

2025-02-12
Jiji.com
Why's our monitor labelling this an incident or hazard?
NHK’s multi-language subtitle service relied on an AI translation system that directly produced incorrect and politically inappropriate output, triggering regulatory intervention and the service’s shutdown. This is a realized harm caused by an AI system malfunction, fitting the definition of an AI Incident.

NHK international broadcast online stream displays Senkaku Islands as Diaoyu Islands via AI automatic translation (Asahi Shimbun)

2025-02-12
Yahoo! News
Why's our monitor labelling this an incident or hazard?
This report centers on NHK’s response and remediation—ending its use of an AI translation service—after a mis-translation rather than on any new or ongoing harm or a prospective risk. It is therefore complementary information: an update on the handling and mitigation of an AI output error, rather than a standalone incident or hazard.

NHK ends AI-translated multilingual subtitles after displaying the Senkakus under their Chinese name

2025-02-12
Nihon Keizai Shimbun (Nikkei)
Why's our monitor labelling this an incident or hazard?
The event describes a concrete malfunction of an AI system in production—its automated translations mislabelled the Senkaku Islands with the Chinese term “釣魚島,” causing misinformation and undermining trust. This is a realized harm stemming from an AI system’s use, so it qualifies as an AI Incident rather than a mere potential risk or unrelated news.

NHK to end multilingual subtitle service after subtitles on English live stream mistakenly rendered "Senkaku Islands" as "Diaoyu Islands" (Sponichi Annex)

2025-02-12
Yahoo! News
Why's our monitor labelling this an incident or hazard?
An AI translation system malfunctioned in production, producing incorrect and politically sensitive content on multiple occasions. This misinformation directly stemmed from the AI’s outputs and led NHK to end the service. As a deployed AI system’s repeated errors caused real-world misinformation and reputational harm, this qualifies as an AI Incident.

NHK ends AI multilingual subtitle service after displaying the Senkaku Islands under their Chinese name (Jiji Press)

2025-02-12
Yahoo! News
Why's our monitor labelling this an incident or hazard?
The article reports NHK shutting down its AI-based translation feature after it displayed the politically sensitive Chinese name for the Senkaku Islands. While this was a malfunction of an AI translation system, it did not cause physical injury, rights violations, or other direct harm, nor does the story focus on potential future hazards. Instead, it centers on NHK’s remediation (ending the service) in response to the mistranslation. This fits the definition of complementary information, as it details a corrective measure following an AI output error rather than describing a new incident or hazard.

NHK makes AI translation error in online news stream: "Senkaku Islands" displayed as "Diaoyu Islands"; multilingual subtitle service ended

2025-02-12
Yomiuri Shimbun Online
Why's our monitor labelling this an incident or hazard?
This event describes a malfunction of an AI system (automatic translation) that produced erroneous output, directly resulting in misinformation during a broadcast. Even though no physical harm occurred, the mis-translation constitutes a significant error with geopolitical implications. Therefore, it qualifies as an AI Incident due to the AI system’s malfunction and the resulting misinformation.

NHK to end multilingual subtitle service after subtitles on English live stream mistakenly rendered "Senkaku Islands" as "Diaoyu Islands"

2025-02-12
Mainichi Shimbun
Why's our monitor labelling this an incident or hazard?
An AI system (Google’s translation API) malfunctioned during live broadcasts, producing incorrect place names that amounted to misinformation. This error directly arises from the use of the AI system and led NHK to cease the service. Therefore, it qualifies as an AI Incident due to the realized harm of misleading translation outputs.

NHK international broadcast online stream displays Senkaku Islands as Diaoyu Islands via AI automatic translation (Asahi Shimbun Digital)

2025-02-12
Asahi Shimbun Digital
Why's our monitor labelling this an incident or hazard?
An AI translation system malfunctioned—producing incorrect subtitles that misrepresented contested territory names—resulting in misinformation with diplomatic and reputational implications. This constitutes an AI Incident, since the AI system’s erroneous output directly led to a harmful event (misinformation) and prompted NHK to terminate the service.

Senkaku Islands displayed as "Diaoyu Islands" in Chinese subtitles on NHK international broadcast; AI-based service discontinued (Sankei Shimbun)

2025-02-12
Yahoo! News
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI translation) was explicitly used and malfunctioned, producing inaccurate subtitles that misrepresented territorial names, a politically sensitive issue. This misinformation was broadcast live, potentially harming communities by spreading inaccurate geopolitical information. NHK recognized the problem and stopped the AI-based subtitle service. The AI system's malfunction directly led to the harm of inaccurate information dissemination, fitting the definition of an AI Incident under harm to communities or violation of rights. Therefore, this event is classified as an AI Incident.

NHK makes AI translation error in online news stream: "Senkaku Islands" displayed as "Diaoyu Islands"; multilingual subtitle service ended (Yomiuri Shimbun Online)

2025-02-12
Yahoo! News
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI translation) was used in live news subtitle generation and produced an incorrect translation of a politically sensitive term. This error led to NHK ending its multilingual subtitle service, indicating a direct harm related to misinformation and potential violation of rights to accurate information. The AI system's malfunction directly led to this harm, fitting the definition of an AI Incident.

NHK ends AI automatic-translation multilingual subtitle service after "Diaoyu Islands" found in Chinese subtitles (February 12, 2025) - Excite News

2025-02-12
Excite
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google's AI translation API) whose outputs led to politically sensitive content being displayed in subtitles. This caused reputational and social harm related to the broadcast content, which can be considered harm to communities through misinformation and politically sensitive misrepresentation. The harm was realized: the subtitles were actually displayed and caused problems, leading to the service's termination. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (social and political harm through inappropriate translation).

NHK international broadcast multilingual subtitles ended over "Diaoyu Islands" rendering; subtitles were generated with US-based Google's translation API

2025-02-14
ITmedia
Why's our monitor labelling this an incident or hazard?
An AI system (Google's translation API) was used to generate real-time multilingual subtitles, and its output included a politically sensitive term that caused NHK to end the service. Although the AI system's output led to a problematic situation, the article does not describe any direct or indirect realized harm such as injury, rights violations, or disruption. The event is about the discovery of a problematic AI output and the consequent governance response (service termination). Therefore, this is best classified as Complementary Information, as it provides context on AI system use, governance challenges, and responses, without describing an AI Incident or AI Hazard involving actual or plausible harm.
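The ITmedia report above notes that the subtitles were generated in real time by Google's translation API. One commonly discussed safeguard for this failure mode is a post-translation glossary check that enforces a fixed rendering for sensitive terms before the subtitle is displayed. A minimal sketch is below; the glossary entries and function name are illustrative assumptions, not NHK's or Google's actual system:

```python
# Hypothetical post-translation guard for machine-translated subtitles.
# It checks MT output against a glossary of forbidden renderings and
# forces the required term when the source text contains the original name.

# forbidden rendering in target text -> required rendering
FORBIDDEN_RENDERINGS = {
    "釣魚島": "尖閣諸島",  # "Diaoyu Islands" must appear as "Senkaku Islands"
}

def guard_subtitle(source_text: str, translated_text: str) -> tuple[str, bool]:
    """Return (possibly corrected subtitle, whether a correction was made)."""
    flagged = False
    for forbidden, required in FORBIDDEN_RENDERINGS.items():
        # Only intervene when the source actually used the protected term,
        # so legitimate quotations of the other name are left alone.
        if forbidden in translated_text and required in source_text:
            translated_text = translated_text.replace(forbidden, required)
            flagged = True
    return translated_text, flagged
```

In a live pipeline, a flagged subtitle could also be held back for human review rather than silently corrected; which response is appropriate is an editorial decision, not a technical one.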

Senkaku Islands displayed as "Diaoyu Islands" in Chinese subtitles on NHK international broadcast; AI-based service discontinued

2025-02-12
Sankei News
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI translation) was explicitly used to generate subtitles for NHK's international broadcast. The AI's output directly led to the display of inaccurate and politically sensitive information, which NHK recognized as a significant problem, discontinuing the service. The event involves the use and malfunction (inaccuracy) of an AI system leading to misinformation dissemination, which is a form of harm to communities and a breach of the obligation to provide accurate information. Therefore, this qualifies as an AI Incident. The event is not merely a potential risk (hazard) or a complementary update; it involves realized harm caused by the AI system's outputs.

NHK ends AI automatic-translation multilingual subtitle service after "Diaoyu Islands" found in Chinese subtitles (Kii Minpo AGARA | Wakayama Prefecture news site)

2025-02-12
agara.co.jp
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (automatic translation via Google Translate API) whose output caused an inappropriate subtitle display. The issue was detected and led to immediate termination of the service, indicating a mitigation response. There is no evidence that the AI system directly caused harm such as injury, rights violations, or significant community harm. The article focuses on the service termination and the problem found, which is a response to a prior or potential issue. Therefore, it fits the definition of Complementary Information, providing an update on AI system use and its governance response rather than reporting a realized AI Incident or a plausible future hazard.