Google Translate Mistranslates Korean Cultural Terms, Causing Controversy

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google Translate, an AI-powered translation service, has been criticized for mistranslating 'Dokdo' as 'Takeshima' (the Japanese name for the disputed territory) and 'Kimchi' as 'Paochai' (a different Chinese dish). These errors have sparked public outcry in South Korea over cultural misrepresentation and misinformation. [AI generated]

Why's our monitor labelling this an incident or hazard?

Google Translate is an AI system used for language translation. The errors in translating culturally significant terms have caused misinformation and cultural misrepresentation, which can be considered harm to communities and cultural rights. Since the harm is occurring through the use of the AI system's outputs, this qualifies as an AI Incident involving violation of cultural rights and harm to communities. The article also mentions ongoing efforts to correct these errors, but the primary event is the occurrence of the translation errors causing harm. [AI generated]
AI principles
Accountability
Robustness & digital security

Industries
Consumer services

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

Ran 'Dokdo' and 'Kimchi' through Google Translate... and got 'Takeshima' and 'Paochai'? - Maeil Business Newspaper

2026-04-27
mk.co.kr
Why's our monitor labelling this an incident or hazard?
Google Translate is an AI system involved in the event. The translation errors represent a malfunction or misuse of the AI system that leads to misinformation about culturally sensitive terms, which can be considered harm to communities or violation of rights. However, the article focuses on the identification of these errors and the efforts to correct them, not on a realized harm incident or a plausible future harm scenario. Thus, it does not meet the threshold for an AI Incident or AI Hazard but fits the definition of Complementary Information, providing updates and context on AI system performance and societal responses.
Entered 'Dokdo' and 'Kimchi' into Google Translate... out came 'Takeshima' and 'Paochai' | Yonhap News Agency

2026-04-27
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
Google Translate is an AI system used for language translation. The errors in translating culturally significant terms have caused misinformation and cultural misrepresentation, which can be considered harm to communities and cultural rights. Since the harm is occurring through the use of the AI system's outputs, this qualifies as an AI Incident involving violation of cultural rights and harm to communities. The article also mentions ongoing efforts to correct these errors, but the primary event is the occurrence of the translation errors causing harm.
Google's 'translation' errors: Dokdo → Takeshima, Kimchi → Paochai

2026-04-28
Wow TV
Why's our monitor labelling this an incident or hazard?
Google Translate is an AI system involved in the event, as it performs automated language translation using AI models. The errors in translation reflect issues in the AI system's outputs. However, the article does not describe any realized harm such as injury, rights violations, or operational disruption caused by these errors. Nor does it describe a credible risk of future harm beyond the current inaccuracies. The main focus is on highlighting and advocating for correction of these translation errors, which fits the definition of Complementary Information as it provides supporting data and context about AI system impacts and societal responses without reporting a new incident or hazard.
Google Translate errors continue... 'Dokdo' and 'Kimchi' rendered as 'Takeshima' and 'Paochai'

2026-04-28
YTN
Why's our monitor labelling this an incident or hazard?
Google Translate is an AI system used for language translation. The article discusses errors in its output for specific terms, which is a malfunction or limitation of the AI system. However, there is no indication that these errors have directly or indirectly caused harm such as human rights violations or other significant harms as defined. The article focuses on the problem and efforts to address it, without reporting an actual incident of harm. Therefore, this is best classified as Complementary Information, providing context and updates on AI system performance and societal responses.
'Dokdo = Takeshima,' 'Kimchi = Paochai'... Google Translate controversy

2026-04-28
Maeil Broadcasting Network (MBN)
Why's our monitor labelling this an incident or hazard?
Google Translate is an AI system used for language translation. The mistranslation of 'Dokdo' as 'Takeshima' (a disputed name) and 'Kimchi' as 'Paochai' (a different food) reflects the AI system's outputs causing harm to communities by spreading misleading or politically sensitive information. This harm is realized as it affects public perception and cultural identity. Therefore, this event meets the criteria for an AI Incident due to the AI system's use leading to harm to communities and cultural rights.
Dokdo is Takeshima, Kimchi is Paochai?... Serious errors in Google Translate

2026-04-28
The Asia Business Daily
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Google Translate, location apps) that generate outputs (translations, labels) influencing public perception. The mislabeling of 'Dokdo' as 'Takeshima' and 'Kimchi' as 'Paochai' are errors caused by these AI systems' outputs, leading to misinformation and cultural harm. This fits the definition of an AI Incident because the AI system's use has indirectly led to harm to communities and violation of cultural rights. The article also mentions ongoing efforts to correct these errors, but the harm is occurring as described.
'Dokdo is Takeshima, Kimchi is Paochai'... Google Translate in another error controversy

2026-04-28
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
Google Translate is an AI system performing language translation. The incorrect translations of culturally significant terms like 'Dokdo' (a disputed territory) and 'Kimchi' (a national food) can be seen as causing harm to communities by spreading misinformation and misrepresenting cultural identity. This is a violation of rights related to cultural and informational integrity. The AI system's malfunction (translation errors) directly leads to this harm. Therefore, this event meets the criteria for an AI Incident due to harm to communities and violation of rights caused by the AI system's outputs.
Google Translate renders 'Dokdo' as 'Takeshima'... and 'Kimchi' as 'Paochai'

2026-04-28
Kukmin Ilbo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Translate) producing incorrect translations that have cultural and political implications. However, there is no indication that these mistranslations have directly or indirectly caused harm such as injury, rights violations, or disruption. The article focuses on identifying the problem and ongoing efforts to address it, which fits the definition of Complementary Information. It is not an AI Incident because no harm has materialized, nor an AI Hazard because no plausible future harm is explicitly indicated. It is not unrelated because it concerns AI translation systems.
Gemini is smart, so why not the translator?... Google gets Dokdo and Kimchi 'wrong' again [News Now]

2026-04-28
YTN
Why's our monitor labelling this an incident or hazard?
Google Translate is an AI system that generates language translations. The mistranslation of 'Dokdo' as 'Takeshima' (a Japanese territorial claim) and 'Kimchi' as 'Paochai' (a different food) are direct outputs of the AI system leading to misinformation and cultural harm. This misrepresentation can be seen as harm to communities and a violation of cultural rights, fulfilling the criteria for an AI Incident. The article describes actual harm occurring due to the AI system's use, not just potential or future harm, so it is not an AI Hazard or Complementary Information.