AI-Generated Misinformation About Kyoto City Spread on X

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A generative AI system on X (formerly Twitter) created and distributed a false news summary that misattributed a Nagahama City policy to Kyoto City. Kyoto City requested removal of the misinformation; X promptly deleted the post and apologized. The incident caused public confusion and reputational harm in Japan.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (generative AI) produced the erroneous summary that spread false information about Kyoto City's policies. The misinformation was disseminated through an official channel, harming the community by circulating false narratives about local government actions. Although it was removed promptly, the event constitutes an AI Incident because the AI-generated content directly caused the spread of false information, which can be considered harm to communities and a violation of the right to accurate information.[AI generated]
AI principles
Accountability, Safety, Robustness & digital security, Transparency & explainability, Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Government, General public

Harm types
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Official X account posts misinformation about Kyoto City, possibly created by generative AI; removal requested

2025-10-22
日本経済新聞
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI) was involved in creating the erroneous summary that led to the spread of false information about Kyoto City's policies. This misinformation was disseminated through an official channel, causing harm to the community by spreading false narratives about local government actions. Although the misinformation was removed promptly, the event constitutes an AI Incident because the AI-generated content directly led to the harm of spreading false information, which can be considered harm to communities and a violation of rights to accurate information.

Did AI mix up Kyoto and Shiga? Misinformation posted on X prompts alert from Kyoto City: "Please share accurate information"

2025-10-22
ITmedia
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' produced a false summary that misrepresented the location of a municipal policy change, leading to the spread of misinformation on a public platform. This misinformation was significant enough to prompt official correction and removal requests, indicating realized harm to the community's information environment and potentially to the reputation of Kyoto City. The AI's malfunction in summarizing and attributing news content directly caused this harm. Hence, it meets the criteria for an AI Incident as the AI system's malfunction directly led to harm (misinformation dissemination).

Misinformation about Kyoto City online

2025-10-21
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI integrated into X) was involved in producing and spreading false information about Kyoto City. This misinformation constitutes harm to the community by spreading false narratives that could mislead citizens. Since the misinformation was posted and required removal, the harm has occurred, making this an AI Incident. The AI's role in aggregating unrelated information directly led to the misinformation's creation and dissemination.

Claim that Kyoto City will "discipline even simple mistakes" is misinformation; city troubled as X's AI news summary spreads: "We want accurate information to reach people"

2025-10-22
J-CAST News
Why's our monitor labelling this an incident or hazard?
An AI system (Grok) was involved in generating a false news summary that misattributed a policy change, leading to misinformation spreading on social media. However, there is no indication that this misinformation caused actual harm such as injury, rights violations, or disruption. The event involves the use and malfunction (incorrect output) of an AI system leading to misinformation, but the harm is limited to confusion and reputational concerns without realized damage. Therefore, this qualifies as Complementary Information because it provides an update on misinformation caused by AI summarization and the subsequent corrective actions, rather than constituting a direct AI Incident or a plausible future hazard.

Official X account posts misinformation about Kyoto City, possibly created by generative AI; removal requested

2025-10-22
神戸新聞
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI) was used to create a summary that contained incorrect information about Kyoto City's policies, which was then distributed through an official social media channel. This misinformation caused harm by spreading false narratives about public administration, which can be considered harm to communities and a violation of the right to accurate information. Since the misinformation was actively distributed and required removal, the harm is realized, making this an AI Incident rather than a hazard or complementary information.

Official X account posts misinformation about Kyoto City

2025-10-22
埼玉新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the misinformation was generated by an AI system and distributed via an official platform, leading to the spread of false information about a public administration. This constitutes a violation of rights related to accurate information and harms the community by spreading misinformation. Since the misinformation was actively distributed and required removal, the harm has materialized, qualifying this as an AI Incident rather than a potential hazard or complementary information.

Official X account posts misinformation about Kyoto City; possibly created by generative AI, removal requested

2025-10-22
四国新聞社
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to create a summary that contained false information about a public administration, which was then disseminated through an official AI-related service on a social media platform. This misinformation could harm public understanding and trust, constituting harm to communities. The harm occurred as the misinformation was actively distributed before deletion. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated false information.