AI-Generated Fake Images Used in Disinformation Campaign Against Taiwanese Baseball Fans

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated images falsely depicting Taiwanese baseball fans leaving litter at Tokyo Dome were spread online, primarily by accounts linked to Chinese disinformation networks. The images, later debunked by Japanese media, caused reputational harm and fueled misinformation, highlighting the malicious use of AI in coordinated disinformation campaigns targeting Taiwan.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (Google's Gemini AI) was explicitly used to generate fake images that were then spread to mislead the public and defame Taiwanese people. The harm is realized as the misinformation damages the reputation of a community, which falls under harm to communities and violations of rights. The event involves the use and misuse of an AI system to cause harm, meeting the criteria for an AI Incident.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Repeat offender spreads rumors again! AI photos smear Taiwanese fans for filling Tokyo Dome with trash; account exposed for previously posting a fake photo of "bodies piled up at a funeral home"

2026-03-10
中時新聞網
Why's our monitor labelling this an incident or hazard?
An AI system (Google's Gemini AI) was explicitly used to generate fake images that were then spread to mislead the public and defame Taiwanese people. The harm is realized as the misinformation damages the reputation of a community, which falls under harm to communities and violations of rights. The event involves the use and misuse of an AI system to cause harm, meeting the criteria for an AI Incident.

Taiwanese fans littered at Tokyo Dome? She exposes the CCP cyber army's cognitive-warfare playbook

2026-03-10
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images used in a disinformation campaign, an instance of AI system use producing misinformation. However, the article does not report realized harm such as injury, rights violations, or significant disruption caused by the AI content; it reports only the potential for harm and the exposure of the campaign. Its main focus is on revealing the AI-generated fake images and the disinformation tactics, which amounts to complementary information about AI misuse and societal/governance responses. It therefore does not meet the threshold for an AI Incident or AI Hazard, but it provides important context about AI-related disinformation.

Taiwanese fans smeared over littering at Tokyo Dome; Lin Chu-yin: as Taiwan shines, the CCP scrambles with petty tricks

2026-03-10
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of AI-generated fake images (AI deepfakes) used maliciously to spread false information, which harms the reputation of a group (Taiwanese fans) and misleads the public. This constitutes a violation of rights and harm to communities through misinformation. Since the AI-generated content has already been disseminated and caused reputational harm, this qualifies as an AI Incident. The article also discusses the societal and governance responses to this misuse, but the primary focus is on the harm caused by the AI-generated fake images.

AI fake image of Tokyo Dome covered in trash! Not the first rumor spread by Chinese cyber-army account "Xu Fang-li"

2026-03-10
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating fake images (AI-generated fake photos) that were used to spread false information. The harm caused is reputational and informational, affecting communities and public perception, which fits the definition of harm to communities. The event involves the use of AI in a malicious way to create and spread misinformation, which is a direct AI Incident under the framework. Therefore, this event qualifies as an AI Incident.

Hung Pu-chao's perspective: the Tokyo Dome fake-image incident, an information war targeting Taiwan's image

2026-03-10
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a fake image that was deliberately spread to mislead and harm Taiwan's social image, which is a clear case of harm to communities through misinformation. The AI-generated image directly contributed to the spread of false information, causing reputational damage and social harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm. The article also discusses the broader implications and the importance of fact-checking, but the primary focus is on the realized harm caused by the AI-generated fake image.

Video: 50-year-old man faces legal action for spreading the rumor that "Taiwan's WBC team admitted to throwing games"; apologizes repeatedly, says he was "frightened"

2026-03-10
TVBS
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of AI-generated fake images used to spread false information. The misinformation has led to reputational harm and social disruption, which constitutes harm to communities. Additionally, a person spreading false claims was legally pursued for defamation, indicating recognized harm. The AI system's use in generating and spreading false content directly contributed to these harms, meeting the criteria for an AI Incident.

AI-faked photos of Taiwanese fans "leaving Tokyo Dome strewn with trash"; Lin Chu-yin: account IPs located in Hong Kong

2026-03-10
三立新聞
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create fabricated images (AI fake photos) that falsely accuse a group of people of misconduct, leading to misinformation and reputational damage. This misinformation campaign is a form of harm to communities and violates rights to accurate information. The article confirms the AI-generated nature of the images and the malicious intent behind their spread, fulfilling the criteria for an AI Incident due to realized harm caused by AI-generated disinformation.

Breaking: Tokyo Dome trash photos are AI fakes; Lin Chu-yin reveals two inside details of the Chinese cyber army's operation

2026-03-10
民視新聞網
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fake images that were disseminated to mislead and harm the reputation of Taiwanese baseball fans. The harm is realized in the form of misinformation and reputational damage, which affects communities and public perception. The article confirms the AI-generated nature of the images and the malicious intent behind their creation and spread, fulfilling the criteria for an AI Incident. The involvement is through the use of AI-generated content in a disinformation campaign, directly leading to harm to communities.

Taiwanese fans littered at Tokyo Dome? He reveals the truth: suspected cognitive warfare

2026-03-13
民視新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the image showing Taiwanese fans littering was AI-generated and was disseminated widely to mislead and damage Taiwan's image. The AI system's role in generating false content that is used for disinformation and social manipulation fits the definition of an AI Incident, as it directly leads to harm to communities and societal trust. Therefore, this event qualifies as an AI Incident due to the realized harm caused by AI-generated misinformation and its social consequences.