AI-Generated Fake News Causes Food Safety Panic in Taiwan

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A man in Taiwan used AI to fabricate and spread false news and images on Facebook, claiming multiple people in Kaohsiung were poisoned by potatoes. The misinformation caused public fear, disrupted business operations, and required significant government resources to clarify. Authorities quickly investigated and prosecuted the individual under food safety laws.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was explicitly involved: the news reports and images were AI-generated. The resulting misinformation directly caused public fear, social disruption, and economic harm to businesses, meeting the criteria for an AI Incident under violations of law and harm to communities.[AI generated]
AI principles
Safety; Democracy & human autonomy

Industries
Food and beverages; Media, social platforms, and marketing

Affected stakeholders
General public; Business

Harm types
Economic/Property; Psychological; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Taoyuan Man Accused of Spreading Claims That Kaohsiung Residents Were Poisoned by Potato Products; Taoyuan Prosecutors Indict and Seek a Heavy Sentence | UDN

2026-05-04
UDN
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved: the news reports were AI-generated. The resulting misinformation directly caused public fear, social disruption, and economic harm to businesses, meeting the criteria for an AI Incident under violations of law and harm to communities.
Man Spreads AI Fake News of "Potato Poisoning"; Taoyuan Prosecutors Swiftly Indict and Seek a Heavy Sentence

2026-05-04
China Times (中時新聞網)
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved, as the man used AI to create the fake news. The AI-generated misinformation directly harmed the community by inducing fear and social instability, which qualifies as harm to communities under the AI Incident definition; because the misuse of an AI system led to realized harm, the event is classified as an AI Incident.
Man Indicted, with a Heavy Sentence Sought, for Spreading on Facebook That Kaohsiung Residents Were Poisoned by Potatoes - Society - Liberty Times Net

2026-05-04
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated fake news being spread on social media, which is an AI system's output. The harm caused includes fear among the public, economic impact on businesses, and social instability, all of which fall under harm to communities. Since the AI-generated misinformation directly led to these harms, this qualifies as an AI Incident.
52-Year-Old Man Surnamed Wang Spreads False Claims of Potato Poisoning; Taoyuan Prosecutors Indict and Request a Heavy Sentence | The Epoch Times

2026-05-04
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the individual used AI to fabricate false news reports about food poisoning incidents, which were then spread online. This AI-generated misinformation directly led to harm by causing public fear, economic impact on businesses, and social unrest. Therefore, the event meets the criteria for an AI Incident because the AI system's use directly caused harm to communities and violated public safety laws.
Taoyuan Man Surnamed Wang Spreads Fake News of Potato Poisoning; Taoyuan Prosecutors Swiftly Indict | Society | Newtalk

2026-05-04
Newtalk (新頭殼)
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate false news reports that caused public fear and affected the operations of related businesses, which constitutes harm to communities and a violation of legal obligations related to food safety. The AI system's use in creating and spreading misinformation directly led to social harm and legal consequences, meeting the criteria for an AI Incident.
52-Year-Old Man Surnamed Wang Spreads False Claims of Potato Poisoning; Taoyuan Prosecutors Indict and Request a Heavy Sentence | The Epoch Times - Taiwan

2026-05-04
The Epoch Times - Taiwan (大紀元時報 - 台灣)
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the individual used AI to generate fake news about food poisoning, which caused public fear and affected the operations of related businesses, thus harming communities and property indirectly. The AI system's use in creating and disseminating false information directly contributed to these harms. Hence, this is an AI Incident due to the realized harm caused by AI-generated misinformation.
All Fake! Online Post Claims "Multiple People in Kaohsiung Poisoned by Potatoes"; Man Who Made the AI Fake News Says He Meant to Warn... Indicted Within Four Days

2026-05-04
mnews.tw
Why's our monitor labelling this an incident or hazard?
The use of AI to fabricate false news that caused public panic qualifies as an AI Incident: the misuse of the AI system directly harmed the community by spreading misinformation and causing social disruption, meeting the criteria under violations of rights and harm to communities.
Rumor Spread with AI Images That "Kaohsiung Residents Were Poisoned by Potatoes"; Taoyuan Prosecutors Indict 51-Year-Old Man and Seek a Heavy Sentence | NextApple News

2026-05-04
NextApple News (壹蘋新聞網)
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate synthetic images that were part of a false narrative causing harm to communities (social unrest and fear) and economic harm to businesses. The use of AI-generated fake news directly led to these harms, fulfilling the criteria for an AI Incident. The event describes realized harm caused by the AI system's outputs, not just potential harm, so it is not merely a hazard or complementary information.
Breaking News / Disinformation Spread! Man "Used AI to Fabricate Potato Poisoning Rumor"; Taoyuan Prosecutors Swiftly Indict and Request a Heavy Sentence - FTV News

2026-05-04
FTV News (民視新聞網)
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to generate false images and text spreading misinformation about food safety, which caused public fear and disrupted normal operations. The harm is realized and directly linked to the AI-generated content. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities (social panic) and harm to property (business disruption).