AI-Generated Videos Spread False Government Subsidy Claims in Taiwan


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated videos falsely claimed that the Taiwanese government would issue a NT$12,000 subsidy to seniors in 2026. Fact-checking organizations confirmed the videos are fabricated and that no such subsidy is planned. Authorities warn the public not to believe or share this AI-generated misinformation, which could mislead or scam vulnerable groups.[AI generated]

Why's our monitor labelling this an incident or hazard?

The videos are AI-generated and spread false claims about government subsidies, which could plausibly lead to harm by misleading the public, especially vulnerable groups like the elderly. However, since the fact-checking center has clarified the misinformation and no direct harm has been reported, this event represents a potential risk rather than realized harm. Therefore, it qualifies as an AI Hazard due to the plausible future harm from AI-generated misinformation.[AI generated]
AI principles
Transparency & explainability; Safety; Accountability; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Economic/Property; Public interest

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard


Online rumor: "Government to hand out NT$12,000 Lunar New Year gift?" Fact-checking center reveals the truth!

2025-12-28
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The AI system is involved in generating false video content, which could cause harm by spreading misinformation (harm to communities). However, the article does not report that harm has materialized or that the misinformation has caused significant damage; instead, it focuses on fact-checking and debunking the false claim. This aligns with Complementary Information, as it updates and clarifies the situation regarding a prior AI-generated misinformation event rather than reporting a new incident or hazard.

Government to issue "NT$12,000 subsidy" for Lunar New Year? The fact-checking center weighs in

2025-12-28
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content used to create false videos spreading misinformation. The misinformation has been disseminated, but the article focuses on clarifying and debunking the false claim rather than on direct harm caused by the videos. There is no indication that the misinformation has led to realized harm such as injury, rights violations, or significant community harm. The article's main focus is the fact-checking response and public warning, a governance and societal response to AI-generated misinformation. Therefore, this is Complementary Information rather than an AI Incident or AI Hazard.

"NT$12,000 subsidy" to be handed out for Lunar New Year? Fact-checking center clarifies false information

2025-12-28
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The videos are AI-generated and spread false claims about government subsidies, which could plausibly lead to harm by misleading the public, especially vulnerable groups like the elderly. However, since the fact-checking center has clarified the misinformation and no direct harm has been reported, this event represents a potential risk rather than realized harm. Therefore, it qualifies as an AI Hazard due to the plausible future harm from AI-generated misinformation.

Online rumor claims the government will hand out NT$12,000! Fact-checking center debunks it: pure fabrication

2025-12-28
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the video is likely AI-generated, producing false information about government subsidies. The use of AI to create and spread misinformation poses a plausible risk of harm to communities by misleading the public and potentially causing confusion or misinformed decisions. However, since the misinformation has been officially debunked and no direct harm has been reported, the event does not meet the threshold for an AI Incident. Instead, it is an AI Hazard because the AI-generated content could plausibly lead to harm if believed or spread further.

Can seniors claim NT$12,000 for Lunar New Year 2026? Fact-checking center reveals the truth and warns of personal data leaks

2025-12-28
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated videos spreading false information, which is a misuse of AI systems to create deceptive content. This misuse could plausibly lead to harm such as scams and personal data breaches, especially targeting vulnerable elderly populations. Although harm is not explicitly reported as having occurred, the potential for harm is credible and significant. The article's main focus is on fact-checking and warning the public, which aligns with providing complementary information about AI misuse and its societal impact rather than reporting a direct AI incident or hazard event. Therefore, it is best classified as Complementary Information.

Online rumor: "Government to hand out NT$12,000 Lunar New Year gift?" Fact-checking center reveals the truth!

2025-12-28
中時新聞網
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the video is AI-generated, producing false content about government subsidies. The misinformation could plausibly lead to harm by misleading the public, causing confusion or misinformed decisions, which constitutes harm to communities. Since no actual harm has been reported and the fact-checking center is clarifying the falsehood, the event is best classified as an AI Hazard, reflecting the plausible future harm from AI-generated misinformation rather than an AI Incident where harm has already occurred.

Government to issue "NT$12,000 subsidy" for Lunar New Year? The fact-checking center weighs in

2025-12-28
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated synthetic videos spreading false information about government subsidies. While the AI system is used to create misleading content, the article clarifies that the subsidy does not exist and no harm has yet materialized. The AI-generated misinformation could plausibly lead to harm if believed and acted upon by the public, such as confusion or misallocation of resources, but the article focuses on warning and fact-checking. Therefore, this qualifies as an AI Hazard due to the plausible future harm from AI-generated misinformation, rather than an AI Incident where harm has already occurred.

Online rumor: government handing out "NT$12,000 in your pocket" for Lunar New Year? Fact-checking center reveals the truth

2025-12-28
TVBS
Why's our monitor labelling this an incident or hazard?
The AI system's involvement lies in generating false videos that spread misinformation. While misinformation can harm communities, the article states the claims are false and no such subsidy exists, so no realized harm from the AI-generated content is reported. The main focus is on fact-checking and clarifying the truth, which fits the definition of Complementary Information. No direct or indirect harm caused by the AI system's use is described here, nor a plausible future harm from its development or use in this context. Hence, it is not an AI Incident or AI Hazard.

Online rumor claims the government will hand out NT$12,000! Fact-checking center debunks it: pure fabrication

2025-12-28
東森美洲電視
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the videos are likely AI-generated. The event concerns the use of AI to create and spread false information that could mislead the public. However, the article reports no realized harm such as financial loss, injury, or rights violations resulting from this misinformation; its main focus is clarifying and debunking the false claim. This qualifies as an AI Hazard: the AI-generated content could plausibly lead to harm (e.g., public confusion or misinformed decisions), but no direct harm has been documented. It is not Complementary Information, because the main subject is the misinformation event itself rather than a response or update to a prior incident, and it is not an AI Incident, because no actual harm has occurred yet.