AI-generated deepfakes fuel misinformation in Taiwan election

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

During Taiwan's presidential election, AI-generated deepfake videos and synthetic audio circulated on social media, falsely depicting candidate Lai Ching-te's running mate Hsiao Bi-khim as a foreigner and fabricating statements. Experts warn such AI-driven disinformation undermines democratic processes and echoes global risks flagged in WEF’s 2024 Global Risks Report.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article does not describe any realized harm caused by AI-generated misinformation; rather, it warns of the plausible future risk of such content leading to social and political harm. Because the risk is credible and significant but no specific harm has yet occurred, this qualifies as an AI Hazard.[AI generated]
AI principles
Transparency & explainability; Accountability; Robustness & digital security; Safety; Democracy & human autonomy; Respect of human rights

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security

Affected stakeholders
General public; Other

Harm types
Reputational; Public interest

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

WEF research report: With elections looming, malicious disinformation becomes the world's biggest risk | 三立新聞網 SETN.COM

2024-01-10
三立新聞
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm caused by AI-generated misinformation; rather, it warns of the plausible future risk of such content leading to social and political harm. Because the risk is credible and significant but no specific harm has yet occurred, this qualifies as an AI Hazard.
Report: Malicious disinformation becomes the world's biggest risk

2024-01-11
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The article discusses the potential and ongoing societal risks posed by AI-driven misinformation, but does so at a high-level, strategic, and anticipatory level, without detailing a concrete incident or harm caused by an AI system. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides important context and expert assessment about AI-related risks in the global ecosystem, aiding understanding and future risk management.
Malicious disinformation floods in ahead of Taiwan's election; World Economic Forum confirms it as the world's biggest risk | 自由時報電子報

2024-01-10
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation and AI-driven misinformation spread) in the active dissemination of malicious false information that harms political candidates and the democratic process in Taiwan. This harm to communities and potential violation of political rights has already occurred, making it an AI Incident. The article describes realized harm caused by AI-enabled misinformation campaigns, not just potential or future harm, and thus it qualifies as an AI Incident rather than a hazard or complementary information.
World Economic Forum: AI and extreme weather are the top global risks of the coming years | 聯合新聞網

2024-01-11
UDN
Why's our monitor labelling this an incident or hazard?
The article centers on a risk report forecasting that AI technologies, especially generative AI like ChatGPT, could be used to spread misinformation and manipulate public opinion, potentially leading to social unrest and political instability. However, it does not describe any concrete AI incident or harm that has already occurred. The AI involvement is in the context of plausible future risks rather than realized harm. Therefore, this qualifies as an AI Hazard, reflecting credible potential for harm but no actual incident yet.
AI disinformation and extreme climate: the world's biggest risks | 聯合新聞網

2024-01-11
UDN
Why's our monitor labelling this an incident or hazard?
The article focuses on expert opinions and survey results about potential future risks from AI-generated misinformation and extreme weather, without reporting any actual AI-related harm or incident occurring at present. It highlights plausible future harms that AI systems could cause, such as manipulation of public opinion and social instability, but these remain risks rather than realized events. Therefore, this qualifies as an AI Hazard, as it describes credible potential harms from AI use in misinformation but does not document an actual AI Incident or complementary information about responses or mitigation.
WEF: AI-fabricated disinformation is a major enemy of the economy | 聯合新聞網

2024-01-11
UDN
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (generative AI like ChatGPT) and their potential misuse to spread misinformation, which could plausibly lead to significant harm such as social unrest, violence, and undermining government legitimacy. However, the article does not report any realized harm or specific event caused by AI misinformation. Therefore, it fits the definition of an AI Hazard, as it highlights credible risks that could plausibly lead to an AI Incident in the future.
WEF research report: With elections looming, malicious disinformation becomes the world's biggest risk | 經濟日報

2024-01-10
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential risks posed by AI-generated misinformation and disinformation in upcoming elections, emphasizing plausible future harms such as social and political division. However, it does not report any actual event where AI systems have directly or indirectly caused harm. Therefore, this qualifies as an AI Hazard, as it warns about credible risks that could plausibly lead to AI Incidents but have not yet materialized.
WEF: AI-fabricated disinformation is a major enemy of the economy; extreme weather is the biggest long-term crisis | 經濟日報

2024-01-11
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the risk of AI-generated misinformation as a major short-term global risk that could lead to societal harm such as violence and political instability. It involves AI systems (generative AI like ChatGPT) and their potential misuse. However, it does not report any actual harm or incident that has occurred, only the plausible risk of such harm in the near future. Hence, it fits the definition of an AI Hazard, where AI system use could plausibly lead to an AI Incident but no direct or indirect harm has yet materialized.
Taiwan's election: flooded with malicious disinformation

2024-01-13
RFI
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos and audio that spread false information during elections, which directly harms communities by undermining democratic processes and potentially violating rights related to fair political participation. The harm is realized and ongoing, not just a potential risk. Therefore, this qualifies as an AI Incident due to the direct and active use of AI-generated misinformation causing harm.
Report: Mis- and disinformation become the world's biggest risk

2024-01-11
明報新聞網 (Ming Pao News)
Why's our monitor labelling this an incident or hazard?
The event discusses the widespread use of misinformation and disinformation as a major global risk, which aligns with known AI capabilities in generating and spreading false information. Although no specific AI incident is reported, the plausible future harm from AI-driven misinformation campaigns is clearly articulated, fitting the definition of an AI Hazard. There is no indication of a realized harm directly attributed to AI in this report, so it cannot be classified as an AI Incident. The report is not merely complementary information about AI developments or governance responses but highlights a credible risk involving AI's role in misinformation.
Global Risks Report: AI disinformation tops the list of threats | 新唐人电视台

2024-01-10
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The article centers on the potential risks posed by AI-generated misinformation and its capacity to manipulate public opinion and elections globally. While it clearly identifies AI as a key factor in these risks, it does not report any actual incidents of harm occurring due to AI misuse. Instead, it presents a credible warning about plausible future harms from AI-driven disinformation campaigns. Therefore, this event fits the definition of an AI Hazard, as it highlights a credible risk that AI systems could plausibly lead to significant harm in the near future, but no specific AI Incident has yet materialized.
WEF research report: With elections looming, malicious disinformation becomes the world's biggest risk | 中央社 CNA

2024-01-10
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated misinformation and disinformation as a significant risk that could plausibly lead to harm, such as social and political polarization and disruption. However, it does not describe any specific realized harm or incident caused by AI systems, but rather warns of potential future harms. Therefore, this qualifies as an AI Hazard, as the AI system's use in generating false information could plausibly lead to an AI Incident involving harm to communities and political processes.
WEF report: AI-driven misinformation is the world's biggest short-term threat | Anue鉅亨

2024-01-10
Anue鉅亨
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential risks and threats posed by AI-generated misinformation and malicious uses of AI, such as deepfakes and automated cyberattacks, which could plausibly lead to significant harms including societal polarization and undermining democratic processes. However, it does not report any actual incident where AI has directly or indirectly caused harm. Therefore, it fits the definition of an AI Hazard, as it describes credible future risks stemming from AI development and use without evidence of realized harm.
Within two years, AI is the biggest source of disorder; within ten, the real threat is climate disaster

2024-01-12
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI-driven misinformation as a top risk that could destabilize societies and governments, leading to polarization and conflict. This aligns with the definition of an AI Hazard, where AI use could plausibly lead to harm but no specific incident of harm is reported. The report is a high-level risk assessment rather than a report of an actual AI Incident or a complementary update on a known event. Hence, the classification as AI Hazard is appropriate.
AI deepfakes drive social polarization; WEF report ranks it the world's top risk

2024-01-11
公共電視
Why's our monitor labelling this an incident or hazard?
The article centers on a risk report and expert opinions about the potential for AI-generated misinformation and deepfakes to cause societal harm, especially around elections, which is a credible future risk. There is no description of a specific AI system causing harm or malfunctioning, nor an event where harm has already occurred. The focus is on the identification and ranking of risks, making it a forward-looking assessment rather than a report of an incident or hazard. This fits the definition of Complementary Information, as it provides context and understanding of AI-related risks without reporting a concrete AI Incident or AI Hazard.
Taiwan voters face flood of pro-China disinformation

2024-01-10
Daily Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake videos, which are AI-generated manipulated media, as part of a coordinated disinformation campaign. This campaign has already influenced public opinion and voter perceptions, causing harm to the democratic process and communities in Taiwan. The AI system's development and use in creating deepfakes and spreading misinformation directly led to these harms, fitting the definition of an AI Incident.
Taiwan voters face flood of pro-China disinformation

2024-01-10
Courier-Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfakes, which are AI-generated manipulated videos, as part of a disinformation campaign targeting Taiwan's voters. This campaign has already caused harm by spreading false claims and misleading content that discredits political candidates and influences public opinion. The AI system's involvement in creating and distributing these videos directly leads to harm to communities and violates democratic rights, fitting the definition of an AI Incident.
Taiwan voters face flood of pro-China disinformation

2024-01-10
FOX 11 41 Tri Cities Yakima
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfakes (AI-generated manipulated videos) and coordinated misinformation campaigns that have already been disseminated and viewed by millions, directly impacting voters and the election environment. This constitutes a violation of rights and harm to communities through misinformation and election interference. The AI system's role in generating deepfakes and spreading disinformation is pivotal to the harm described, qualifying this as an AI Incident rather than a hazard or complementary information.
Taiwan voters face flood of pro-China disinformation

2024-01-10
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfakes, which are AI-generated manipulated videos, as part of a sustained disinformation campaign linked to Beijing aimed at discrediting political candidates in Taiwan. The disinformation campaign has already impacted voters by spreading false claims and misleading content, thus causing harm to the democratic process and political rights. The AI system's use in generating deepfakes and disseminating misinformation directly leads to harm as defined under violations of human rights and harm to communities. Therefore, this event qualifies as an AI Incident.
Taiwan Election: Pro-China Disinfo Hits Taiwanese Voters, Anti-China Candidates - News18

2024-01-10
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfakes and AI-generated videos as part of a disinformation campaign that is actively influencing voters and spreading false claims. This disinformation harms the democratic process and the rights of political candidates, which falls under harm to communities and violations of rights. The AI system's use in generating and disseminating manipulated content directly leads to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Taiwan voters face flood of pro-China disinformation

2024-01-10
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake videos, which are AI-generated manipulated media, as part of a sustained and coordinated disinformation campaign. This campaign is linked to Beijing and targets Taiwan's election, aiming to discredit candidates and influence public opinion. The harm caused includes misinformation that undermines democratic processes and political rights, fitting the definition of harm to communities and violations of rights. Since the disinformation is actively spreading and affecting voters, this qualifies as an AI Incident due to the direct role of AI-generated content in causing harm.
Taiwan election: Pro-China disinformation targets voters with deepfakes and manipulative videos

2024-01-10
WION
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of deepfakes and manipulative videos, which are AI-generated or AI-assisted content designed to deceive voters. The disinformation campaign is actively occurring and has a direct impact on the election, which is a fundamental democratic process, thus constituting harm to communities and a violation of political rights. Since the harm is realized and ongoing, this qualifies as an AI Incident rather than a hazard or complementary information.