AI Deepfakes Used in Disinformation Campaigns Targeting Taiwan's Leaders


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos and audio impersonating Taiwan's President Lai Ching-te and other officials have been disseminated online, spreading false pro-China and anti-US narratives. Evidence points to coordinated campaigns by Chinese actors using AI to undermine Taiwan's democracy and influence public opinion, raising national security concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated deepfake videos used to spread false political messages that align with a foreign government's propaganda. These videos have been disseminated to mislead the public, potentially causing harm to societal trust and democratic institutions, which fits the harm to communities category. The AI system's role in fabricating and distributing these videos is central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Transparency & explainability; Democracy & human autonomy; Respect of human rights; Accountability; Safety

Industries
Government, security, and defence; Media, social platforms, and marketing

Affected stakeholders
General public; Government

Harm types
Public interest; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


National security officials expose "AI deepfake channel" churning out 48-second videos pushing pro-Beijing "doubt the US, doubt the military, doubt Lai" narratives

2025-09-20
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos used to spread false political messages that align with a foreign government's propaganda. These videos have been disseminated to mislead the public, potentially causing harm to societal trust and democratic institutions, which fits the harm to communities category. The AI system's role in fabricating and distributing these videos is central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

National security officials: new example of Chinese AI deepfakes altering President Lai's remarks to steer public opinion

2025-09-20
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake technology being used to create fabricated videos and audio impersonations that spread false information and political disinformation. The harms include misinformation campaigns targeting democratic countries, attempts to influence elections, and systemic erosion of trust in democratic institutions, which constitute harm to communities and violations of rights. The AI system's use in generating these deepfakes is central to the incident, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident.

Chinese AI deepfakes alter Lai Ching-te's remarks; national security officials say Beijing is mounting systematic attacks to steer public opinion

2025-09-20
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake technology being used to create fabricated videos and audio impersonations of political leaders, which are actively spreading false narratives and disinformation. This has already occurred and is causing harm by misleading the public, undermining democratic institutions, and potentially influencing elections. The AI system's use in generating these deepfakes is central to the harm described. The harm is realized, not just potential, and includes violations of rights and harm to communities through misinformation and manipulation. Hence, this event meets the criteria for an AI Incident.

Conversation between Lai Ching-te and Hsiao Bi-khim fabricated! "AI deepfake channel" spreads disinformation

2025-09-21
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake technology being used to fabricate videos of political leaders, spreading false narratives that align with foreign political agendas. This constitutes a direct harm to communities through misinformation and political manipulation, fulfilling the criteria for an AI Incident. The involvement of AI in generating deepfake videos that cause social and political harm is clear and direct, not merely potential or speculative.

National security officials expose "AI deepfake channel" churning out 48-second videos pushing pro-Beijing "doubt the US, doubt the military, doubt Lai" narratives - Taipei - Liberty Times Net

2025-09-20
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos used to spread false political narratives, which is a direct use of AI systems to cause harm. The harm includes misinformation, erosion of trust in democratic institutions, and potential social destabilization, which are harms to communities and violations of rights. The AI system's role is pivotal in fabricating realistic but false content that misleads viewers. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

National security officials: new example of Chinese AI deepfakes altering President Lai's remarks to steer public opinion | United Daily News

2025-09-20
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake AI for voice and video synthesis, generative AI for content creation) in malicious disinformation campaigns targeting democratic countries. The harms include violations of rights (manipulation of political discourse), harm to communities (erosion of trust in democratic institutions), and systemic societal harm. The article describes actual occurrences of AI-generated deepfakes being disseminated and used to influence elections and public opinion, fulfilling the criteria for an AI Incident. The involvement of AI is explicit and central to the harm described, and the harm is realized, not merely potential.

National security officials: new example of Chinese AI deepfakes altering President Lai's remarks to steer public opinion | Politics | Central News Agency (CNA)

2025-09-20
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake videos and audio impersonations, which have been deployed in disinformation campaigns targeting democratic countries. These campaigns have already caused harm by spreading false information, undermining trust in democratic institutions, and attempting to influence election outcomes. The AI's role is pivotal in generating realistic fake content that facilitates these harms. Therefore, this qualifies as an AI Incident due to the direct and indirect harm caused to communities and democratic processes through the malicious use of AI deepfake technology.

National security officials: new example of Chinese AI deepfakes altering President Lai's remarks to steer public opinion - Radio Taiwan International (Rti)

2025-09-20
Radio Taiwan International (Rti)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake technology being used to create fake videos and audio impersonations that spread false information and influence political discourse. This use of AI has directly led to harm by undermining democratic processes and spreading disinformation, which constitutes harm to communities and violations of rights. The involvement of AI in generating these deepfakes and their deployment in coordinated disinformation campaigns meets the criteria for an AI Incident, as the harm is realized and ongoing.

President Lai Ching-te targeted by AI deepfakes; experts urge the public to strengthen media literacy

2025-09-20
Public Television Service (PTS)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used to create realistic but false videos and audio of political leaders, which have been disseminated to mislead the public and influence political outcomes. This constitutes harm to communities through misinformation and psychological operations, as well as potential harm to national security. The AI system's use directly leads to these harms, fulfilling the criteria for an AI Incident. The article describes actual occurrences of these harms, not just potential risks, so it is not merely a hazard or complementary information.

National security officials expose AI deepfake channel churning out 48-second videos pushing pro-Beijing "doubt the US, doubt the military, doubt Lai" narratives | Politics | SETN.COM

2025-09-20
SETN News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos used to fabricate political conversations and spread false narratives that align with a foreign adversary's agenda. The disinformation is actively disseminated and intended to deceive the public, which can erode trust in democratic institutions and cause systemic harm to society. The AI system's role in generating these videos is central to the harm caused. This meets the criteria for an AI Incident as the AI system's use has directly led to harm to communities and violation of rights.

Behind the news: 48-second AI fake videos that "praise China, belittle Taiwan, and criticize the US" push pro-Beijing narratives; national security officials issue a warning | Politics | SETN.COM

2025-09-21
SETN News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake videos (deepfakes) used to spread false political narratives, which have already been disseminated and viewed by the public. The AI system's use directly leads to misinformation and potential social harm, fulfilling the criteria for an AI Incident. The harm is not just potential but ongoing, as the videos are actively shared and could influence public perception and behavior. The involvement of AI in fabricating realistic but false content is central to the incident.

Behind the news: from the US Secretary of State to Taiwan's president... AI channel mounts systematic attacks; national security agencies track down the culprit | Politics | SETN.COM

2025-09-21
SETN News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake technology being used to impersonate political figures such as the US Secretary of State and Taiwan's President, with the intent to deceive and manipulate. It details actual occurrences of AI-generated disinformation campaigns causing harm to democratic institutions and public trust. The involvement of AI in generating fake audio and video content that has been disseminated and caused harm meets the criteria for an AI Incident. The harms include misinformation campaigns affecting communities and violations of rights related to political manipulation. Therefore, this event is classified as an AI Incident.

AI fabricates conversation between Lai Ching-te and Hsiao Bi-khim, claiming US arms purchases are for "raking in money"

2025-09-21
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology to create synthetic videos of political leaders. The AI-generated content is used to spread false narratives that align with hostile foreign propaganda, directly causing harm to communities by misleading the public and undermining democratic processes. This fits the definition of an AI Incident because the AI system's use has directly led to harm through misinformation and potential national security risks. The discussion of legal and technical responses further supports the recognition of realized harm rather than just potential risk.

Conversation between president and vice president fabricated; legislator urges national security investigation | Wang Ting-yu | AI | The Epoch Times

2025-09-21
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake videos (deepfakes) of political leaders spreading false statements that damage social cohesion and trust between democratic allies. This is a direct use of AI systems to create misleading content that harms communities and violates rights by manipulating political discourse. The harm is realized as the videos are publicly disseminated and influence public opinion, meeting the criteria for an AI Incident. The involvement of AI in the creation of the fake videos and the resulting societal harm justifies classification as an AI Incident rather than a hazard or complementary information.

Conversation between president and vice president fabricated; legislator urges national security investigation | The Epoch Times - Taiwan

2025-09-21
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The article describes AI systems capable of generating realistic fake images and audio (deepfakes) of political figures. The fabricated content is used to spread false narratives that harm communities by fostering internal hatred and distrust, which constitutes harm to communities and a violation of rights. The event describes actual dissemination of such harmful AI-generated content, not just a potential risk, thus qualifying as an AI Incident under the framework.