China’s AI Surveillance App and Taiwan Election Disinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Beijing’s “China Consular” app compels overseas nationals to upload personal and travel details under the guise of consular services, enabling cross‐border monitoring and intimidation. Concurrently, China‐linked actors leveraged generative AI and fake accounts to spread tailored disinformation in Taiwan’s presidential poll, deepening divisions and eroding trust.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems (generative AI) being used to produce and disseminate false information that has already influenced elections, such as the Taiwan presidential election, which is a direct harm to political communities and democratic processes. The involvement of AI in generating and spreading disinformation that undermines election legitimacy and trust in media constitutes a violation of rights and harm to communities. The article also references specific AI-driven misinformation campaigns linked to state actors, confirming the AI system's role in causing harm. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability
Transparency & explainability
Respect of human rights
Democracy & human autonomy
Robustness & digital security
Human wellbeing
Privacy & data governance

Industries
Media, social platforms, and marketing
Digital security
Government, security, and defence

Affected stakeholders
General public

Harm types
Public interest
Reputational
Human or fundamental rights
Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard

Global 2024 election year: politics in the AI era faces a stress test

2024-01-19
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI) being used to produce and disseminate false information that has already influenced elections, such as the Taiwan presidential election, which is a direct harm to political communities and democratic processes. The involvement of AI in generating and spreading disinformation that undermines election legitimacy and trust in media constitutes a violation of rights and harm to communities. The article also references specific AI-driven misinformation campaigns linked to state actors, confirming the AI system's role in causing harm. Hence, this is an AI Incident rather than a hazard or complementary information.
Civil groups: China will keep acting as an amplifier of Taiwan's post-election internal conflicts

2024-01-19
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and fake accounts by Chinese actors to spread misinformation and manipulate public opinion in Taiwan's elections. This manipulation has directly led to harm by exacerbating social conflicts and spreading false narratives, which undermines democratic rights and harms community trust. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to communities.
01/19 Roundup of top news across the papers - Life - Liberty Times Net

2024-01-18
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
TikTok employs AI systems for content recommendation and moderation. The lawsuit claims that these AI-driven recommendations have directly led to children being exposed to harmful content, causing injury or harm to their health. This constitutes an AI Incident because the AI system's use has directly led to harm to a group of people (children) through exposure to inappropriate and harmful content. Therefore, the event meets the criteria for an AI Incident under the OECD framework.
Joseph Wu (吳釗燮): Information manipulation will only intensify in the future - Politics - Liberty Times Net

2024-01-20
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article centers on the threat of information manipulation and cognitive warfare, mentioning new technologies that could exacerbate these issues. While AI systems are likely involved in such information operations, the article does not specify any particular AI system, incident, or malfunction causing harm. It mainly provides a strategic and societal perspective on the risks and the need for collective efforts to safeguard democracy. Therefore, it fits best as Complementary Information, offering context and highlighting potential future risks without reporting a concrete AI Incident or AI Hazard.
Doublethink Lab (台灣民主實驗室): China's pre-election information operations against Taiwan amplified controversies - Politics - Liberty Times Net

2024-01-19
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event describes coordinated information manipulation involving short videos with highly similar scripts and materials, which suggests the use of AI or algorithmic systems for content generation or dissemination. The harm is realized as these operations amplify social conflicts and spread false rumors during an election, impacting community cohesion and democratic processes. Although AI is not explicitly named, the nature of the coordinated content production and dissemination aligns with AI system involvement. Hence, this qualifies as an AI Incident due to direct harm to communities through disinformation.
Cognitive warfare is no longer news — Joseph Wu: Authoritarian states threaten the openness and pluralism of democratic societies - Politics - Liberty Times Net

2024-01-20
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions information manipulation and cognitive warfare as real and ongoing issues affecting democratic elections, with new technologies aiding these efforts. Given the nature of modern information operations, it is reasonable to infer the involvement of AI systems in generating or amplifying misinformation and cognitive attacks. The harms described include threats to democracy, misinformation, and erosion of trust, which constitute harm to communities and violations of rights. Since these harms are occurring and linked to AI-enabled information manipulation, this qualifies as an AI Incident.
Civil groups: China will act as an amplifier of Taiwan's post-election internal conflicts, with AI-made content in the mix - Politics - Liberty Times Net

2024-01-19
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI) to produce and disseminate manipulated information that has directly led to harm by influencing public opinion and exacerbating social conflicts in Taiwan during an election period. The AI-generated content is used as a tool for information manipulation, which is a violation of rights and causes harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly contributed to realized harm in the form of misinformation and social disruption.
Cognitive warfare is no longer news — Joseph Wu: Authoritarian states threaten the openness and pluralism of democratic societies - Liberty Times video channel

2024-01-20
Liberty Times
Why's our monitor labelling this an incident or hazard?
While the article addresses information manipulation and cognitive warfare that could plausibly involve AI technologies (e.g., AI-generated misinformation or deepfakes), it does not describe a specific AI system causing direct or indirect harm at this time. The discussion is about the general threat and the need for resilience and cooperation among democratic societies. Therefore, this is best classified as Complementary Information, as it provides context and highlights governance and societal responses to AI-related threats without reporting a concrete AI Incident or AI Hazard.
Civil groups: China will keep acting as an amplifier of Taiwan's post-election internal conflicts | UDN

2024-01-19
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and fake accounts by Chinese actors to spread misinformation and manipulate public opinion in Taiwan's election, which has directly led to social polarization and misinformation harm. This fits the definition of an AI Incident because the AI system's use in generating false content and coordinating information operations has directly contributed to harm to communities and violations of rights. Therefore, the event is classified as an AI Incident.
Foreign AI disinformation incoming — Taiwanese civil groups: it may advance in three directions - The Epoch Times

2024-01-19
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI creating fake accounts and content) in the active manipulation of information during an election, which has directly led to harm to communities by spreading misinformation and undermining democratic processes. This fits the definition of an AI Incident because the AI system's use has directly caused harm through misinformation and social disruption. The article does not merely warn of potential harm but describes ongoing and realized harm from AI-generated disinformation campaigns.
French press review - French media: China trains in the desert to strike US aircraft carriers

2024-01-21
RFI
Why's our monitor labelling this an incident or hazard?
The article describes China's use of a large-scale aircraft carrier model in the desert for missile testing and simulated targeting, which likely involves AI-enabled military systems for guidance, simulation, or laser rangefinding. This use of AI in military training and weapons testing could plausibly lead to harm if these capabilities are used in conflict, thus constituting an AI Hazard. There is no report of actual harm or malfunction caused by AI systems, so it is not an AI Incident. The virus development is a biosecurity concern but does not explicitly involve AI. Other parts of the article are unrelated to AI. Hence, the event is best classified as an AI Hazard due to the plausible future harm from AI-enabled military applications.
Chinese embassies require overseas citizens to register — analysis: fake services, real surveillance | China Consular app | Overseas 110 | Transnational repression | NTD Television

2024-01-19
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The Chinese consulate's "China Consular" app is, at minimum, a large-scale data processing system that collects personal and travel information from overseas Chinese citizens. The use of this system for surveillance and transnational repression has directly led to violations of human rights and threats to personal safety, as documented by legal experts and human rights organizations. The event describes realized harm through surveillance, intimidation, and potential unlawful detention, which are direct harms caused by the system's use. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Civil groups: China will keep acting as an amplifier of Taiwan's post-election internal conflicts | Politics | Central News Agency (CNA)

2024-01-19
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and fake accounts by Chinese actors to spread disinformation and manipulate Taiwanese election-related discourse. This manipulation has already occurred, influencing societal polarization and public perception, which qualifies as harm to communities. The AI system's role in generating and disseminating false information is pivotal to the incident. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by AI-enabled disinformation campaigns.
Research institute says China remains a supplier of disinformation targeting Taiwan — analysis: tactics are more novel and now incorporate AI-generated content

2024-01-19
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate false information and disinformation that influenced the Taiwan election, which constitutes harm to communities by disrupting democratic processes and spreading falsehoods. The involvement of AI in producing and amplifying these false narratives is direct and has already led to harm. Hence, this is an AI Incident rather than a hazard or complementary information.
Global 2024 election year: politics in the AI era faces a stress test

2024-01-22
TechNews 科技新報 | Trends, insider stories, and news for markets and industry insiders
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate and disseminate false information that has already impacted elections and political discourse, directly causing harm to communities and violating rights. This fits the definition of an AI Incident because the AI's use has directly led to significant harm. The article also includes information about governance and societal responses, but the primary focus is on the realized harm from AI-driven disinformation campaigns, not just complementary information or potential future harm.