Taiwan Deploys AI-Enabled Military Drones for Combat and Cognitive Warfare


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Taiwan's National Chung-Shan Institute of Science and Technology (NCSIST) and the Ministry of National Defense are mass-producing AI-enabled military drones for reconnaissance, attack, and cognitive warfare. These drones, capable of autonomous operations and real-time data processing, are being deployed for both direct combat and psychological operations, raising significant risks of AI-driven harm in conflict scenarios.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems in the form of military drones with autonomous or semi-autonomous capabilities, which are being developed and tested. While these systems have potential for harm due to their military application, the article only discusses planned delivery, testing, and production without any actual harm or malfunction reported. Therefore, it represents a plausible future risk scenario but no realized incident. Given the credible potential for harm inherent in military AI drone systems, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Accountability; Safety; Respect of human rights; Democracy & human autonomy; Transparency & explainability; Human wellbeing

Industries
Government, security, and defence; Robots, sensors, and IT hardware

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Psychological; Public interest; Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection; Goal-driven organisation; Content generation


Articles about this incident or hazard


"Military-use, commercial-spec" drone prototypes to be delivered by end of July; mass production to be funded under next year's military investment budget (image captured from the Ministry of National Defense YouTube channel) - Liberty Times video channel

2023-02-07
Liberty Times

Instant commentary: civilian firms all said they could build the drone surveillance system, but the results were unexpected | United Daily News

2023-02-08
UDN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems insofar as drone surveillance systems typically rely on AI for detection and tracking of targets. The failure of civilian-developed drone surveillance systems to detect threats constitutes a malfunction or inadequacy in AI system performance, which has direct implications for military security and operational effectiveness. The article describes realized shortcomings and testing failures, indicating harm or risk to critical infrastructure and national security. However, no actual harm or incident (such as a security breach or attack) is reported as having occurred yet. The main issue is the failure of AI-enabled systems to perform as required, and the military's decision to proceed with production despite these issues. This situation represents a plausible risk of harm due to AI system inadequacy and potential security vulnerabilities, thus fitting the definition of an AI Hazard rather than an AI Incident. The concerns about software supply chain risks (including software from mainland China) further support the classification as a hazard due to potential future harm. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated to AI systems.

Strengthening asymmetric combat power: 3,000 drones to enter mass production from next year

2023-02-07
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems due to their autonomous or semi-autonomous military capabilities. The article details their development and planned mass production, which could plausibly lead to AI incidents involving harm to persons or communities in the future, especially given their military and attack capabilities. No current harm or incident is reported, so it is not an AI Incident. The focus is on potential future risks and security measures, fitting the definition of an AI Hazard rather than Complementary Information or Unrelated news.

NCSIST forms "national drone team"; five military models to enter mass production

2023-02-07
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the development and upcoming production of multiple military drones by Taiwan's national drone team, which are likely AI-enabled given their military application and autonomous capabilities. The mention of the US shooting down a Chinese spy balloon and Taiwan's response indicates a context of potential military engagement involving these drones. However, no actual harm, malfunction, or misuse involving these AI systems is reported. The event is about the development and potential use of AI-enabled military drones, which could plausibly lead to harm in future conflict situations. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Taiwan's "national drone team" takes shape! NCSIST draws lessons from the Russia-Ukraine war to build new combat capabilities

2023-02-07
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI-enabled unmanned aerial vehicles (drones) for military purposes, which qualifies as AI systems. Although no harm or incident is reported, the article highlights the establishment of a national drone team with advanced capabilities, implying potential future use in military operations. Given the nature of military drones and their AI components, there is a credible risk that their deployment could lead to harms such as injury, disruption, or violations of rights in future conflicts. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information, as no actual harm has yet occurred but plausible future harm exists.

National Security Council asks military to invest NT$700 million in civilian drone makers; Po Hung-hui vows to step down if it cannot be done

2023-02-08
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the development and investment in military and dual-use drones, which by their nature involve AI systems for autonomous functions. Although no direct harm or incident is reported, the article discusses plans and actions that could plausibly lead to AI-related harms in the future, such as misuse of autonomous weapons or escalation of military tensions. The lack of current harm or incident excludes classification as an AI Incident. The focus is on the potential risks and internal governance issues, not on responses or updates to past incidents, so it is not Complementary Information. It is clearly related to AI systems and plausible future harm, so it is not Unrelated. Hence, AI Hazard is the appropriate classification.

Five military-use, commercial-spec drone prototypes to be delivered for acceptance by end of July - Politics - Liberty Times Net

2023-02-07
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, as military drones with autonomous or semi-autonomous capabilities typically incorporate AI for navigation, surveillance, and reconnaissance tasks. The event concerns the development and planned deployment of these AI-enabled systems, but no harm has yet occurred: there is no report of malfunction, misuse, or any incident causing injury, rights violations, or other harms, and no imminent risk beyond the systems' intended military use. That intended use, however, carries a credible potential for future harm. The event is therefore best classified as an AI Hazard, reflecting the plausible future risk associated with the deployment of military AI drones, without any realized harm or incident at this stage.

Taiwan's military-use, commercial-spec drones due for delivery in July; "red supply chain" strictly excluded - The Epoch Times

2023-02-07
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The drones described are military-grade and likely incorporate AI systems for autonomous or semi-autonomous operation (e.g., surveillance, reconnaissance, attack drones). The article focuses on the development, selection, and planned delivery of these drones, emphasizing cybersecurity and supply chain security to prevent infiltration. No actual harm or incident is reported; rather, the article discusses the potential future deployment of AI-enabled military drones, which could plausibly lead to harms such as injury, disruption, or other military-related harms. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future but no harm has yet occurred.

Facing the Chinese Communist threat, Taiwan to accelerate development of military drones - The Epoch Times

2023-02-08
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems (military drones) in a context of geopolitical tension and potential conflict. While no direct harm from these AI systems is reported, the accelerated development and deployment of military drones with AI capabilities in a contested region plausibly could lead to incidents causing injury, disruption, or other harms. The article does not report any actual incident or harm caused by these drones yet, so it does not meet the criteria for an AI Incident. It is not merely complementary information because the main focus is on the potential threat and development of AI military systems, not on responses or updates to past incidents. Hence, the classification as AI Hazard is appropriate.

Government throws full support behind the national drone team; Hsieh Chin-ho: Taiwan's defense industry can break new ground

2023-02-08
HiNet
Why's our monitor labelling this an incident or hazard?
The article centers on the development and promotion of AI-enabled drone technology in Taiwan, emphasizing government support and industry growth. While drones equipped with AI are involved, there is no mention of any harm, malfunction, or misuse that has occurred or is imminent. The content is forward-looking and promotional, without describing any AI Incident or AI Hazard. Therefore, it fits best as Complementary Information, providing context and updates on AI system development and governance responses rather than reporting an incident or hazard.

Building asymmetric combat power: Taiwan's Ministry of National Defense accelerates the national drone team

2023-02-08
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned deployment of AI-enabled military drones, which are AI systems by definition due to their autonomous or semi-autonomous capabilities. While these systems have a clear potential for harm in military conflict scenarios, the article does not describe any actual harm or incidents caused by these drones. The focus is on accelerating development, ensuring security, and preventing supply chain risks. Therefore, this event represents a plausible future risk scenario related to AI military systems but does not describe an incident or realized harm. Given the credible potential for future harm from these AI-enabled military drones, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Asia UAV AI R&D Center; Weng Chang-liang: to take shape by 2025 (ROC year 114) | Business | Central News Agency (CNA)

2023-02-07
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article discusses the establishment and future development of an AI drone innovation center, emphasizing strategic industry growth and collaboration. While it involves AI systems (drones with AI capabilities), there is no mention of any harm, malfunction, or misuse that has occurred or is imminent. The content is about potential and planned developments, making it a description of the AI ecosystem and strategic initiatives rather than an incident or hazard. Therefore, it fits best as Complementary Information, providing context and updates on AI-related industry development without reporting any specific AI Incident or AI Hazard.

Caijin Culture (財金文化) chairman Hsieh Chin-ho leads top investors on a visit to the Asia UAV AI Innovation Application R&D Center; Weng Chang-liang attends: facilities expected to take shape by end of 2025 (ROC year 114) | CNA news platform

2023-02-07
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article centers on the development and promotion of AI-enabled drone technology and the strategic planning around it. While AI systems are involved (drones with AI capabilities), there is no mention of any harm, malfunction, or misuse that has occurred or is imminent. The content is primarily about industry development, government support, and future expectations, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments without reporting an incident or hazard.

Drawing lessons from the Russia-Ukraine war, Taiwan accelerates development of military drones

2023-02-08
TechNews (科技新報)
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the development and deployment of military drones with reconnaissance and attack capabilities, which almost certainly involve AI systems for autonomous or semi-autonomous operation. While no actual harm or incident is reported, the potential for these AI-enabled drones to cause injury, property damage, or other harms in conflict scenarios is credible and foreseeable. Hence, the event is best classified as an AI Hazard, reflecting the plausible future risk posed by these AI systems in military applications. There is no indication of realized harm or incident, nor is the article primarily about responses or updates to past incidents, so AI Hazard is the appropriate classification.

NCSIST develops drones usable for both integrated reconnaissance-strike and cognitive warfare

2023-02-09
Rti (Radio Taiwan International)
Why's our monitor labelling this an incident or hazard?
The drones described involve AI systems for autonomous or semi-autonomous operation, real-time data processing, and cognitive warfare applications. Their use in military combat, including swarm attacks and psychological operations, directly relates to potential harm to human life and communities (harm categories (a) and (d)). The deployment and use of such AI-enabled military drones constitute an AI Incident because the AI systems are being used in ways that directly lead to, or are intended to cause, harm in armed conflict scenarios. The article reports on actual deployment and strategic use, not just potential risks, so it qualifies as an AI Incident rather than a hazard or complementary information.

Military-use, commercial-spec drones due for delivery in July; "red supply chain" strictly excluded | The Epoch Times - Taiwan

2023-02-07
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The event involves AI systems because military drones typically incorporate AI for autonomous navigation, surveillance, and operational decision-making. The article explicitly mentions software development and cybersecurity testing, implying AI system involvement. The event concerns the development and planned deployment of military drones, which are AI-enabled systems with potential for significant harm if misused or compromised. However, the article does not report any realized harm or incident caused by these AI systems; rather, it discusses measures to prevent risks and ensure security. Therefore, this event represents a plausible future risk scenario related to AI-enabled military drones, qualifying as an AI Hazard. It is not an AI Incident because no harm has occurred yet, nor is it Complementary Information or Unrelated since it directly concerns AI system development with potential for harm.

NCSIST forms "national drone team"; five military models to enter mass production - TTV News

2023-02-07
TTV News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and upcoming production of multiple military drones by Taiwan's national drone team. Military drones typically incorporate AI systems for autonomous navigation, targeting, and reconnaissance. While no direct harm or incident is reported, the deployment of such AI-enabled military systems inherently carries plausible risks of harm, including injury, disruption, or escalation in conflict. The article also references military responses to surveillance balloons but does not describe any AI-related incident or harm occurring. Thus, the event is best classified as an AI Hazard, reflecting the credible potential for future harm from these AI systems in military contexts.