Shield AI and NCSIST Collaborate to Develop AI-Enabled Autonomous Military Drones in Taiwan


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Shield AI and Taiwan's NCSIST have partnered to integrate Shield AI's Hivemind AI platform into Taiwanese unmanned systems, enabling autonomous mission execution and swarm coordination for military drones. The deployment of these AI-driven autonomous weapon systems raises concerns about potential future harm due to their combat capabilities and operational autonomy.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI system (Hivemind) that enables autonomous decision-making and action in unmanned systems for military purposes. While no actual harm or incident is reported, the deployment of autonomous combat drones with swarm capabilities poses a credible risk of future harm, including injury, disruption, or violations of rights, given the nature of autonomous weapons. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the use of AI-enabled autonomous military drones.[AI generated]
AI principles
Safety, Accountability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death), Physical (injury)

Severity
AI hazard

AI system task:
Goal-driven organisation


Articles about this incident or hazard


NCSIST Partners with US Firm to Give Domestically Built Drones Autonomous Cooperative Combat Capabilities | 聯合新聞網

2026-02-11
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Hivemind) that enables autonomous decision-making and action in unmanned systems for military purposes. While no actual harm or incident is reported, the deployment of autonomous combat drones with swarm capabilities poses a credible risk of future harm, including injury, disruption, or violations of rights, given the nature of autonomous weapons. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the use of AI-enabled autonomous military drones.

NCSIST Partners with US Firm Shield AI to Build Swarm-Style Cooperative Drone Combat Capability | 聯合新聞網

2026-02-12
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the development and deployment of AI systems (Shield AI's Hivemind platform) for autonomous drone swarms capable of coordinated operations and autonomous decision-making. While the article does not report any realized harm or incidents caused by these AI systems, the nature of the technology—autonomous military drone swarms with advanced AI control—presents a credible risk of future harm, including potential injury, disruption, or violations of rights if used in conflict or other sensitive contexts. Therefore, this event qualifies as an AI Hazard due to the plausible future risks associated with the deployment of such AI-enabled autonomous weapon systems.

NCSIST Partners with Shield AI to Accelerate Development of AI Flight-Control Unmanned Systems | 聯合新聞網

2026-02-12
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the development and deployment of an AI system for autonomous flight control of unmanned systems with military applications. Such AI-enabled autonomous weapon systems have a high potential for misuse and could plausibly lead to significant harms, including harm to communities or violation of human rights, if used in conflict or other scenarios. Although no specific harm has yet occurred or been reported, the article clearly indicates the AI system's intended use in autonomous military operations, which plausibly could lead to AI incidents. Therefore, this event qualifies as an AI Hazard due to the credible risk of future harm from the AI system's deployment in autonomous military unmanned systems.

NCSIST Partners with Shield AI to Accelerate Development of AI Flight-Control Unmanned Systems | Politics | 中央社 CNA

2026-02-12
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Shield AI's Hivemind) for autonomous flight control and mission execution in unmanned systems, which fits the definition of an AI system. The collaboration aims to accelerate development and deployment of these systems, which could plausibly lead to harms such as injury, disruption, or violations of rights if misused or malfunctioning, especially given the military context. However, no actual harm or incident is reported, only the development and intended deployment. Thus, it is not an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems with potential for harm. Hence, the classification is AI Hazard.

Shield AI Teams Up with Singapore's ST Engineering to Strengthen Drone Swarm Technology

2026-02-09
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves the development and deployment of an AI system (Hivemind) for autonomous drone swarm operations in military contexts. The AI system's use in combat and military exercises implies a credible risk of causing injury, death, or other harms associated with warfare. Since the article does not report a specific incident of harm but focuses on the collaboration and technological advancement that could plausibly lead to harm, it fits the definition of an AI Hazard rather than an AI Incident. The AI system's role is pivotal in enabling autonomous operations in contested environments, which inherently carry risks of harm.

NCSIST Partners with US Firm Shield AI to Build an "AI Cooperative Combat Capability" for Taiwan's Drones - 自由軍武頻道

2026-02-12
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Shield AI's Hivemind) integrated into autonomous drones with combat capabilities, including swarm coordination and autonomous mission execution. The AI system's role is central to the drones' autonomous operation, including in contested environments with GPS and communication interference. While no actual harm or incident is described, the deployment of AI-enabled autonomous weapon systems inherently carries plausible risks of causing injury, disruption, or violations of rights. The event is about the development and deployment of such systems, which fits the definition of an AI Hazard (plausible future harm). There is no indication of realized harm or incident, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the AI system's military application and associated risks.

Over 300 V-BAT Drones in the Special Budget? Clarity Expected Once the Special Act Reaches Committee Review - 自由軍武頻道

2026-02-12
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The V-BAT drone is an AI-enabled autonomous or semi-autonomous system used for reconnaissance and strike missions, which qualifies as an AI system. The article discusses procurement plans and operational use but does not describe any actual harm or malfunction resulting from the AI system. While the military use of such drones inherently carries risks of harm, the article does not report any realized harm or incident. Therefore, this event represents a plausible future risk of harm due to the deployment of AI-enabled autonomous weapons, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems with potential for harm.

NCSIST Signs Contract with US Firm Shield AI to Accelerate Development of AI Cooperative Combat Capabilities for Taiwan's Drones - 民視新聞網

2026-02-12
民視新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Shield AI's Hivemind) designed for autonomous military drone operations, including swarm coordination and operation in contested environments. The event concerns the development and deployment of this AI system in Taiwan's defense sector. Although no actual harm or incident is reported, the nature of the AI system and its intended military use plausibly could lead to harms such as injury or disruption in future conflict scenarios. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their potential impacts.

What Makes the Drones' "AI Cooperative Combat Capability" Useful? Expert Analysis: This Feature Is Truly Powerful - 自由軍武頻道

2026-02-12
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Shield AI Hivemind platform integrated into autonomous drones) used in military drone swarm operations. However, the article does not describe any realized harm, injury, rights violations, or disruptions caused by the AI system. It focuses on the development, capabilities, and potential uses of the AI system, which could plausibly lead to future harms given the military context, but no specific incident or harm is reported. Therefore, this qualifies as an AI Hazard due to the plausible future risk of harm from autonomous military drones with AI-enabled swarm combat capabilities, but not an AI Incident. It is not Complementary Information because it is not updating or responding to a previously reported incident or hazard, nor is it unrelated as it clearly involves AI systems and their implications.

US AI Heavyweight Shield AI Partners with NCSIST to Accelerate Development of Domestically Made AI Drones

2026-02-12
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems (autonomous drones with AI pilots) that could plausibly lead to significant harms if misused or malfunctioning, especially given their military application. However, the article does not describe any actual harm, malfunction, or misuse that has occurred. Therefore, this event fits the definition of an AI Hazard, as the development and deployment of such AI-enabled autonomous military drones could plausibly lead to AI incidents in the future, but no incident has yet materialized.

KMT-TPP Split over Arms Procurement Disagreements? US Warns of "Decapitation Threat" 2/Top Procurement Priority: Drone AI Cooperative Combat, Shield AI Partners with NCSIST

2026-02-12
mnews.tw
Why's our monitor labelling this an incident or hazard?
The event involves the development and deployment of an AI system (Shield AI's Hivemind autonomous drone swarm platform) for military use, which is explicitly described. While no actual harm or incident is reported, the nature of the AI system—autonomous military drones capable of coordinated swarm operations—poses a credible risk of future harm, including potential injury, escalation of conflict, or other military-related harms. According to the definitions, the mere development and deployment of AI-enabled autonomous weapons systems with high potential for misuse or harm constitute an AI Hazard. Since no realized harm is described, it is not an AI Incident. The article is not primarily about responses, governance, or updates to past incidents, so it is not Complementary Information. Hence, the correct classification is AI Hazard.

US Defense Tech Company Shield AI Signs Contract with NCSIST to Help Taiwan Build a Drone Force - 民視新聞網

2026-02-12
民視新聞網
Why's our monitor labelling this an incident or hazard?
The event involves the development and deployment of an AI system (Shield AI's Hivemind autonomous drone software) with clear military applications. Although no harm or incident has occurred yet, the autonomous military drone technology inherently carries plausible risks of causing injury, disruption, or other harms if used in conflict or misused. The article focuses on the contract and development efforts rather than any realized harm or incident. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future. It is not Complementary Information because it is not an update or response to a past incident, nor is it unrelated as it clearly involves AI systems with potential for harm.

Shield AI Signs Contract with Taiwan's National Chung-Shan Institute of Science and Technology to Accelerate and Indigenize Taiwan-Developed AI Pilots

2026-02-11
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article details the development and deployment of AI systems for military unmanned aerial vehicles with autonomous capabilities. While no specific harm or incident is reported, the nature of the AI system—autonomous military drones capable of operating under GPS and communication jamming—presents a plausible risk of future harm, such as injury, disruption, or violation of rights, if these systems are used in conflict. Therefore, this event constitutes an AI Hazard due to the credible potential for harm arising from the development and deployment of these AI-enabled autonomous weapons systems.

Taiwan to use Shield AI's Hivemind to fast-track domestic drone tech | Taiwan News

2026-02-12
Taiwan News
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (Hivemind) for autonomous drone control, which is a clear AI system by definition. The use is in a military context, which inherently carries risks of harm (e.g., injury, disruption, or other harms). Since the article does not report any realized harm or malfunction but discusses the future deployment and capabilities, it fits the definition of an AI Hazard: an event where AI system use could plausibly lead to harm. There is no indication of an actual AI Incident or complementary information about responses or updates to prior incidents. Therefore, the classification is AI Hazard.

Military looking to drones

2026-02-12
Taipei Times
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system for military autonomous drones, which are intended to operate in contested environments and potentially in conflict scenarios. While no actual harm or incident has occurred yet, the deployment of autonomous AI-powered military drones capable of operating under jamming conditions plausibly could lead to harms such as injury, disruption, or violations of rights in future conflict situations. Therefore, this event represents a credible potential risk associated with AI use in autonomous weapons systems, qualifying it as an AI Hazard rather than an Incident or Complementary Information.

Shield AI, Taiwan's NCSIST team up on AI drones - Focus Taiwan

2026-02-12
Focus Taiwan (CNA English News)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Hivemind) for autonomous drone operation in contested environments, indicating AI system involvement. The event concerns the development and intended deployment of AI-enabled autonomous military drones, which could plausibly lead to harms such as injury, disruption, or violations of rights if used in conflict. No actual harm or malfunction is reported, so it is not an AI Incident. The article is not primarily about responses or updates to past incidents, so it is not Complementary Information. Hence, the event is best classified as an AI Hazard due to the credible potential for future harm from the AI system's use in military drones.

Taiwan teams up with Shield AI to develop intelligent unmanned systems | Technology

2026-02-12
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Shield AI's Hivemind platform) integrated into unmanned military drones, which are autonomous systems capable of complex decision-making. The event involves the development and intended use of AI in military applications, which could plausibly lead to harms such as injury, death, or escalation of conflict. No actual harm or incident is reported yet, so it is not an AI Incident. The event is not merely complementary information or unrelated, as it directly concerns the development and deployment of AI systems with potential for significant harm. Hence, it fits the definition of an AI Hazard.

Shield AI Signs Contract with Taiwan's National Chung-Shan Institute of Science and Technology to Accelerate and Indigenize Taiwan-Developed AI Pilots

2026-02-11
IT News Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Hivemind) designed for autonomous military drones capable of operating under contested conditions. The event concerns the development and deployment of these AI systems, which could plausibly lead to harms such as injury or death in military conflict, disruption of critical infrastructure, or escalation of conflict. Since no actual harm or incident is reported, but the potential for harm is credible and significant, this event fits the definition of an AI Hazard. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems with military applications and associated risks.

Shield AI Seeks $1 Billion to Lead Global Defense Tech Surge | PYMNTS.com

2026-02-13
PYMNTS.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and funding of AI systems for autonomous defense applications, which are AI systems by definition. Although no actual harm or incidents are reported, the nature of these AI systems—autonomous drones and vehicles capable of independent operation in combat scenarios—implies a credible risk of future harm, including injury, disruption, or violations of rights. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the deployment and use of these autonomous defense AI systems.

Shield AI Selected as Mission Autonomy Provider for the U.S. Air Force Collaborative Combat Aircraft Program

2026-02-13
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an autonomy software capable of making complex decisions in combat scenarios, fulfilling the definition of an AI system. The article details its development and integration into military aircraft but does not mention any incidents or harms caused by its use. Given the system's intended use in combat, there is a credible risk that its deployment could lead to injury, violation of rights, or other significant harms, fitting the definition of an AI Hazard. Since no actual harm is reported, it cannot be classified as an AI Incident. The article is not merely complementary information because it focuses on the selection and integration of the AI system with implications for future risk, rather than updates or responses to past incidents. Therefore, the appropriate classification is AI Hazard.

Taiwan Expands AI Autonomy for Unmanned Platforms With Shield AI

2026-02-13
The Defense Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled unmanned systems and AI pilots operating multiple platforms, which qualifies as AI system involvement. The development and deployment of autonomous military systems with AI capabilities inherently carry plausible risks of harm, including injury, disruption, or other significant harms if misused or malfunctioning. However, the article does not report any actual harm or incident resulting from these AI systems yet, only their development and deployment plans. Therefore, this event represents a plausible future risk associated with AI in autonomous weapons systems, classifying it as an AI Hazard rather than an AI Incident or Complementary Information.