Taiwan Plans Massive Procurement of Nearly 50,000 AI-Enabled Military Drones

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Taiwan's Ministry of National Defense announced plans to procure 48,750 military drones between 2026 and 2027, involving multiple domestic suppliers. These AI-enabled drones, including attack and reconnaissance models, are intended to strengthen asymmetric warfare capabilities, raising concerns about potential future harm from large-scale military AI deployment.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI integration (Auterion AI) in military drones intended for precision strikes, which qualifies as an AI system. The event concerns the announcement and preparation for a large procurement of these drones, not an incident causing harm. However, the scale and military nature of the drones imply a plausible risk of future harm (injury, disruption, or rights violations) from their use. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
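The rationale above applies a simple triage rule: does the event involve an AI system, has harm already been realized, and if not, is future harm plausible? A minimal sketch of that rule in Python (the function, parameter names, and label strings are illustrative only, not the monitor's actual implementation):

```python
def classify(involves_ai_system: bool, harm_realized: bool, harm_plausible: bool) -> str:
    """Illustrative triage rule for AIM-style labels (not the monitor's real code)."""
    if not involves_ai_system:
        return "Unrelated"
    if harm_realized:
        return "AI Incident"            # direct or indirect harm has already occurred
    if harm_plausible:
        return "AI Hazard"              # credible risk of future harm
    return "Complementary Information"  # context only: no harm, no credible risk

# The drone-procurement story: an AI system is present, no harm has been
# realized, but future harm is plausible -> labelled an AI Hazard.
label = classify(involves_ai_system=True, harm_realized=False, harm_plausible=True)
```

Under this sketch, the check order matters: realized harm takes precedence over plausible harm, which is why a large procurement with no reported harm lands on "AI Hazard" rather than "AI Incident".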
AI principles
Safety, Accountability, Robustness & digital security, Transparency & explainability, Respect of human rights, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public, Government

Harm types
Physical (death), Physical (injury), Human or fundamental rights, Public interest

Severity
AI hazard

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

Stock in Focus » Thunder Tiger: 50,000-unit military commercial-grade drone tender lifts both price and volume - Liberty Times Finance

2025-07-24
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI integration (Auterion AI) in military drones intended for precision strikes, which qualifies as an AI system. The event concerns the announcement and preparation for a large procurement of these drones, not an incident causing harm. However, the scale and military nature of the drones imply a plausible risk of future harm (injury, disruption, or rights violations) from their use. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Scholars: Taiwan's counter-drone capacity is insufficient; tactics and fixed-point protection networks should be improved | United Daily News

2025-07-24
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly references the use and enhancement of AI recognition capabilities in anti-drone defense systems, which are AI systems designed to detect and intercept drones. The discussion centers on the current insufficiency of these defenses and the potential for harm if drone threats are not effectively countered, including injury to military personnel and damage to critical infrastructure. Although no specific incident of harm is reported, the article highlights a credible risk of harm from drone attacks and the need to improve AI-enabled defenses to prevent it. Therefore, this event qualifies as an AI Hazard because it could plausibly lead to an AI Incident involving harm if the AI systems are not adequately developed or deployed.

Armaments Bureau to procure a new batch of five military commercial-grade drone models, buying 48,000 units starting next year | United Daily News

2025-07-23
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the planned acquisition of military drones, which are AI-enabled systems due to their autonomous or semi-autonomous capabilities such as navigation, control, and data transmission. Although no harm has yet occurred, the large-scale procurement of such drones for military purposes plausibly leads to potential harms including injury, disruption, or violations of rights if these drones are used in conflict or surveillance. Therefore, this event represents a credible future risk associated with AI systems, qualifying it as an AI Hazard rather than an Incident or Complementary Information.

Knockoff? Mainland China suspected of copying US technology in new VTOL drone | Mainland Politics & Economy | Cross-Strait | Economic Daily News

2025-07-21
udn Money
Why's our monitor labelling this an incident or hazard?
The article involves an AI system integrated into a drone used for disaster relief, which qualifies as an AI system. However, there is no direct or indirect harm reported, nor any plausible future harm explicitly stated. The focus is on the development and deployment of the AI-enabled drone and its capabilities, including autonomous flight and swarm operation, but no incident or hazard is described. Therefore, this is best classified as Complementary Information, providing context on AI system development and deployment without reporting an AI Incident or AI Hazard.

NT$50 billion arms procurement kicks off! Drone units take flight as 攸泰 locks limit-up to grab the spotlight | Market Focus | Securities | Economic Daily News

2025-07-24
udn Money
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of AI-enabled military drones, which are inherently capable of causing harm if misused or malfunctioning, thus constituting a plausible AI Hazard. Since no actual harm or incident has been reported, and the article mainly discusses the procurement and deployment plans, it does not meet the criteria for an AI Incident. It is not merely complementary information because the focus is on the initiation of a significant military AI system procurement with potential future risks. Hence, the classification as an AI Hazard is appropriate.

Ministry of National Defense to launch largest-ever drone tender; Coretronic and Thunder Tiger extend gains | Industry Highlights | Industry | Economic Daily News

2025-07-24
udn Money
Why's our monitor labelling this an incident or hazard?
The event involves AI systems integrated into military drones designed for attack missions, which clearly fits the definition of an AI system. The article does not report any realized harm or incident but highlights the initiation of a large-scale procurement and deployment of AI-enabled military drones. Given the potential for these systems to cause injury, harm to communities, or disruption if used in conflict or malfunctioning, this event plausibly leads to AI incidents in the future. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are explicitly involved and the potential for harm is credible.

Strengthening asymmetric combat power: Armaments Bureau to procure nearly 50,000 drones within two years | Politics | CNA

2025-07-23
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the procurement of various types of unmanned aerial vehicles with autonomous features, which qualify as AI systems. The event concerns the planned acquisition and deployment of these AI systems for military purposes, which inherently carry risks of harm such as escalation of conflict, disruption of regional peace, and potential injury or loss of life. Since the article does not report any realized harm but highlights the scale and capabilities of these AI-enabled drones in a tense geopolitical context, it fits the definition of an AI Hazard. The event plausibly could lead to AI Incidents in the future, but no direct or indirect harm has yet been reported.

Scholars: Taiwan's counter-drone capacity is insufficient; tactics and fixed-point protection networks should be improved | Politics | CNA

2025-07-24
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly references the use and planned enhancement of AI recognition capabilities in anti-drone defense systems, which are intended to detect and intercept hostile drones. The current insufficiency of these AI-enabled systems and the ongoing threat from drone incursions constitute a credible risk that could plausibly lead to harm, such as damage or casualties in military or critical infrastructure contexts. Since no actual harm has been reported yet, but the risk is credible and foreseeable, this event fits the definition of an AI Hazard rather than an AI Incident. The focus is on potential future harm due to insufficient AI-enabled defense capabilities against drones.

Ministry of National Defense to launch largest-ever drone tender; Coretronic and Thunder Tiger extend gains | Securities | CNA

2025-07-24
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered military drones designed for precise attack missions, indicating the presence of AI systems in autonomous or semi-autonomous weaponized drones. While no actual harm or incident is reported, the scale and nature of the procurement (nearly 50,000 drones with AI strike capabilities) present a credible risk of future harm, including injury or disruption. The event concerns the development and intended use of AI systems with lethal potential, fitting the definition of an AI Hazard. There is no indication of realized harm or incident, so it is not classified as an AI Incident. It is more than general AI news or complementary information because it highlights a significant military AI deployment with plausible future risks.

Armaments Bureau solicits five drone models, revealing for the first time an armed forces requirement of 48,750 units over the next two years - Up Media

2025-07-23
upmedia.mg
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the procurement of multiple types of unmanned drones with autonomous or semi-autonomous capabilities, including attack drones capable of carrying explosives. These drones qualify as AI systems due to their autonomous navigation, control, and mission execution functions. Although no harm has yet occurred, the scale and nature of the procurement (armed drones) plausibly could lead to AI incidents involving injury, harm to communities, or other significant harms. The event is about the solicitation and planned acquisition, not about an incident or realized harm, so it is best classified as an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the procurement and potential future use of AI-enabled military drones with inherent risks.

Second round of military commercial-grade drones: Ministry of National Defense plans to procure nearly 50,000 drones over the next two years

2025-07-23
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves the planned large-scale acquisition of military drones, which almost certainly incorporate AI systems for navigation and control. However, the article only describes the procurement plan and specifications, without any mention of harm, malfunction, or misuse. Since the drones are intended for military use and the scale is significant, there is a plausible risk of future harm (e.g., misuse, accidents, escalation), qualifying this as an AI Hazard rather than an Incident. There is no indication of realized harm or an ongoing incident, so it cannot be classified as an AI Incident. It is more than general AI news or a product launch because the scale and military context imply potential future risks.

Late-session surge! Thunder Tiger, AIDC, and 銘旺科 stand ready! Ministry of National Defense orders usher in a golden age for drones | yam News

2025-07-23
yam News
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses military drones, which almost certainly incorporate AI systems for autonomous operation and image recognition. Although no harm or incident is reported, the large-scale procurement and deployment of these AI-enabled military drones plausibly could lead to harms such as accidents, misuse, or escalation of military conflict. Since no actual harm has occurred yet, but the potential is credible and significant, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are reasonably inferred to be involved in these military drones.

Strengthening asymmetric combat power: Armaments Bureau to procure nearly 50,000 drones within two years - Rti Radio Taiwan International

2025-07-23
Rti Radio Taiwan International
Why's our monitor labelling this an incident or hazard?
The procurement involves unmanned aerial vehicles (drones) that are AI-enabled, or at least highly likely to incorporate AI systems for autonomous or semi-autonomous operation such as navigation, control, and surveillance. The event concerns the development and acquisition of military AI-enabled systems with potential for use in conflict scenarios, which could plausibly lead to harm including injury, disruption, or violations of rights. Although no harm has yet occurred, the scale and nature of the procurement imply a credible risk of future harm related to AI-enabled military systems. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as it concerns plausible future harm from AI systems rather than realized harm.

Strengthening asymmetric combat power: armed forces to make a major purchase of nearly 50,000 drones over the next two years | Epoch Times Taiwan

2025-07-23
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The event involves the planned acquisition and deployment of AI-enabled military drones, which are AI systems by definition due to their autonomous or semi-autonomous capabilities in surveillance, targeting, and data transmission. The article does not report any realized harm or incident but highlights the scale and intent of military use, which could plausibly lead to injury, disruption, or other harms associated with armed conflict. Hence, it fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to an AI Incident in the future.

Strengthening asymmetric combat power: Armaments Bureau to buy nearly 50,000 drones over the next two years | Epoch Times Taiwan

2025-07-23
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the procurement of thousands of military drones with advanced capabilities that imply AI involvement (e.g., autonomous navigation, control, surveillance). While no harm has yet occurred, the scale and military nature of these AI systems present a credible risk of future harm, such as injury or disruption in conflict scenarios. The event is about planned acquisition and production, not about an incident or harm already realized. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Armed forces to "binge-buy" nearly 50,000 drones starting next year! Ministry of National Defense to launch NT$50 billion procurement | Nextapple News

2025-07-24
Nextapple
Why's our monitor labelling this an incident or hazard?
The event involves the planned acquisition of a large fleet of military drones, which are AI systems or at least AI-enabled systems given their autonomous or semi-autonomous capabilities. While no harm has yet occurred, the scale and nature of these drones imply a credible risk of future harm, such as injury, disruption, or violations of rights in conflict scenarios. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to AI Incidents in the future. There is no indication of current harm or incident, so it is not an AI Incident. It is more than just complementary information because it highlights a significant procurement with potential for harm, not merely an update or governance response.

Drone threat grows by the day; scholars: Taiwan's countermeasure capacity is insufficient and must be reinforced | INDSR | Protection Network | Epoch Times Taiwan | The Epoch Times

2025-07-24
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems indirectly through the mention of AI recognition systems as part of proposed multi-modal drone detection and interception capabilities. The article does not report any realized harm but emphasizes the plausible future harm from drone threats that could be mitigated by improved AI-enabled defenses. Therefore, this qualifies as an AI Hazard because it describes a credible risk of harm stemming from the current insufficiency in AI-based anti-drone systems and the potential for future incidents if these gaps are not addressed.

Actively preparing for war with China! Aiming for 50,000 units, armed forces to make a major purchase of five types of unmanned combat aircraft | Politics | Newtalk News

2025-07-24
Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of AI-enabled unmanned combat drones by Taiwan's military, which are AI systems by definition due to their autonomous or semi-autonomous operational capabilities. While the article does not report any realized harm, the large-scale deployment of such military AI systems plausibly could lead to harms such as injury, disruption of critical infrastructure, or escalation of conflict. The mention of China's export controls further underscores the geopolitical sensitivity and potential for future incidents. Since no actual harm has yet occurred, but the potential is credible and significant, the event is best classified as an AI Hazard rather than an AI Incident.

[Washington Interview] Counter-drone strategy expert: Chinese-made drones threaten US security | US Department of Defense | Drone Technology | Military Drones | NTDTV

2025-07-24
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems insofar as drones with autonomous or remotely operated capabilities rely on AI for navigation, payload delivery, and surveillance. The discussion centers on the plausible future harms these AI-enabled drones could cause, such as military attacks, espionage, privacy violations, and cyberattacks. Since no actual harm or incident is described but credible risks are emphasized, this qualifies as an AI Hazard rather than an AI Incident. The focus is on the potential threat to national security and critical infrastructure from the use and proliferation of AI-powered drones, consistent with the definition of an AI Hazard.

NT$50 billion drone procurement! Thunder Tiger, Coretronic, 邑錡, and 昶瑞機電 climb higher | yam News

2025-07-24
yam News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems reasonably inferred to be integrated into military drones for autonomous or semi-autonomous functions. The procurement of such drones with offensive capabilities (e.g., suicide attack drones) represents a credible potential for future harm, including injury, disruption, or violations of rights, if these systems are deployed or misused. However, since no actual harm or incident has occurred yet, and the article is primarily about the procurement announcement and market impact, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Drone threat grows by the day; scholars: Taiwan's countermeasure capacity is insufficient and must be reinforced | Epoch Times Taiwan

2025-07-24
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI recognition systems as part of the proposed multi-sensor detection network to identify and counter drones. The threat posed by drones with anti-jamming capabilities and the inability to effectively repel them currently indicates a plausible risk of harm to critical infrastructure and military assets. Although no specific harm has been reported in this article, the described drone intrusions and the insufficiency of current defenses imply a credible risk that could plausibly lead to injury, disruption, or harm. Therefore, this event qualifies as an AI Hazard because it involves the plausible future harm from AI-enabled drone threats and the need for AI-based countermeasures to mitigate these risks.

Not just "micro drones": the armed forces' two other newly purchased military commercial-grade drones each have a "template" as well - Liberty Times Military Channel

2025-07-24
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The article focuses on the military's acquisition and deployment of AI-enabled unmanned drones, which are AI systems by definition due to their autonomous or semi-autonomous capabilities. However, there is no mention of any injury, violation of rights, disruption, or other harms caused by these systems. The content is about planned procurement and demonstration, which could imply potential future risks but does not describe any realized harm or incident. Therefore, this event is best classified as an AI Hazard because the deployment of large numbers of military drones with AI capabilities could plausibly lead to harm in the future, but no harm has yet occurred or been reported.

Two-stage precision drone countermeasures: US military uses AI assistance at lower cost | United Daily News

2025-08-11
UDN
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used in military defense against drones, specifically mentioning AI-assisted autonomous interception systems. However, it does not report any realized harm or incident caused by these AI systems. Instead, it discusses ongoing development and testing, implying potential future use and benefits. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to incidents in conflict scenarios, but no direct or indirect harm has yet occurred as described.

Armed forces exercises are missing this most critical element: detecting, jamming, and shooting down drones | International Focus | International | Economic Daily News

2025-08-11
udn Money
Why's our monitor labelling this an incident or hazard?
The article focuses on the use and development of AI-enabled counter-drone systems in military exercises, highlighting their detection, jamming, and shoot-down capabilities. While these systems involve AI and have potential implications for military conflict, the article does not describe any realized harm or incidents resulting from their use, so it does not qualify as an AI Incident. Nor does it present a direct warning or credible risk of future harm from these systems themselves beyond their intended military use, so it is not an AI Hazard. The article provides complementary information about AI systems in military defense contexts, their capabilities, and ongoing exercises, which fits the definition of Complementary Information.

Expert Forum » Yang Wei-li / Taiwan's major additional purchase of 50,000 drones | Expert Forum | Top News | NOWnews

2025-08-10
NOWnews
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the procurement and deployment of military drones, which are AI systems capable of autonomous or semi-autonomous reconnaissance and potentially attack functions. While no actual harm or incident is reported, the article clearly outlines the potential for these systems to be used in conflict scenarios that could cause significant harm. The discussion of large-scale drone deployment, including the possibility of drones directly attacking targets, indicates a credible risk of future AI-related harm. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to AI Incidents involving injury, disruption, or harm to communities in a military conflict context.

Drone operation may be added as a required skill for new recruits; scholars recommend a tiered training model

2025-08-11
Public Television Service (PTS)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of military drones with autonomous or semi-autonomous capabilities. The focus is on the planned acquisition and training for their use, not on any realized harm or incident. The potential for these AI-enabled drones to be used in military conflict, including surveillance and firepower roles, implies a plausible risk of harm to persons or communities. Since no actual harm has occurred yet, but the risk is credible and foreseeable, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

From privates to generals, everyone must learn: Matsu Defense Command Lieutenant General Liu Shen-mo leads drone-operation training - Liberty Times Military Channel

2025-08-11
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The event involves the use and training of unmanned drones, which likely incorporate AI systems for operation and control. However, the article does not report any actual or potential harm caused by these AI systems. There is no indication of injury, rights violations, infrastructure disruption, or other harms. The content is primarily about military training and capability enhancement, which constitutes complementary information about AI system deployment and use rather than an incident or hazard.

[Video] Military Police to purchase three M-ACE counter-drone systems to strengthen garrison defense around the capital - Liberty Times Military Channel

2025-08-08
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The event involves the planned purchase and deployment of an AI-enabled counter-drone system (M-ACE) designed to counter drone threats, which implies the presence of AI systems for detection and response. However, no actual harm or incident has occurred yet; the article focuses on the intended use and procurement process. This fits the definition of an AI Hazard, as the system's use could plausibly lead to harm in the future (e.g., kinetic countermeasures causing damage or escalation), but no incident has materialized. Therefore, the classification is AI Hazard.