Anduril's AI-Powered Drones and Weapons Fail in Tests and Combat

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Anduril Industries' AI-driven drones and autonomous weapons have repeatedly malfunctioned during U.S. military tests and combat deployments, including in Ukraine. Failures include drone crashes, vulnerability to electronic warfare, loss of control of unmanned boats, and a fire caused by an anti-drone system, resulting in operational disruptions and property damage.[AI generated]

Why's our monitor labelling this an incident or hazard?

The drones described are AI systems as they perform autonomous flight, surveillance, and strike tasks. The crashes are malfunctions of these AI systems, directly leading to loss of property and operational disruption, which fits the definition of harm. The article reports actual incidents of drone failures, not just potential risks, so this is an AI Incident rather than a hazard. The involvement of AI in the drones' autonomous functions and the resulting crashes causing harm to military assets and potentially to broader military effectiveness justify classification as an AI Incident.[AI generated]
AI principles
Robustness & digital security; Safety

Industries
Government, security, and defence; Robots, sensors, and IT hardware

Affected stakeholders
Government

Harm types
Economic/Property

Severity
AI incident

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard

Exclusive-US defense firm Anduril faces setbacks from drone crashes

2025-11-27
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The drones are AI systems as they perform autonomous or semi-autonomous surveillance and strike functions. The crashes are malfunctions during use, indicating a failure of the AI systems. While no direct harm has occurred, the crashes demonstrate a credible risk of harm should similar failures occur in combat or operational environments. The article focuses on the failures and their implications for battlefield readiness, not on realized harm. Therefore, this event fits the definition of an AI Hazard, as the malfunction could plausibly lead to injury, property damage, or other harms in the future.

Exclusive-US Defense Firm Anduril Faces Setbacks From Drone Crashes

2025-11-27
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems as they perform autonomous flight, surveillance, and strike tasks. The crashes are malfunctions of these AI systems, directly leading to loss of property and operational disruption, which fits the definition of harm. The article reports actual incidents of drone failures, not just potential risks, so this is an AI Incident rather than a hazard. The involvement of AI in the drones' autonomous functions and the resulting crashes causing harm to military assets and potentially to broader military effectiveness justify classification as an AI Incident.

Anduril's Autonomous Systems Plagued by Test Failures

2025-11-28
Chosun.com
Why's our monitor labelling this an incident or hazard?
The autonomous systems described clearly involve AI, as they perform complex autonomous operations such as navigation, target engagement, and threat interception. The reported malfunctions and failures have directly caused harm or risk to personnel safety, property (fire damage), and operational effectiveness, fulfilling the criteria for an AI Incident. The involvement of AI system malfunction in causing these harms is explicit and direct, not merely potential or speculative.

U.S. defence firm Anduril faces setbacks from drone crashes

2025-11-29
The Hindu
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems due to their autonomous flight and operational capabilities. The crashes and malfunctions represent failures of these AI systems during use, directly leading to harm or risk of harm in military operations. The article reports actual incidents of drone crashes and operational failures, not just potential risks, thus meeting the criteria for an AI Incident rather than an AI Hazard or Complementary Information. The harms include potential injury, disruption of military operations, and loss of property (drones).

Exclusive-US defense firm Anduril faces setbacks from drone crashes

2025-11-27
ThePrint
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems with autonomous or semi-autonomous capabilities used in warfare. The article details actual crashes and failures during tests and combat, which constitute malfunctions leading to harm (damage to property, disruption of military operations). The involvement of AI in these incidents is explicit, and the harm is realized rather than potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Defense Firm Anduril Faces Setbacks From Drone Crashes

2025-11-28
NewsMax
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems used for autonomous flight and military operations. The crashes and malfunctions represent failures or malfunctions of these AI systems, directly causing harm in terms of property loss and operational disruption. The article details specific incidents of drone crashes during tests and combat, confirming realized harm rather than just potential risk. Hence, the event meets the criteria for an AI Incident due to the direct link between AI system malfunction and harm.

Anduril's Drone Missteps: A Launchpad for Defense Tech Evolution | Technology

2025-11-27
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The drones mentioned (Altius and Ghost) are AI systems used for autonomous or semi-autonomous military operations, so AI system involvement is clear. The failures during military tests indicate malfunctions or performance issues in AI use. While no actual harm or damage is reported, the potential for harm in military contexts (e.g., mission failure, safety risks) is credible. Thus, this event represents an AI Hazard rather than an Incident. The article focuses on challenges and ongoing improvements rather than realized harm, so it is not Complementary Information or Unrelated.

The High-Flying Ambitions and Setbacks of Anduril's Military Drones | Technology

2025-11-28
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The drones are AI systems used for military surveillance, and their malfunction during testing (plummeting 8,000 feet) shows a failure in the AI system's operation. While the article mentions setbacks and reliability concerns, it does not describe any actual injury, damage, or violation caused by these drones. The deployment in conflict zones and the testing failures suggest a credible risk that these AI systems could lead to harm if malfunctions occur during active operations. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but no harm has yet been reported.

Are Anduril's autonomous weapons systems up to the mark?

2025-11-29
NewsBytes
Why's our monitor labelling this an incident or hazard?
The drones are autonomous weapons systems, which by definition involve AI for navigation and targeting. Their crashes and failure to hit targets represent malfunctions of AI systems in a critical infrastructure context (military operations). The harm here is indirect but significant, as malfunctioning weapons can cause unintended damage or mission failure. Since the drones were pulled from service due to these issues, the harm has materialized. Therefore, this qualifies as an AI Incident due to malfunction leading to harm in a critical infrastructure domain (military).

U.S. Tests Reveal Crashes of Anduril's Altius Drones Amid Rising Scrutiny - EconoTimes

2025-11-28
EconoTimes
Why's our monitor labelling this an incident or hazard?
The drones are AI systems with autonomous capabilities. Their crashes during military tests are malfunctions of these AI systems. The harm includes destruction of military property and undermining of defense capabilities, which fits harm to property and potential harm to critical infrastructure management. The article reports actual crashes, not just potential risks, so this is a realized harm, not merely a hazard. Hence, the event is classified as an AI Incident.

Anduril Faces Repeated Military Drone Crashes During Tests

2025-11-28
Technology Org
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems as they perform autonomous navigation and reconnaissance tasks. The crashes and malfunctions during testing and combat represent AI system malfunctions leading to harm, including disruption of military operations and potential risk to personnel. The article details actual incidents of drone crashes, not just potential risks, thus constituting an AI Incident. The harm is indirect but material, as the failures impact military effectiveness and safety. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

US drone maker Anduril hit by test failures | News.az

2025-11-28
News.az
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems performing autonomous or semi-autonomous surveillance and strike tasks. The crashes during tests and military exercises represent malfunctions of these AI systems. The harm includes damage to expensive military property and potential risks to operational effectiveness and safety. The article reports actual incidents of drone crashes, not just potential risks, so this qualifies as an AI Incident rather than a hazard or complementary information.

WSJ: Anduril's weapons systems have failed during several tests

2025-11-28
Sherwood News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered autonomous drones and command and control software developed by Anduril, which have malfunctioned during tests and deployment. The drone crash caused a large fire, a direct harm to property and safety. The shutdown of drone ships during Navy tests created hazards for other vessels, indicating direct or indirect harm. The vulnerabilities to jamming and battlefield removal also indicate operational failures impacting military effectiveness and safety. These are direct harms caused by the AI systems' malfunction or failure, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Anduril's autonomous weapons stumble in tests and combat, WSJ reports

2025-11-28
News Flash
Why's our monitor labelling this an incident or hazard?
The autonomous weapons systems described are AI systems by definition, as they perform autonomous operations and decision-making. The reported incidents include direct harms: safety violations with potential loss of life during Navy exercises, a fire caused by the counterdrone system damaging property and environment, and combat failures causing operational harm. These constitute injury or harm to persons or groups (potential loss of life), harm to property and environment (fire), and disruption of military operations. Hence, this qualifies as an AI Incident due to direct harms caused by the development and use of AI systems.

Anduril faces setbacks from drone crashes as it sends drones to Ukraine

2025-11-27
The Columbus Dispatch
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems performing autonomous or semi-autonomous functions such as surveillance and strike missions. The crashes and failures during tests and combat deployments indicate malfunctions of these AI systems. The harm includes loss of expensive military equipment and potential negative effects on military effectiveness and conflict outcomes, which can be considered harm to property and communities. The article reports actual incidents of drone crashes and operational failures, not just potential risks, thus meeting the criteria for AI Incidents rather than AI Hazards or Complementary Information.

Anduril showcases connected defense to deter Russia on NATO's eastern flank

2025-11-26
Cybernews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (autonomous drones, AI-enabled targeting, AI-powered data fusion) used in military defense. However, it does not report any realized harm, malfunction, or misuse resulting from these AI systems. Instead, it discusses the deployment and integration of these systems as part of NATO's defense strategy and the geopolitical and ethical debates surrounding their use. This fits the definition of Complementary Information, as it provides context and updates on AI's role in defense without describing a specific AI Incident or AI Hazard.

'AI Defense Firm' Anduril Reveals a String of Technical Defects in Combat and Training | Yonhap News

2025-11-27
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The incidents described involve AI systems developed and used by Anduril, including autonomous vessels and drones controlled by AI software. The malfunctions and failures directly led to safety hazards such as a fire, operational disruptions, and potential risks to human life and military effectiveness. The AI system's malfunction and use are central to these harms, meeting the criteria for an AI Incident under the OECD framework.

Crashes and Malfunctions Alike... US Defense Hopeful 'Anduril' Shows Critical Defects One After Another [Now This News]

2025-11-28
YTN
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems developed by Anduril malfunctioning during military exercises and operations, leading to safety incidents including uncontrolled vessel stops, collisions, a large fire, and operational failures of drones. These malfunctions directly led to harm or significant risk of harm to people, property, and critical military infrastructure. The AI system's involvement is clear and central to the incidents, fulfilling the criteria for AI Incidents as the AI system's malfunction directly caused or contributed to the harms described.

US 'AI Defense Firm' Anduril Suffers Successive Technical Defects in Combat and Training

2025-11-28
Pinpoint News
Why's our monitor labelling this an incident or hazard?
The incidents involve AI systems developed and used by Anduril, including autonomous unmanned vessels and drones controlled by AI software. The malfunctions and vulnerabilities have directly led to safety hazards, property damage (fire), operational disruption (military exercises halted), and potential risk to human life. The AI system's malfunction and use in critical defense contexts have caused realized harms, not just potential risks. Therefore, these events qualify as AI Incidents under the OECD framework.

Repeated Technical Defects at US Defense Tech Firm Anduril... Any Problem for Its K-Defense Partners? | JoongAng Ilbo

2025-11-28
JoongAng Ilbo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based autonomous navigation and control software ('Lattice') used in unmanned combat vessels and drones that malfunctioned, causing operational failures and safety hazards. The malfunctions led to vessels drifting uncontrollably and drones crashing, which are direct harms to property and potential risks to human life. The involvement of AI systems in these malfunctions is clear, and the harms have materialized, not just potential. Hence, this fits the definition of an AI Incident, as the AI system's malfunction has directly led to harm and operational disruption in critical infrastructure (military defense systems).

Repeated Technical Defects at US Defense Tech Firm Anduril... Any Problem for Its K-Defense Partners?

2025-11-28
Head Topics
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous navigation and control software for unmanned military vessels and drones. The malfunctions have directly caused operational failures and safety risks, with official military reports warning of potential loss of life if issues are not addressed. This meets the criteria for an AI Incident because the AI system's malfunction has directly led to harm or credible risk of harm to persons and critical military infrastructure. The article also discusses the implications for partner companies and the need for rigorous safety verification, reinforcing the seriousness of the incident.

[Video] Unmanned Boats 'Unresponsive', Drones 'Crashing'... Repeated Technical Defects at 'AI Defense Firm' Anduril | Yonhap News

2025-12-01
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The incidents described involve AI systems (autonomous unmanned vessels and drones with AI-based software) whose malfunctions have directly caused harm or significant risk: unmanned vessels stopped responding to commands during military exercises, a drone crash caused a large fire, and drones failed in combat due to vulnerabilities. These constitute realized harms or direct safety risks linked to AI system malfunctions in defense applications, fitting the definition of AI Incidents.

US-Made Altius Drones Crash Twice in Testing, Raising Doubts; Taiwan Military's Procurement Proceeds as Planned

2025-11-28
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The Altius-600M drones are AI systems used for autonomous reconnaissance and attack, so their involvement is clear. The reported crashes during testing are malfunctions of these AI systems. Although no injury, damage, or operational harm has been reported, the failures indicate a credible risk that such malfunctions could lead to harm if the drones are deployed in combat. The article does not describe any realized harm but highlights plausible future harm from these malfunctions. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Silicon Valley-Style Defense Industry Stumbles? Anduril's Unmanned Vehicles Malfunction Frequently, Dropped by Ukrainian Forces

2025-12-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as autonomous control software (Anduril's Lattice) for unmanned vehicles and drones. The malfunctions and failures of these AI systems have directly led to operational hazards and potential harm to personnel, as well as the abandonment of equipment in active conflict, indicating realized harm. This fits the definition of an AI Incident because the AI system's malfunction has directly caused or contributed to harm (risk of injury, operational disruption) and breaches of safety obligations in military contexts.

Just After Delivering Drones to Taiwan, Silicon Valley Maker Loses Two New Aircraft in Testing - International - Liberty Times Net

2025-11-28
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems with autonomous capabilities for military operations. The crashes during testing indicate malfunctions in the AI systems' operation. While the article does not report any actual harm, the potential for harm in military contexts (e.g., failed missions, collateral damage) is credible. The event is neither Complementary Information, since it focuses on the failures themselves, nor unrelated. It is not an AI Incident because no realized harm is reported. Hence, it fits the definition of an AI Hazard.

Founder Visited Taiwan in August to Deliver Drones; US Defense Startup Anduril's Drones and Systems Reported to Have Problems | United Daily News

2025-11-28
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled autonomous systems (drones and unmanned vessels) developed by Anduril, whose malfunctions during military tests caused crashes and loss of control. These failures have led to operational disruptions and raised safety concerns, including warnings of extreme risk and possible loss of life if issues are not corrected. The AI system's malfunction is a direct contributing factor to these harms. Hence, the event meets the criteria for an AI Incident due to realized harm and safety risks linked to AI system failures in critical military applications.

Silicon Valley-Style Defense Industry Stumbles? Anduril's Unmanned Vehicles Malfunction Frequently, Dropped by Ukrainian Forces | United Daily News

2025-12-01
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered autonomous systems (Anduril's Lattice software controlling unmanned vehicles and drones) that malfunctioned during military operations and tests, causing operational failures and safety risks. The Ukrainian forces' discontinuation of the drones due to frequent crashes and vulnerabilities further confirms realized harm or at least direct operational harm. The US Navy's warnings about extreme risks and potential personnel injury confirm the severity. Thus, the AI system's malfunction and use have directly led to harms or significant risks, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Delivered to Taiwan Only in August; US Attack Drone Altius Crashes Twice in Test Flights | International Focus | International | Economic Daily News

2025-11-28
Udnemoney (UDN Money)
Why's our monitor labelling this an incident or hazard?
The Altius drones are AI systems used for military reconnaissance and attack, and their failure during testing is a malfunction of these AI systems. The article describes actual test failures where the drones crashed, indicating a direct malfunction. Given the drones' intended use in combat and reconnaissance, such malfunctions could lead to harm to personnel, property, or mission failure, fulfilling the criteria for an AI Incident. The article does not merely speculate about potential harm but reports realized malfunctions, which is sufficient to classify this as an AI Incident rather than a hazard or complementary information.

Defense Startup Anduril's Attack Drones Fall Flat, Drawing Outside Doubts | International Focus | International | Economic Daily News

2025-11-28
Udnemoney (UDN Money)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: the Altius drones use AI-based command and control software. The failures occurred during testing (malfunction), and while no actual harm has been reported, the potential for harm in operational use is credible given the military application and previous deployment in conflict zones. The article focuses on the testing failures and their implications, not on actual harm caused, so it fits the definition of an AI Hazard rather than an AI Incident. The company's response emphasizes that failures are part of development, reinforcing that harm is potential rather than realized.

Delivered to Taiwan Only in August; US Attack Drone Altius Crashes Twice in Test Flights | International | Central News Agency (CNA)

2025-11-28
Central News Agency
Why's our monitor labelling this an incident or hazard?
The Altius drones are AI systems used for military purposes, involving autonomous or semi-autonomous flight and strike capabilities. The reported crashes during testing represent malfunctions of these AI systems. While no actual harm has occurred yet, the failures highlight credible risks that could lead to harm if the drones are deployed in combat. The article does not describe realized harm but focuses on the malfunction and its implications, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

Silicon Valley-Style Defense Industry Stumbles? Anduril's Unmanned Vehicles Malfunction Frequently, Dropped by Ukrainian Forces | Technology | Central News Agency (CNA)

2025-12-01
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned (Anduril's autonomous system software Lattice) whose malfunction has directly led to operational failures and safety risks in military unmanned vehicles and drones. The harm includes disruption of critical military operations and potential risk of personnel injury, fulfilling the criteria for an AI Incident. The article reports realized harm and operational impact, not just potential risk, so it is not merely an AI Hazard or Complementary Information. The involvement of AI in autonomous control and decision-making is clear and central to the incident.

Just Delivered to Taiwan This Year! Silicon Valley Defense Star Anduril's Drones Reportedly Crash Repeatedly in US Military Tests

2025-11-28
Commercial Times
Why's our monitor labelling this an incident or hazard?
The drones are AI systems as they perform autonomous or semi-autonomous reconnaissance and combat tasks. The crashes during testing and poor performance in electronic warfare are malfunctions or failures in use that have directly led to harm in terms of operational disruption and potential risk to military personnel and assets. This fits the definition of an AI Incident because the AI system's malfunction has directly led to harm (disruption of critical military operations and potential safety risks).

Silicon Valley-Style Defense Industry Stumbles? Anduril's Unmanned Vehicles Malfunction Frequently, Dropped by Ukrainian Forces

2025-12-01
Commercial Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous control software for unmanned vehicles and drones) whose malfunction and performance failures have directly led to operational risks and potential harm to personnel safety, as well as the discontinuation of use by Ukrainian forces due to ineffectiveness and vulnerabilities. This meets the definition of an AI Incident because the AI system's malfunction and use have directly or indirectly caused harm or risk of harm to people and military operations. The article reports realized harms and operational failures, not just potential risks, so it is not merely an AI Hazard or Complementary Information.

US-Made Drones Crash Twice in Testing, Raising Doubts; Taiwan Military's Procurement Proceeds as Planned | Wire News

2025-12-01
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The drones are AI systems performing autonomous navigation and reconnaissance. The crashes during testing are malfunctions of these AI systems, directly causing harm to property and potentially undermining military operational capabilities. The article reports realized harm (crashes) and operational failure, not just potential risk. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The involvement of AI in the malfunction and the resulting harm to property and military readiness justifies this classification.

Just Delivered to Taiwan! US Drone Unicorn Anduril Reportedly 'Crashes Twice' in Testing - Liberty Times Military Channel

2025-11-28
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The UAVs described are AI systems as they perform autonomous navigation and complex operational tasks. The reported crashes and failures are malfunctions of these AI systems that have directly led to harm in the form of property damage and disruption of military operations. The article details actual incidents of loss of control and crashes, not just potential risks, thus meeting the criteria for AI Incidents. The involvement of AI in these malfunctions is explicit and central to the harm described.