AI-Guided Drones Cause Majority of Losses in Ukraine War

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-enabled autonomous drones are reported to account for 80–90% of losses on both sides in the Ukraine war; among them is the HX-2 kamikaze drone supplied by Helsing. These AI-guided loitering munitions are deployed by the Ukrainian military and under evaluation by the German Bundeswehr, raising significant concerns about AI-driven warfare. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly references autonomous drone systems (kamikaze drones) undergoing testing for military use, which involve AI for autonomy. Although no incident of harm has occurred yet, the development and potential deployment of such AI-enabled autonomous weapons plausibly could lead to significant harms, including injury or death in conflict, disruption, and broader societal impacts. The discussion about procurement and strategic emphasis on autonomous systems indicates credible future risks. Hence, this is an AI Hazard rather than an AI Incident or Complementary Information. [AI generated]
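
Read across the entries below, the monitor's triage reduces to a simple decision rule: realized harm traced to an AI system makes an AI Incident; a credible but unrealized potential for harm makes an AI Hazard; context-only coverage is Complementary Information. A minimal sketch of that rule, assuming nothing about the AIM's actual pipeline (the function and flag names here are purely illustrative):

```python
from enum import Enum

class Label(Enum):
    AI_INCIDENT = "AI Incident"
    AI_HAZARD = "AI Hazard"
    COMPLEMENTARY = "Complementary Information"

def triage(ai_involved: bool, harm_realized: bool, harm_plausible: bool) -> Label:
    """Toy encoding of the triage logic described above; not the AIM's real classifier."""
    if not ai_involved:
        return Label.COMPLEMENTARY      # no AI system in play
    if harm_realized:
        return Label.AI_INCIDENT        # e.g. drones already causing losses in combat
    if harm_plausible:
        return Label.AI_HAZARD          # e.g. systems still under procurement and testing
    return Label.COMPLEMENTARY          # policy or strategy coverage only

# This page's headline call: AI involved, no realized harm reported here, credible future risk.
print(triage(ai_involved=True, harm_realized=False, harm_plausible=True))  # Label.AI_HAZARD
```
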
AI principles
Accountability, Safety, Respect of human rights, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Government

Harm types
Physical (death), Economic/Property

Severity
AI hazard

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

Drones dominate the Ukraine war: Too many tanks: Helsing CEO decries misguided armament priorities

2025-11-02
N-tv
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous drones with AI guidance for targeting and attack, which have directly contributed to harm (losses in the Ukraine war). The use of these AI-enabled loitering munitions has materially influenced the conflict, causing injury and death, thus meeting the criteria for an AI Incident. The article also discusses the development, deployment, and testing of these systems, confirming their active use and harm caused. Therefore, this is an AI Incident rather than a hazard or complementary information.

Armaments: Helsing advocates a massive drone buildup of the Bundeswehr

2025-11-02
Handelsblatt
Why's our monitor labelling this an incident or hazard?
The article explicitly references autonomous drone systems (kamikaze drones) undergoing testing for military use, which involve AI for autonomy. Although no incident of harm has occurred yet, the development and potential deployment of such AI-enabled autonomous weapons plausibly could lead to significant harms, including injury or death in conflict, disruption, and broader societal impacts. The discussion about procurement and strategic emphasis on autonomous systems indicates credible future risks. Hence, this is an AI Hazard rather than an AI Incident or Complementary Information.

Drones score the most hits in the Ukraine war

2025-11-02
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems supporting the guidance of loitering munitions (drones) that have caused significant losses in the Ukraine war, which constitutes direct harm to persons and property. The use of these AI-enabled autonomous weapons systems in active combat meets the definition of an AI Incident, as the AI's use has directly led to injury and harm. The discussion of procurement and trials further confirms the operational deployment of these AI systems. Hence, this is an AI Incident rather than a hazard or complementary information.

Defence entrepreneur calls for more investment in drones

2025-11-02
ZDFheute
Why's our monitor labelling this an incident or hazard?
The article explicitly references autonomous drone systems used in military contexts, which involve AI for autonomous operation. Although no direct harm is described in this article, the production and deployment of such AI-enabled autonomous weapons inherently carry significant risks of harm, including injury and violations of human rights. The event is thus best classified as an AI Hazard, reflecting the credible potential for AI-related harm in the future due to these systems' development and use.

Drones in the Ukraine conflict: New priorities for the Bundeswehr

2025-11-02
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (autonomous drones with AI guidance) and discusses their use in an active conflict, which inherently involves harm. However, the article does not describe a specific AI Incident (a particular event where AI use directly or indirectly caused harm) or an AI Hazard (a specific event or circumstance where AI use could plausibly lead to harm). Instead, it provides complementary information about the evolving military and policy landscape regarding AI-enabled autonomous weapons. Therefore, the article is best classified as Complementary Information, as it informs about the broader context and strategic considerations without reporting a new incident or hazard.

Bundeswehr examines the use of AI-powered kamikaze drones

2025-11-02
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (AI-supported kamikaze drones and autonomous drones) and discusses their use and testing by the Bundeswehr. However, it does not describe any realized harm or incident resulting from their deployment by the Bundeswehr. The focus is on the potential and strategic implications of these AI systems, including budgetary and industrial considerations. Since the article highlights the plausible future use and associated risks of AI-enabled autonomous weapons systems without reporting actual harm or incidents, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Germany supplies Ukraine with drones similar to the Russian "Lancet"

2025-11-12
РБК-Украина
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous navigation and AI software for target recognition) in military drones, which are being mass-produced and integrated into active conflict logistics. While the use of AI in armed drones inherently carries a credible risk of harm (injury, death, or property damage) due to their military application, the article does not describe any realized harm or incidents resulting from their deployment. Therefore, this situation represents a plausible future risk of harm from AI systems, qualifying it as an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the AI system's capabilities and production with potential for harm, not on responses or updates to past incidents.

Germany supplies Ukraine with drones that strike targets at 100 km

2025-11-12
bigmir)net
Why's our monitor labelling this an incident or hazard?
The HX-2 drone is described as having autonomous navigation systems and the ability to operate without GNSS by matching maps and images, which indicates AI system involvement. The article focuses on the production scale-up and deployment to a conflict zone, implying potential future use that could lead to harm. No direct harm from the AI system is reported in the article, so it does not qualify as an AI Incident. The event is not merely complementary information since it highlights the potential for harm from the autonomous AI-enabled weapon system. Hence, it fits the definition of an AI Hazard.
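
The GNSS-free operation described above is typically some form of scene matching: the drone correlates its camera view against a stored reference map to recover a position fix. Helsing has not published how the HX-2 does this, so the sketch below only shows the textbook baseline, brute-force normalized cross-correlation; the function name and array shapes are our own illustration:

```python
import numpy as np

def locate(frame: np.ndarray, ref_map: np.ndarray) -> tuple[int, int]:
    """Estimate where `frame` sits inside `ref_map` (both 2-D grayscale arrays)
    via brute-force normalized cross-correlation. Real systems use far more
    robust feature- or learning-based matching; this is only the baseline idea."""
    fh, fw = frame.shape
    f = (frame - frame.mean()) / (frame.std() + 1e-9)
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ref_map.shape[0] - fh + 1):
        for x in range(ref_map.shape[1] - fw + 1):
            patch = ref_map[y:y + fh, x:x + fw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float((f * p).mean())   # correlation of normalized patches
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos  # top-left offset of the best match = position fix on the map

# Toy check: embed a "camera frame" in a larger "reference map" and recover it.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
frame = ref[20:36, 30:46]
assert locate(frame, ref) == (20, 30)
```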

10,000 HX-2 for Ukraine: German UAV manufacturer prepares for large-scale serial production

2025-11-12
5 канал
Why's our monitor labelling this an incident or hazard?
The HX-2 drone is equipped with AI systems for autonomous navigation and target recognition, which are integral to its operation in combat. The article discusses the scaling up of production and the imminent integration of the drones into frontline operations. While no specific harm has yet been reported, the use of AI in armed drones inherently carries a credible risk of injury, death, and violations of human rights. This fits the definition of an AI Hazard: the AI system's use could plausibly lead to an AI Incident involving harm to persons and communities. Since there is no indication that harm has occurred yet, it is not an AI Incident; nor is it merely complementary information, as it highlights the potential for significant harm from AI-enabled weapon systems.

Armed Forces of Ukraine drones: German HX-2 to receive AI for obstacle recognition (details)

2025-11-12
ФОКУС
Why's our monitor labelling this an incident or hazard?
The event involves an AI system under development for a military drone; its autonomous target recognition and environmental adaptation capabilities qualify it as an AI system. However, there is no indication that the system has caused any harm or malfunctioned yet. The article discusses future improvements and testing, implying potential future risks but no current incidents. This therefore qualifies as an AI Hazard: the system's development and intended use in a military strike drone could plausibly lead to harm, but none has yet occurred or been reported.

Russians in fear: Ukraine now has its own German "Lancets"

2025-11-12
Комментарии Украина
Why's our monitor labelling this an incident or hazard?
The HX-2 drone is an AI system as it uses autonomous navigation and AI software for target recognition and adaptation to countermeasures. Its deployment in Ukraine's military operations means the AI system's use directly leads to harm (injury or death) to persons or groups, fulfilling the criteria for an AI Incident. The article reports actual use and production, not just potential risk, so it is not merely a hazard. The harm is direct and ongoing due to the drone's combat role.

Germany scales up production and supplies Ukraine with drones similar to the Russian "Lancet", Defense Express reports

2025-11-12
censor.net
Why's our monitor labelling this an incident or hazard?
The HX-2 drone is an AI-enabled system due to its autonomous navigation features and complex operational capabilities. Its deployment in a conflict zone and use as a weapon system directly relates to potential harm, including injury or harm to persons and harm to communities. The article describes active use and production of these drones, which are capable of causing physical harm. Therefore, this event involves the use of an AI system that has directly or indirectly led to harm or the potential for harm in an armed conflict context, qualifying it as an AI Incident.

Technical specifications of the German Helsing HX-2 combat drone revealed

2025-11-13
InternetUA
Why's our monitor labelling this an incident or hazard?
The Helsing HX-2 drone is an AI system with autonomous target recognition and strike capabilities. Its deployment in combat, including successful autonomous strike missions, directly involves AI in causing harm to persons and property in warfare. The article explicitly states the use of AI for autonomous target identification and attack, with real-world operational use and successful strikes. This meets the criteria for an AI Incident because the AI system's use has directly led to harm in a military context. Although the operator confirms the strike, the AI system's autonomous functions are pivotal in the harm caused. Hence, this is not merely a hazard or complementary information but an AI Incident.