AI-Enabled Drones Transforming Russia-Ukraine Warfare

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Russia and Ukraine are deploying AI-powered drones—from Russia’s S-70 Okhotnik and Inferno FPV bombers to mass-controlled swarm UAVs and Ukraine’s self-learning targeting quadcopters—to conduct autonomous reconnaissance, carpet-bomb positions, and deliver explosives, revolutionizing modern warfare and raising ethical and strategic concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses an AI system (self-teaching targeting algorithms) used in drones that directly influence physical outcomes on the battlefield by guiding explosive drones to targets. The AI's use is part of the drone's operation and directly leads to harm to Russian military forces. This fits the definition of an AI Incident because the AI system's use has directly led to injury or harm to groups of people in the context of warfare. Although the article also discusses potential risks and limitations, the realized harm through AI-assisted drone strikes is clear and central to the report.[AI generated]
AI principles
Accountability; Safety; Respect of human rights; Robustness & digital security; Transparency & explainability; Democracy & human autonomy; Privacy & data governance; Human wellbeing

Industries
Government, security, and defence; Robots, sensors, and IT hardware

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Economic/Property; Environmental; Psychological; Public interest; Human or fundamental rights

Severity
AI incident

AI system task:
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard

DJI's New Drone Could Change War -- But It's Not Supposed To Be A Weapon

2024-01-16
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of autonomous or semi-autonomous drones with advanced navigation, obstacle avoidance, and payload delivery capabilities. The article focuses on the potential and ongoing military use of these drones as weapons, which could harm people and communities. Although no specific new harm involving the FlyCart 30 is reported, the article clearly outlines the plausible future harm from its military use and the existing harm from similar DJI drones. This therefore constitutes an AI Hazard: the drone's military application poses a credible risk of harm, despite DJI's non-military intentions and restrictions. It is not Complementary Information, because the article is not primarily about responses or updates to past incidents but about the new drone's potential impact, and it is not an AI Incident, because no direct or indirect harm from the FlyCart 30 is reported to have occurred yet.
Ukraine's AI Drones Are Making War Much Deadlier for Russia

2024-01-15
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (self-teaching targeting algorithms) used in drones that directly influence physical outcomes on the battlefield by guiding explosive drones to targets. The AI's use is part of the drone's operation and directly leads to harm to Russian military forces. This fits the definition of an AI Incident because the AI system's use has directly led to injury or harm to groups of people in the context of warfare. Although the article also discusses potential risks and limitations, the realized harm through AI-assisted drone strikes is clear and central to the report.
Ukraine's AI Drones Revolutionize Warfare, Posing a Formidable Challenge to Russia | Cryptopolitan

2024-01-15
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (self-teaching targeting algorithms) integrated into drones used in warfare by Ukraine, which play a direct role in military strikes. The use of AI for targeting and autonomous control in conflict zones directly relates to harm to persons and potential violations of human rights, especially given the allegations concerning Israel's AI-driven air strikes on civilians. These are direct uses of AI systems in causing or contributing to harm, fitting the definition of an AI Incident. The article discusses realized harms and ongoing conflict implications, not just potential risks or future hazards.
Analysis: Could Heavy Lift Delivery Drones Replace Conventional Artillery in Battle?

2024-01-17
KyivPost
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in the form of autonomous or semi-autonomous drones used in combat, including their use to deliver explosives and conduct surveillance. The drones' AI capabilities enable targeting and payload delivery that have directly caused harm to military vehicles and personnel, fulfilling the criteria for an AI Incident. The harm is realized, not hypothetical, as the drones are actively used in warfare causing injury and property damage. The involvement of AI in the development, use, and adaptation of these drones for military purposes is clear. Therefore, this event is best classified as an AI Incident.
Burn on the 4th of July - Angry Bear

2024-01-18
AngryBear Econ
Why's our monitor labelling this an incident or hazard?
The article describes the development and use of AI-enabled drone swarms capable of coordinated flight and control, which could plausibly lead to significant harm such as disruption of critical infrastructure or military conflict. Although no actual harm or incident is reported, the discussion of the drones' overwhelming defense capabilities and military implications indicates a credible risk of future harm. Therefore, this qualifies as an AI Hazard rather than an Incident or Complementary Information.
Robots Are Fighting Robots in Russia's War in Ukraine

2024-01-30
Wired
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems in the form of unmanned ground vehicles and drones being used in active military conflict, where their deployment and attacks have led to harm. The AI systems are used for combat, surveillance, and logistics, and their use has directly contributed to harm in the war. The presence of autonomous or semi-autonomous robots and drones in combat roles, and their active engagement in attacks, meets the criteria for an AI Incident as the AI system's use has directly led to harm to persons and communities in the war zone.
Ukraine deploying machine-gun mounted robots to attack Putin's troops

2024-01-29
Newsweek
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in the form of remotely controlled combat robots equipped with machine guns actively deployed in warfare, which directly leads to harm to persons and property. The article explicitly mentions AI-enabled combat robots used in attacks, reconnaissance, and fire support roles, indicating AI system involvement in causing harm. Therefore, this qualifies as an AI Incident due to direct harm caused by AI system use in military conflict.
Ukraine Wants a Million Drone Army - 'Ground Drones' are Ready to

2024-01-30
The National Interest
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (unmanned aerial and ground drones with sensors, targeting, and remote control capabilities) in active military combat, which has directly caused harm to people and property. The article explicitly states these drones have been used to strike enemy tanks, bases, and positions, resulting in real harm. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to harm in a conflict setting.
Russians Have Begun Using "Smart" Drones with Machine Vision in Ukraine: What Threat They Pose

2024-02-11
ТСН.ua
Why's our monitor labelling this an incident or hazard?
The drones described employ machine vision, a form of AI system, to autonomously identify and engage targets, reducing pilot requirements and overcoming countermeasures. Their deployment in warfare has directly caused harm to persons and property, fulfilling the criteria for an AI Incident under the framework. The article details actual use and harm, not just potential risk, so it is not a hazard or complementary information.
Occupiers Have Begun Using "Smart" Drones at the Front: An Analyst Explains the Danger

2024-02-11
unian
Why's our monitor labelling this an incident or hazard?
The event involves the use of drones with machine vision that autonomously identify and strike targets, which qualifies as an AI system due to its autonomous decision-making capabilities. The drones have been used in active combat, successfully hitting targets despite electronic countermeasures, indicating direct harm caused by the AI system's use. The harm includes damage to military assets and potential injury or death to personnel, fitting the definition of harm to persons or groups. Therefore, this event is classified as an AI Incident.
Russians Have Begun Using "Smart" Drones in Ukraine: Why They Are Dangerous

2024-02-11
Gazeta.ua
Why's our monitor labelling this an incident or hazard?
The event involves the use of autonomous drones with machine vision, which qualifies as an AI system under the definition since it infers from input (visual data) how to generate outputs (targeting and attacking decisions) influencing physical environments. The drones have been used in active combat, successfully striking targets despite electronic warfare countermeasures, indicating direct harm to persons and property. The article explicitly states these drones have been used operationally, not just in testing, and have caused damage. Although the expert notes machine vision differs from AI with neural networks, the definition includes machine vision as AI. Hence, the event is an AI Incident due to direct harm caused by the AI system's use in warfare.
Russians Are Deploying "Smart Drones" with Machine Vision - Expert

2024-02-11
ZN.UA
Why's our monitor labelling this an incident or hazard?
The drones described use machine vision to autonomously identify and attack targets, which fits the definition of an AI system influencing physical environments. Their deployment in military conflict has directly caused harm through attacks, fulfilling the criteria for an AI Incident. The expert's distinction between machine vision and AI with neural networks does not negate the autonomous decision-making and targeting functions that cause harm. The mention of future AI drone deployment by Ukraine is complementary context but does not change the classification of the current use as an incident.
Russian Armed Forces Have Deployed Drones with Machine Vision - What Sets the Enemy UAVs Apart - Video

2024-02-10
ФОКУС
Why's our monitor labelling this an incident or hazard?
The drones described use machine vision, a form of AI-related technology enabling autonomous target acquisition and navigation, which allows them to bypass electronic warfare defenses and successfully strike targets. This operational use has directly led to harm against Ukrainian military personnel and equipment, fulfilling the criteria for an AI Incident. Although the article distinguishes machine vision from more advanced AI, the autonomous capabilities described fit within the definition of an AI system influencing physical environments with harmful outcomes. Hence, this is classified as an AI Incident rather than a hazard or complementary information.
The Russian Army Has Begun Using "Smart" Drones with Machine Vision in Ukraine - Expert

2024-02-11
WAR OBOZREVATEL
Why's our monitor labelling this an incident or hazard?
The event involves the use of drones with machine vision, which qualifies as an AI system component because it enables autonomous target detection and attack. The drones have been used in combat, resulting in actual harm to Ukrainian forces, fulfilling the criteria for an AI Incident. The article explicitly states that these drones can bypass electronic warfare and autonomously complete attacks once a target is fixed, indicating AI system use leading directly to harm. Although the expert distinguishes machine vision from full AI, the system's autonomous capabilities and impact on physical harm meet the definition of an AI Incident.