Ukraine Deploys Autonomous AI-Guided FPV Drones in Combat Despite Jamming

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Ukrainian forces have begun using FPV drones equipped with AI-powered autonomous targeting systems that can lock onto and strike targets even after losing communication due to electronic warfare. These drones have successfully destroyed enemy assets, marking a significant escalation in the use of AI-driven lethal autonomous weapons in active conflict.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the development and potential use of AI-enabled autonomous guidance systems for drones. These qualify as AI systems by definition: they infer from inputs to generate outputs (targeting and navigation) that influence physical environments. The article does not report any realized harm yet, but it highlights the plausible future harm of deploying such autonomous drones in warfare, including injury, death, and property damage. Because the autonomous targeting capability could bypass existing countermeasures, it increases risk. Hence the event fits the definition of an AI Hazard: the AI system's use could plausibly lead to an AI Incident in the future. There is no indication of current harm, so it is not an AI Incident, and it is not merely complementary information or unrelated, since the focus is on the development and potential impact of the AI system.[AI generated]
AI principles
Accountability
Safety
Respect of human rights
Democracy & human autonomy
Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
Government

Harm types
Physical (death)

Severity
AI hazard

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard

Ukraine Rolls Out Target-Seeking Terminator Drones

2024-03-21
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (machine vision-based autonomous attack drones) being used in active conflict to identify and attack targets without human control. The AI system's use has directly led to realized harm, namely the destruction of enemy vehicles, and poses additional risk to civilians. The event involves the use and deployment of AI systems causing physical harm, fitting the definition of an AI Incident. Ethical concerns and the potential for misuse further support the classification, but the realized harm is the key factor.

Ukrainian troops successfully use autonomous FPV drone for the first time

2024-03-20
Defence Blog
Why's our monitor labelling this an incident or hazard?
The drone is described as having autonomous targeting capabilities, which implies the presence of an AI system controlling its actions. The event involves the use of this AI system in a military context to strike and destroy a tank, which constitutes harm to property and potentially injury or harm to persons. Since the AI system's use directly led to this harm, this qualifies as an AI Incident under the framework. The event is not merely a potential risk but a realized harm caused by the AI system's operation.

Ukrainian Forces Reportedly Developing Autonomous Guidance System for FPV Drones

2024-03-21
KyivPost
Why's our monitor labelling this an incident or hazard?
The event involves the development and potential use of AI-enabled autonomous guidance systems for drones. These qualify as AI systems by definition: they infer from inputs to generate outputs (targeting and navigation) that influence physical environments. The article does not report any realized harm yet, but it highlights the plausible future harm of deploying such autonomous drones in warfare, including injury, death, and property damage. Because the autonomous targeting capability could bypass existing countermeasures, it increases risk. Hence the event fits the definition of an AI Hazard: the AI system's use could plausibly lead to an AI Incident in the future. There is no indication of current harm, so it is not an AI Incident, and it is not merely complementary information or unrelated, since the focus is on the development and potential impact of the AI system.

Ukraine, Russia Compete to Develop Jam-Proof FPV Drones

2024-03-22
The Defense Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled autonomous drones that identify and strike military targets, directly causing harm in an active conflict zone. The AI system's use in lethal autonomous weaponry leads to injury and harm to persons and communities, fulfilling the criteria for an AI Incident. The involvement is through the use of AI systems in operational attack drones, and the harm is realized, not just potential. Therefore, this event qualifies as an AI Incident.

Ukraine begins using FPV drones with post-target-lock autonomous homing

2024-03-20
Euromaidan Press
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems in the form of autonomous drones with machine vision and automatic target acquisition that have been used to successfully strike targets despite electronic warfare interference. The drones' autonomous continuation of a mission after communication is disrupted demonstrates resilient, autonomous decision-making that leads to physical harm. The deployment of these drones in an active conflict zone implies direct harm to persons or groups, fulfilling the criteria for an AI Incident. The mention of fundraising and plans for mass production further indicates ongoing use, and the Russian deployment of similar drones reinforces the classification as an AI Incident rather than a hazard or complementary information.

FPV drones with autonomous guidance system emerge in Ukraine

2024-03-20
Militarnyi
Why's our monitor labelling this an incident or hazard?
The drones described have autonomous guidance systems, which qualify as AI systems: they infer from inputs (target locking) to generate outputs (attack actions) that influence physical environments. The event reports actual use of these drones in military operations where they successfully hit targets, implying direct harm to property and possibly persons. Their continued autonomous operation after communication loss reflects a deliberate design for resilience, and that capability contributed to the harm. Hence this is an AI Incident, as the AI system's use directly led to harm in a conflict context.

Ukrainian Armed Forces start using FPV drones with auto-guidance on the front line: what is known about them

2024-03-21
Obozrevatel
Why's our monitor labelling this an incident or hazard?
The drones described employ AI systems for automatic target recognition and attack, a direct use of AI in a military context. The event reports actual deployment and use of these AI-enabled drones in combat, which inherently causes harm to persons and property. Interference by electronic warfare is noted, but the AI system continues to operate autonomously, indicating AI's pivotal role in the incident. Therefore, this is an AI Incident due to the realized harm caused by the AI system's use in warfare.

US-China competition to field military drone swarms could fuel global arms race

2024-04-12
ABC News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems—autonomous drone swarms with complex AI-driven coordination and decision-making capabilities. The event concerns the development and military use of these AI systems, which could plausibly lead to harms including conflict escalation, violations of human rights, and destabilization of international security. No actual harm or incident is reported as having occurred yet, but the credible risk and potential for significant harm are well established. Hence, this is classified as an AI Hazard rather than an AI Incident or Complementary Information.

American drones are glitching and getting lost in Ukraine, giving way to a flood of Chinese drones

2024-04-10
Business Insider
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems as they perform autonomous or semi-autonomous functions in military operations. The reported issues such as drones getting lost, failing to take off, or not returning home indicate malfunctions in the AI systems. These malfunctions have direct consequences on military operations, which can lead to harm to persons or groups involved in the conflict, fulfilling the criteria for an AI Incident. The article does not merely speculate about potential harm but reports actual operational failures impacting the conflict, thus qualifying as an AI Incident rather than a hazard or complementary information.

Ukraine is creating AI-powered drone to target Russian troops

2024-04-09
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous drones capable of identifying and striking targets without human operators, which have been used in combat resulting in destruction of military assets and casualties. This constitutes direct harm to people and property caused by the use of AI systems. Therefore, this is an AI Incident rather than a hazard or complementary information, as the harm is realized and directly linked to the AI system's use.

US-China competition to field military drone swarms could fuel...

2024-04-12
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in the form of autonomous drone swarms with complex AI-driven coordination and decision-making capabilities. It focuses on the development and military use of these AI systems and the plausible future harms they could cause, such as increased global instability, conflict escalation, and potential misuse by rogue actors. No actual harm or incident is reported as having occurred yet, but the credible risk of harm is well established. Hence, this is an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the risk and competition driving the development of these AI systems with potential for harm, not on responses or updates to past incidents. It is not unrelated because the event is clearly about AI systems and their military use with significant risk implications.

Ukraine developing 'unstoppable' AI-powered attack drone with Western backing

2024-04-09
Aol
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems in autonomous attack drones that can independently identify and engage targets, which directly relates to harm in armed conflict (injury, death, destruction). The AI system's role in enabling autonomous lethal action makes this a clear AI Hazard, as the article does not report a specific incident of harm caused yet but describes the credible potential for harm through autonomous lethal operations. The article also notes controversy and concerns about the use of such drones, reinforcing the plausible risk. Therefore, this is classified as an AI Hazard due to the plausible future harm from the AI system's autonomous lethal capabilities.

US-China competition to field military drone swarms could fuel global arms race

2024-04-12
The Indian Express
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems embedded in military drone swarms that can autonomously coordinate and execute missions, which fits the definition of an AI system. The event concerns the development and potential use of these AI systems in military applications, which could plausibly lead to harms including conflict escalation, violations of human rights, and harm to communities. No actual harm or incident is reported yet, but the credible risk and warnings about proliferation and instability justify classification as an AI Hazard rather than an AI Incident. The article also mentions governance challenges and calls for cooperation, but these are contextual and do not change the primary classification.

US-China Competition to Field Military Drone Swarms Could Fuel Global Arms Race

2024-04-12
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems—military drone swarms with autonomous capabilities—and discusses their development and intended use in warfare. The harms described (escalation of conflict, proliferation to rogue actors, instability) are plausible future harms that could result from these AI systems. There is no report of actual harm or incident caused by these systems yet, only a credible risk and ongoing arms race. Thus, the event fits the definition of an AI Hazard, not an AI Incident. It is not Complementary Information because the article is not primarily about responses or updates to past incidents, nor is it unrelated as it clearly concerns AI systems and their risks.

Ukraine making 'unstoppable' attack drone powered by robotic super brain

2024-04-08
EXPRESS
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems because they use image recognition and autonomous targeting to reach enemy targets without full human control. Their use in military attacks directly leads to harm to persons and property, fulfilling the criteria for an AI Incident. The article reports ongoing use and development, indicating realized harm rather than just potential risk. Therefore, this event qualifies as an AI Incident due to the AI system's direct involvement in causing harm in an armed conflict.

Competition to field military drone swarms could fuel global arms race between the US and China

2024-04-12
PBS.org
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically AI-enabled drone swarms capable of autonomous coordinated operations. The event is about the development and military use of these AI systems and the associated risks. No actual harm or incident has occurred yet, but the article emphasizes the credible risk that these AI systems could lead to significant harm in the future, including armed conflict and proliferation to malicious actors. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the focus is on AI systems and their potential harms.

Ukraine AI 'one-way' drones will map, hunt and strike Russian targets from afar

2024-04-09
Daily Star
Why's our monitor labelling this an incident or hazard?
The described AI system is an autonomous lethal drone capable of independently identifying and attacking targets, which directly relates to the use of AI in military operations. Although the article does not report a specific incident of harm caused by these drones yet, the deployment of such autonomous weapons systems in an ongoing war plausibly leads to significant harm, including injury or death to persons and damage to property. The AI system's role in lethal targeting and autonomous engagement presents a credible risk of harm, qualifying this as an AI Hazard. There is no indication that harm has already occurred from these specific AI drones, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the imminent use of AI systems with lethal potential.

US-China competition to field military drone swarms could fuel global arms race

2024-04-12
The Seattle Times
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in the form of autonomous drone swarms with complex behaviors and semi-autonomous decision-making capabilities. It focuses on the development and military use of these AI systems, which could plausibly lead to harms such as armed conflict escalation, violations of human rights, and destabilization of global security. No actual incident of harm is reported, but the credible risk of future harm from these AI systems is central to the article. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

US-China competition to field military drone swarms could fuel global arms race

2024-04-12
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: AI software enabling autonomous drone swarms with complex behaviors and semi-autonomous operations. The article focuses on the development and potential use of these AI systems in military contexts, which could plausibly lead to harms such as conflict escalation, violations of human rights, and destabilization of global security. Since no actual harm or incident is reported as having occurred yet, but the risk of such harm is credible and significant, this qualifies as an AI Hazard. The article does not describe a realized AI Incident, nor is it merely complementary information or unrelated news.

U.S.-China competition to field military drone swarms could fuel global arms race

2024-04-12
Washington Times
Why's our monitor labelling this an incident or hazard?
The article discusses AI systems embedded in military drone swarms that can autonomously coordinate and execute missions, which fits the definition of AI systems. The event is about the development and potential use of these AI-enabled weapons, which could plausibly lead to significant harms such as conflict escalation, violations of human rights, and destabilization of global security. Since no actual harm or incident is reported yet, but the risk is credible and well-articulated, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information. It is clearly related to AI systems and their potential harms, so it is not Unrelated.

US-China competition to field military drone swarms could fuel global arms race

2024-04-12
Star Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems integrated into military drones that operate autonomously or semi-autonomously in swarms, which fits the definition of AI systems. The event is about the development and use of these AI systems in military contexts, with a focus on the plausible future harms such as escalation of conflict, proliferation to rogue actors, and global arms race instability. No actual harm or incident is reported as having occurred yet, but the credible risk of harm is clearly articulated. Hence, it is an AI Hazard rather than an AI Incident. The article is not merely general AI news or complementary information, as it centers on the risk posed by these AI-enabled military drone swarms.

US-China competition to field military drone swarms could fuel global arms race

2024-04-12
WTOP
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems embedded in military drone swarms that coordinate autonomously and can adapt mid-mission, indicating clear AI involvement. Although no direct harm or incident has yet occurred, the article outlines credible risks of future harm including global arms race escalation, conflict, and misuse by non-state actors. The development and deployment of these AI-enabled drone swarms could plausibly lead to AI Incidents involving harm to people, communities, and international security. Since the harms are potential and not yet realized, the event fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information. It is clearly related to AI systems and their risks, so it is not Unrelated.

U.S.-China competition to field military drone swarms could fuel global arms race

2024-04-12
The Montreal Gazette
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI-enabled drone swarms) and concerns their development and potential use in military conflict. While no direct harm has been reported yet, the article emphasizes the credible risk of increased global instability and conflict due to the proliferation of these AI-powered weapons. This fits the definition of an AI Hazard, as the development and competition in AI military drone swarms could plausibly lead to an AI Incident involving harm to communities or critical infrastructure. There is no indication of realized harm or incident, so it is not an AI Incident. It is not Complementary Information because the article focuses on the risk and competition itself, not on responses or updates to past incidents. It is not Unrelated because the AI system and plausible harm are central to the article.

US-China competition to field military drone swarms fuels global arms race

2024-04-12
TRT World
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically AI-enabled drone swarms with autonomous capabilities. The event is about the development and military use of these AI systems, which could plausibly lead to significant harms such as conflict escalation, civilian targeting, and destabilization. Since no actual harm or incident has been reported, but the risk is credible and well articulated, this qualifies as an AI Hazard. It is not Complementary Information because the main focus is on the potential risks and arms race dynamics, not on responses or updates to past incidents. It is not an AI Incident because no realized harm has occurred yet.

Ukraine creating AI-powered super-drones that can hunt down Putin's troops

2024-04-09
The Irish Sun
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous drones with AI-powered image recognition and navigation capabilities used in military operations. The AI system's use has directly led to harm, including destruction of military property and likely injury or death to personnel, fulfilling the criteria for an AI Incident. The article details actual deployment and successful strikes, not just potential or planned use, confirming realized harm. Therefore, this is an AI Incident involving the use of AI in lethal autonomous weapons causing direct harm.

Up Next: The 'Swarms' Race. US-China competition to field military drone swarms could fuel global arms race (Frank Bajak, AP Technology Writer)

2024-04-12
BusinessMirror
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems embedded in military drones that operate autonomously or semi-autonomously in swarms, coordinating attacks and adapting to new objectives without direct human orders. The development and use of these AI systems in military contexts could plausibly lead to harms such as conflict escalation, civilian casualties, and global instability. Although no specific incident of harm is reported, the credible risk and potential for misuse or proliferation of these AI-enabled weapons fit the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the risks posed by these AI systems.

Ukraine uses new type of suicide drones against Russian missile systems

2024-04-10
Defence Blog
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems because they autonomously acquire and engage targets using machine vision and target auto-tracking, which are AI capabilities. Their deployment has directly led to the destruction of Russian missile systems, which is harm to property and military infrastructure. This fits the definition of an AI Incident, as the AI system's use has directly caused harm. The event is not merely a potential hazard or complementary information but a realized incident involving AI systems causing harm.

Ukraine 'Army of Drones' Destroys 229 Russian Military Vehicles in One Week

2024-04-12
The Defense Post
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of autonomous drones used in military operations. Their use has directly led to harm, including destruction of military vehicles and casualties, which qualifies as injury or harm to groups of people (military personnel) and harm to property (military equipment). Therefore, this is an AI Incident due to the direct involvement of AI-enabled autonomous systems causing harm in an armed conflict context.

Ready for the race: Air separation drone swarms vs. air defence systems

2024-04-11
Shephard Media
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (drone swarms with AI/ML-enabled coordination and semi-autonomy) and their development and potential use in military contexts. Although no direct harm or incident has occurred yet, the article highlights credible and plausible future harms such as overwhelming air defense systems, reconnaissance, cyber and electronic warfare, and potential use as suicide weapons. This fits the definition of an AI Hazard, as the development and near-future deployment of these AI-enabled drone swarms could plausibly lead to significant harms on the battlefield. There is no indication of realized harm or incident at this stage, so it is not an AI Incident. The article is not primarily about governance responses or updates, so it is not Complementary Information. It is clearly related to AI systems and their military application, so it is not Unrelated.

Ukraine is creating AI-powered drone to target Russian troops

2024-04-09
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-enabled autonomous drones) in active military conflict, where their deployment has directly led to harm including destruction of military property and casualties among personnel. This fits the definition of an AI Incident as the AI system's use has directly led to harm to persons and property. The article also discusses the development and deployment of these systems, confirming AI involvement and realized harm rather than just potential risk. Therefore, the classification is AI Incident.

Iranian and Chinese Drones Are Overwhelming American Warships -- But This High-Tech Weapon Could Zap Them Out of the Sky

2024-04-10
Popular Mechanics
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses autonomous or semi-autonomous drones used in hostile attacks, which are AI systems by definition. These drones have caused harm including injuries to U.S. soldiers and damage to vessels, fulfilling the criteria for harm to persons and property. The microwave weapon system is an AI-enabled countermeasure designed to neutralize these drones. The presence of AI systems in offensive drone swarms and their role in ongoing attacks constitutes an AI Incident. The article does not merely discuss potential harm or future risks but describes realized harms and ongoing conflict involving AI systems. Hence, the classification as AI Incident is appropriate.