Israel Procures AI-Enabled Spike FireFly Loitering Munitions for Urban Combat

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Israeli Ministry of Defense has ordered Rafael's Spike FireFly loitering munitions, AI-enabled drones capable of autonomously searching for and attacking targets in urban environments. While no harm has yet occurred, the deployment of these autonomous weapons poses a credible risk of future injury or lethal incidents. [AI generated]

Why's our monitor labelling this an incident or hazard?

The Spike FireFly drones are AI systems: they autonomously loiter, search for targets, and attack by exploding on contact. The event concerns the purchase and deployment of these AI-enabled lethal autonomous weapons. Although no harm is reported, the deployment of such systems plausibly could lead to injury or death (harm to persons) and disruption in conflict zones. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm from autonomous lethal force. [AI generated]
AI principles
Safety, Accountability, Respect of human rights, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death), Physical (injury), Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

Israel Defense Ministry buys small exploding drones

2020-05-05
SpaceWar
IDF acquires Spike Firefly loitering munition

2020-05-04
Janes.com
Why's our monitor labelling this an incident or hazard?
The Spike FireFly is described as a loitering munition designed for tactical urban combat, implying AI involvement in autonomous targeting or decision-making. The article does not report any actual harm or incidents; it discusses the procurement following trials, indicating potential future use. Given the nature of AI-enabled autonomous weapons, their deployment plausibly could lead to harms such as injury or violations of human rights. Therefore, this event qualifies as an AI Hazard due to the credible risk of future harm from the use of this AI system in military operations.
Defense Ministry orders FireFly loitering munition for IDF ground forces

2020-05-04
The Jerusalem Post
Why's our monitor labelling this an incident or hazard?
The FireFly loitering munition is an AI system as it uses computer vision, target tracking, and homing algorithms to autonomously or semi-autonomously engage targets. The article does not report any actual harm or incident caused by the system yet, but the deployment of AI-enabled autonomous weapons in urban combat plausibly could lead to injury or death (harm to persons) and other harms. The event is about the procurement and intended use of the system, indicating a credible risk of future harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential harms are central to the event.
Israel Defense Ministry buys small exploding drones

2020-05-04
UPI
Why's our monitor labelling this an incident or hazard?
The Spike FireFly drones are AI systems, as they autonomously search for targets and execute attacks, which fits the definition of AI systems with autonomous decision-making capabilities. The article does not report any actual harm or incident caused by these drones, but their deployment in urban combat settings plausibly could lead to injury, death, or other harms. The event is about the procurement and intended use of these AI-enabled weapons, which constitutes a credible risk of future harm. Hence, it is classified as an AI Hazard rather than an AI Incident or Complementary Information.
Israel orders Rafael Spike FireFly for IDF ground forces

2020-05-04
Shephard Media
Why's our monitor labelling this an incident or hazard?
Spike FireFly is an AI system due to its use of homing algorithms, computer vision, and target tracking for autonomous targeting and engagement. The event concerns the procurement and intended use of this system in combat, which inherently carries a credible risk of harm to persons and communities. Although no incident has occurred yet, the nature of the system and its deployment plausibly could lead to AI incidents involving injury or harm in the future. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as it describes a credible potential for harm but no realized harm is reported.
Israel's Army awards contract to Rafael for FireFly loitering munition - Defence Blog

2020-05-07
EPeak World News
Why's our monitor labelling this an incident or hazard?
The FireFly loitering munition is an AI system due to its autonomous targeting and computer vision capabilities. Although no harm has been reported yet, the system's deployment in urban warfare and its ability to attack beyond line of sight targets indicate a credible potential for causing injury or harm to persons and communities. The article focuses on the system's capabilities and intended military use, which inherently carry risks of lethal harm. Since no incident has occurred but plausible future harm exists, this qualifies as an AI Hazard rather than an AI Incident.