US Awards Contract for AI-Enabled Attack Drones Tested in Gaza


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US Department of Defense awarded Israeli company XTEND Reality Inc. a multi-million-dollar contract to supply AI-enabled attack drone kits of a type previously tested on civilians in Gaza. These drones, featuring autonomous swarm and precision-strike capabilities, have been deployed in conflict zones, raising concerns over civilian harm and the use of AI in lethal operations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems integrated into lethal drone kits used in combat operations, which directly relate to injury or harm to persons (harm category a). The AI system's use in precision strikes and swarm control indicates autonomous or semi-autonomous decision-making capabilities that influence physical environments with lethal outcomes. The article states these systems are battle-proven and deployed, indicating realized harm or at least direct involvement in harm. Hence, this is not merely a potential hazard but an incident where AI systems have contributed to harm. The presence of AI, the nature of the harm, and the operational deployment confirm classification as an AI Incident.[AI generated]
AI principles
Accountability · Respect of human rights · Safety · Transparency & explainability · Democracy & human autonomy

Industries
Government, security, and defence · Robots, sensors, and IT hardware

Affected stakeholders
General public

Harm types
Physical (death) · Physical (injury) · Human or fundamental rights

Severity
AI incident

AI system task
Recognition/object detection · Goal-driven organisation


Articles about this incident or hazard


XTEND Secures Multi-Million-Dollar DoD Contract for AI-Enabled One-Way Attack Drone Systems

2025-11-11
DRONELIFE
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically AI-enabled autonomous drones with swarm capabilities designed for lethal military applications. The development and deployment of such AI-enabled autonomous weapons systems pose a credible risk of harm, including injury or death to persons, disruption in conflict zones, and broader human rights concerns. Although no specific harm has yet occurred as a result of this contract award, the AI system's intended use in lethal operations could plausibly lead to significant harm. Therefore, this event qualifies as an AI Hazard due to the credible potential for harm from the development and deployment of AI-enabled lethal autonomous weapons.

OASW awards XTEND contract for ACQME-DK FPV drone kits

2025-11-12
Air Force Technology
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems integrated into lethal drone kits used in combat operations, which directly relate to injury or harm to persons (harm category a). The AI system's use in precision strikes and swarm control indicates autonomous or semi-autonomous decision-making capabilities that influence physical environments with lethal outcomes. The article states these systems are battle-proven and deployed, indicating realized harm or at least direct involvement in harm. Hence, this is not merely a potential hazard but an incident where AI systems have contributed to harm. The presence of AI, the nature of the harm, and the operational deployment confirm classification as an AI Incident.

Israel's XTEND to Supply US With Attack Drone Kits

2025-11-12
The Defense Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled tactical drones capable of autonomous swarm operations and precision strikes, which are weaponized systems. The development and deployment of such AI systems in military contexts inherently carry risks of causing injury or death and other serious harms. Since the article does not report any actual incident or harm but focuses on the contract award and system capabilities, this event constitutes an AI Hazard due to the plausible future harm from the use of AI in autonomous attack drones.

US pays tens of millions for Israeli-made attack drones tested on civilians in Gaza

2025-11-12
Quds News Network
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled attack drones used in combat situations that have caused harm to civilians, including psychological harm and targeted killings. The involvement of AI in these drones' operation and their deployment in conflict zones where harm has occurred meets the definition of an AI Incident. The harm is direct and significant, involving injury and psychological harm to people, thus qualifying the event as an AI Incident rather than a hazard or complementary information.

XTEND Wins a Multi-Million-Dollar U.S. Department of War (DOW) Contract to Develop and Deliver AI-Enabled Affordable Close Quarter Modular One-Way Attack Drone (OWA) Kit

2025-11-11
CNHI News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled lethal drones with autonomous swarm capabilities designed for close-quarter combat and attack missions. The system's purpose is to deliver lethal effects remotely, which inherently involves risk of injury or death. While no actual harm is reported yet, the development and deployment of such AI-enabled autonomous weapons systems are a credible source of future harm. This fits the definition of an AI Hazard, as the event involves the development and use of AI systems that could plausibly lead to injury or harm to persons and communities. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated, as the focus is on the development and delivery of a potentially harmful AI system.

Israeli firm XTEND wins major U.S. defense contract for AI attack drones

2025-11-15
ABNA (Ahlul Bayt News Agency)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered attack drones used in lethal military operations, which have directly led to harm (e.g., assassination). The development, deployment, and use of these AI systems in combat clearly meet the criteria for an AI Incident, as the AI system's use has directly led to injury or harm to persons and harm to communities. The contract to produce and supply these drones further indicates ongoing and future use with similar harm potential. Hence, this is not merely a hazard or complementary information but an AI Incident.

The Pentagon contracts with the maker of the "drone" that killed Sinwar

2025-11-12
Sky News Arabia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as autonomously controlling attack drones used in lethal military operations, including the assassination of a militant leader. The AI system's use has directly led to harm to persons, fulfilling the criteria for an AI Incident. The article details the development, deployment, and operational use of these AI systems in warfare, which is a direct cause of injury or death, thus constituting an AI Incident rather than a hazard or complementary information.

The Pentagon contracts with the maker of the "drone" that killed Sinwar

2025-11-12
Lebanese Forces Official Website
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered attack drones developed and used in lethal military operations, including the killing of a Hamas leader. The AI system's role in autonomous targeting and coordinated drone swarms directly contributes to harm to persons, fulfilling the criteria for an AI Incident. The contract with the Pentagon to produce and deploy these systems further confirms ongoing use and potential for harm. Hence, this is not merely a hazard or complementary information but an AI Incident due to realized harm caused by the AI system's use.

Al-Markazia | The Pentagon contracts with the maker of the "drone" that killed Sinwar

2025-11-12
Al-Markazia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered autonomous attack drones developed by XTEND, which have been used in lethal military operations, including the killing of a Hamas leader. The AI system's use in targeting and executing strikes directly led to harm (death), fulfilling the criteria for an AI Incident. The contract with the Pentagon to further develop and deploy these systems confirms ongoing use and harm potential. Hence, this is not merely a hazard or complementary information but an incident involving realized harm caused by AI systems.

The Pentagon contracts with the maker of

2025-11-12
Beirut Times
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems in autonomous weaponry, specifically AI-powered attack drones capable of coordinated autonomous operations. The use of such systems in lethal military actions, including targeted killings, directly relates to potential harm to persons and communities. The deployment of AI-enabled offensive drones constitutes a significant AI Incident due to the direct involvement of AI in causing harm through military operations. The article reports actual use and deployment, not just potential or future risks, thus qualifying as an AI Incident rather than a hazard or complementary information.