US Army Tests AI-Enabled Autonomous Strike Drone in Military Exercise

Northrop Grumman's Lumberjack drone, featuring AI-enabled autonomous targeting and precision strike capabilities, was tested by the US Army's 101st Airborne Division during Operation Lethal Eagle. The demonstration showcased the drone's ability to conduct missions with limited human input, highlighting potential future risks associated with autonomous weapon systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems integrated into the Lumberjack drone for autonomous targeting and surveillance, confirming AI system involvement. The event concerns the use and development of this AI system in a military context. Although no harm occurred during the tests, the drone's capabilities imply a credible risk of injury or harm in future deployments. Because the event describes no realized harm but highlights a plausible future risk from AI-enabled autonomous weapons, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
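
The reasoning above applies a consistent decision rule: confirm that an AI system is central to the event, check for realized harm, check whether the item merely updates a prior incident, and otherwise ask whether future harm is plausible. The sketch below restates that rule as code. It is a minimal illustration, not the monitor's actual implementation; the Event fields and the fallback branch are assumptions.

```python
# Minimal sketch of the triage rule described above. This is NOT the
# monitor's actual implementation: the Event fields and the fallback
# branch are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Event:
    involves_ai_system: bool      # an AI system is central to the event
    realized_harm: bool           # injury, rights violation, or disruption occurred
    updates_prior_incident: bool  # follow-up to an already-recorded incident
    plausible_future_harm: bool   # credible risk of harm in later deployments

def classify(event: Event) -> str:
    """Map an event to a category using the rule stated in the reasoning."""
    if not event.involves_ai_system:
        return "Unrelated"
    if event.realized_harm:
        return "AI Incident"
    if event.updates_prior_incident:
        return "Complementary Information"
    if event.plausible_future_harm:
        return "AI Hazard"
    return "Complementary Information"  # assumed fallback for context-only items

# The Lumberjack demonstration: AI involved, no realized harm, credible
# future risk, not a follow-up to a prior incident -> "AI Hazard".
lumberjack = Event(involves_ai_system=True, realized_harm=False,
                   updates_prior_incident=False, plausible_future_harm=True)
print(classify(lumberjack))  # AI Hazard
```

The per-article verdicts listed below walk through the same checks.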
AI principles
Accountability; Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death); Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard

Lumberjack One-Way Attack Drone Bombs the Enemy and Sticks Around to See What Happened

2026-04-02
autoevolution
Northrop's Lumberjack Drone Demonstrates Integrated Autonomous Targeting and Precision Strike with Surveillance

2026-04-02
Army Recognition
Why's our monitor labelling this an incident or hazard?
The Lumberjack drone is an AI system with autonomous targeting and strike capabilities, which could plausibly lead to harms such as injury, violation of rights, or disruption in military contexts if deployed. However, the article only reports a demonstration with simulated effects and no actual harm. It therefore fits the definition of an AI Hazard: it plausibly could lead to an AI Incident in the future but has not yet caused harm. The event is not Complementary Information, because it is not an update or response to a prior incident, nor is it Unrelated, since it clearly involves an AI system with potential for harm.
Northrop Grumman tests Lumberjack strike drone

2026-03-31
Defence Blog
Why's our monitor labelling this an incident or hazard?
The event involves a system explicitly described as using AI for autonomous mission control and adaptive targeting. It is a weaponized autonomous drone capable of precision strikes, which inherently carries risks of harm to persons and communities if deployed in conflict. Although no harm or incident is reported in this demonstration exercise, the nature of the system and its intended use could plausibly lead to AI Incidents involving injury, violations of rights, or other harms. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not Unrelated, because the AI system and its potential impacts are central to the event.
Northrop Grumman's Lumberjack Advances Battlefield Capabilities

2026-03-31
NEWS | Northrop Grumman
Why's our monitor labelling this an incident or hazard?
The Lumberjack system is an AI-enabled autonomous weapon platform, which qualifies as an AI system. Its development and demonstration in a military exercise suggest potential for future harm, especially given its combat role and expendable nature. However, since the article only reports on a demonstration and does not describe any harm or incidents caused by the system, it does not meet the criteria for an AI Incident. The potential for harm in the future is plausible given the system's nature, but the article does not explicitly frame this as a hazard or risk event. Therefore, the article is best classified as Complementary Information, providing context on AI system development and military applications without reporting an incident or hazard.
Army tests autonomous strike drone featuring AI-enabled targeting capabilities

2026-04-02
DefenseScoop
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system integrated into an autonomous strike drone used in military exercises. The AI system's role in autonomous targeting and strike capabilities is central. No actual harm or incident is reported; the event is a test and demonstration. Given the autonomous lethal nature of the system, there is a credible risk that such technology could lead to injury or harm in future operational use. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future. It is not Complementary Information because it is not an update or response to a prior incident, nor is it Unrelated since it involves an AI system with potential for harm.