AI-Powered Autonomous Weapon Tested by U.S. Military

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Allen Control Systems, led by Steven Simoni, has developed the Bullfrog, an AI-powered autonomous machine gun designed to shoot down drones. The system, which has been tested and partially deployed with the U.S. military, demonstrates the use of AI in lethal autonomous weapons, raising concerns about potential malfunctions and risks of harm. [AI generated]

Why's our monitor labelling this an incident or hazard?

The Bullfrog is an AI-powered autonomous weapon system designed to shoot down drones, which are battlefield threats. Its testing and partial deployment with the U.S. military indicate active use of AI in lethal autonomous systems. The article reports actual use cases in which drones were shot down, as well as instances of malfunction (gun jamming), which could lead to harm. The AI system's role in enabling autonomous lethal force directly implicates it in potential injury or death, fulfilling the criteria for an AI Incident. The article does not merely discuss potential future risks but describes ongoing development, testing, and partial deployment, with real-world implications for harm in military operations. [AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Respect of human rights; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury)

Severity
AI incident

AI system task
Recognition/object detection; Goal-driven organisation
Articles about this incident or hazard

How a Silicon Valley 'warlord' got the Pentagon's attention

2025-10-01
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the Bullfrog autonomous machine gun) developed and tested with U.S. Army contracts. The system autonomously targets and shoots drones, which involves AI decision-making in lethal force application. While no actual harm or malfunction causing injury or damage is reported, the nature of the system and its intended use clearly pose a plausible risk of harm, fulfilling the criteria for an AI Hazard. The article does not describe any realized harm or incident resulting from the AI system's use, so it cannot be classified as an AI Incident. It is not merely complementary information because the focus is on the AI system's development and potential risks rather than responses or ecosystem context. Therefore, the appropriate classification is AI Hazard.

DoorDash To Defence Tech: Silicon Valley 'Warlord' Gets Government Attention

2025-10-01
NDTV

From DoorDash to drone defence: How a Silicon Valley 'warlord' got the Pentagon to take his AI machine gun seriously

2025-10-02
Malay Mail
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-powered autonomous machine gun system that has been tested and is being integrated into military platforms. The system autonomously targets and shoots drones, a direct application of AI in a lethal context. A malfunction (gun jam) during testing further indicates the AI system was in operation and poses real risks. The involvement of the AI system in a weapon capable of causing injury or death meets the criteria for harm to persons or communities. Hence, this is an AI Incident rather than a hazard or complementary information, as harm is directly linked to the AI system's use and malfunction.