US Army Explores Weaponizing AI-Enabled Robot Dogs

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US Army is experimenting with mounting advanced rifles on AI-enabled quadruped robots, such as Ghost Robotics' Vision 60, to enhance combat capabilities. While these weaponized robot dogs are not yet deployed, their development raises significant ethical and safety concerns about potential future harm from autonomous or semi-autonomous armed AI systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the development and use of AI systems integrated into robotic dogs that can autonomously select and engage targets with mounted firearms. This clearly fits the definition of an AI system with autonomous lethal capabilities. Although no harm has yet occurred, the plausible future use of such AI-enabled weaponized robots in combat could lead to injury or harm to persons, constituting an AI Hazard. The article does not report any actual harm or incidents but highlights the credible risk and ethical concerns associated with this technology's development and potential deployment.[AI generated]
AI principles
Safety
Robustness & digital security
Accountability
Respect of human rights
Transparency & explainability
Democracy & human autonomy
Human wellbeing

Industries
Government, security, and defence
Robots, sensors, and IT hardware
Digital security
Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (death)
Physical (injury)
Human or fundamental rights
Public interest
Psychological
Reputational

Severity
AI hazard

AI system task
Recognition/object detection
Goal-driven organisation
Reasoning with knowledge structures/planning


Articles about this incident or hazard

Gamifying War: US Army plans to mount assault rifles, LMGs on AI-enabled robot dogs

2023-09-01
Firstpost
US Army Brags About Plans to Mount Rifle on Robot Dog

2023-08-31
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (robotic quadrupeds with autonomous navigation capabilities) being developed and experimented with for weaponization. Although no harm has yet occurred, the use of armed AI-enabled robots in combat could plausibly lead to injury or harm to persons, making it a credible future risk. The article explicitly states these are experiments and demonstrations, not yet formal service-wide programs, so no realized harm exists yet. Hence, it fits the definition of an AI Hazard rather than an AI Incident.
The Army Wants to Slap a Next Generation Squad Weapon on a Robot Dog

2023-08-28
Military
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of semi-autonomous or autonomous robot dogs equipped with weapons. Although no harm has yet occurred, the development and potential deployment of these armed robots could plausibly lead to AI incidents involving physical harm, ethical violations, and other serious consequences. The article explicitly mentions concerns from robotics companies about risks and ethical issues, as well as the military's ongoing exploration of these capabilities. Since the harm is potential rather than realized, the classification is AI Hazard rather than AI Incident.
Quadruped robot dogs get XM7 guns before GIs, says US Army

2023-08-30
theregister.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of quadruped robots with advanced mobility and remote-control capabilities, which are being developed to carry weapons. Although the robots currently require human operators to fire and are not autonomous weapons, the development and potential future deployment of armed robot dogs could plausibly lead to harms such as injury or violations of human rights. Since no harm has yet occurred and the article focuses on the concept and demonstrations rather than an actual incident, this qualifies as an AI Hazard rather than an AI Incident. The article also includes contextual information about societal concerns, but its primary focus is the potential future risk posed by these AI-enabled armed robots.
US Army Mulls Outfitting Robot Dogs With Next-Gen Squad Weapon

2023-08-30
The Defense Post
Why's our monitor labelling this an incident or hazard?
The Vision 60 Quadruped Unmanned Ground Vehicles are autonomous or semi-autonomous systems that can navigate complex terrain, indicating AI system involvement. The article discusses the potential weaponization of these AI-enabled robots, which raises credible concerns about future harm, including injury and ethical violations. Since no harm has yet occurred and the deployment is still under consideration, this event is best classified as an AI Hazard rather than an AI Incident. The article also references societal and governance responses, but the main focus is on the potential risks of weaponizing AI robots, supporting the AI Hazard classification.