Weaponized Robot Dogs Raise AI Safety Concerns Amid Demonstrations and Security Flaws


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A YouTuber demonstrated a robot dog equipped with a gun in Uvalde, highlighting the dangers of weaponized AI robots. Separately, a hacker revealed a remote kill switch for such robot dogs, exposing security vulnerabilities. Both incidents underscore the potential risks and safety concerns of armed autonomous AI systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (weaponized autonomous robot dogs) being demonstrated and probed for security flaws. No actual harm occurred: the gun-equipped robot dog was only tested in a demonstration, and the hacker's discovery of a remote kill switch is a mitigation measure rather than an attack causing damage. However, arming autonomous robots and the exposed security vulnerabilities could plausibly lead to injury, death, or rights violations in the future. Therefore, this qualifies as an AI Hazard due to the plausible future risk of harm, but not an AI Incident or Complementary Information, since no harm or response to harm is reported.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability; Respect of human rights; Democracy & human autonomy; Human wellbeing

Industries
Robots, sensors, and IT hardware; Digital security; Government, security, and defence; Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (injury); Physical (death); Public interest; Psychological; Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard


Astronauts Will Now Be Able To Get Surgeries On Board The ISS - Thanks To A New Autonomous Robot

2022-08-07
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot) under development and testing. However, there is no indication that any harm has occurred or that the system has malfunctioned. The article focuses on the potential future use and current testing, which could plausibly lead to harm if malfunction occurs in the future, but no such incident has happened yet. Therefore, this qualifies as an AI Hazard due to the plausible future risk of harm from autonomous surgery in space, but not an AI Incident or Complementary Information since no harm or response to harm is reported.

iRobot CEO Colin Angle on Data Privacy and Robots in the Home

2022-08-09
IEEE Spectrum: Technology, Engineering, and Science News
Why's our monitor labelling this an incident or hazard?
The article does not describe any event where the AI system's development, use, or malfunction has directly or indirectly caused harm or violation of rights. It discusses potential privacy concerns and future capabilities but clarifies that no data is shared without consent and no harm has occurred. The content is primarily informative and contextual, focusing on company policies, technology capabilities, and future prospects. Therefore, it fits the definition of Complementary Information, as it provides supporting context and understanding about AI systems and privacy without reporting an incident or hazard.

Hacker Finds Kill Switch for Submachine Gun Wielding Robot Dog

2022-08-08
VICE
Why's our monitor labelling this an incident or hazard?
The robot dog is an AI system due to its autonomous robotic nature and weaponized capability. The hacker's discovery of a kill switch that can remotely disable the robot dog addresses a safety and security concern related to the AI system's use. Although no harm has occurred, the event highlights a plausible future risk associated with armed autonomous robots and a mitigation measure. Therefore, this qualifies as an AI Hazard because it plausibly relates to preventing or controlling potential harm from an AI system, but no actual harm or incident has been reported.

VIDEO: YouTuber tests a gun-strapped robot dog in Uvalde

2022-08-09
mySA
Why's our monitor labelling this an incident or hazard?
The robot dog equipped with a gun is an AI system used in a way that could plausibly lead to harm, such as injury or death, if deployed in real life. The video shows testing and demonstration but no actual harm occurred. The event highlights the potential dangers of weaponizing AI-enabled robots, especially in sensitive contexts like schools. Since no real harm happened, it is not an AI Incident. It is not Complementary Information because the main focus is the demonstration of a potentially dangerous AI application, not a response or update to a prior incident. It is not Unrelated because the AI system and its use are central to the event. Hence, AI Hazard is the appropriate classification.