Australian Army Trials AI-Enabled Mind-Controlled Robot Dogs

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Australian Army has demonstrated and trialed AI-powered robot dogs controlled by soldiers' brain signals via brain-computer interfaces and augmented reality headsets. While no harm has occurred, the technology's military application and potential for weaponization present a credible risk of future harm or misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes an AI system (the augmented brain-robot interface) that decodes soldiers' brain signals to control robot dogs, which are robotic systems likely equipped with AI for autonomous navigation and task execution. The system is in active development and demonstrated in simulations but not reported to have caused any harm yet. Given the military application and the nature of robot dogs potentially equipped with weapons (as referenced in the related article about sniper rifles on robot dogs), there is a credible risk that such AI-enabled systems could lead to injury, disruption, or other harms in real combat. Thus, the event is best classified as an AI Hazard, reflecting plausible future harm from the AI system's use.[AI generated]
AI principles
Safety; Respect of human rights; Democracy & human autonomy; Privacy & data governance

Industries
Government, security, and defence; Robots, sensors, and IT hardware

Affected stakeholders
General public

Harm types
Physical (injury); Physical (death); Human or fundamental rights; Public interest

Severity
AI hazard

Business function
Research and development

AI system task
Other


Articles about this incident or hazard

Soldiers Can Now Steer Robot Dogs With Brain Signals

2023-03-23
Nextgov
Why's our monitor labelling this an incident or hazard?
The article details a novel AI-enabled system involving brain signal processing and robotic control, which is currently in experimental stages with successful demonstrations. There is no indication that any harm has occurred or that the system malfunctioned. The event focuses on the development and potential future applications of the technology rather than any realized or imminent harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides context and updates on AI-related technological advancements and their possible future uses in defense.
Army "Telepathically" Controls Robot Dogs In Eerie Video

2023-03-24
IFLScience
Why's our monitor labelling this an incident or hazard?
The article details a military demonstration of an AI system that uses brain signals to control robot dogs. While the technology involves AI and autonomous systems, there is no mention of any harm or incident resulting from its use. The event is a technological showcase and exploration of capabilities, not an incident or hazard with realized or imminent harm. Therefore, it is best classified as Complementary Information, providing context on AI developments in military robotics without reporting an AI Incident or AI Hazard.
Killer robot dogs controlled by soldiers' MINDS are set for war

2023-03-23
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (the augmented brain-robot interface) that decodes soldiers' brain signals to control robot dogs, which are robotic systems likely equipped with AI for autonomous navigation and task execution. The system is in active development and demonstrated in simulations but not reported to have caused any harm yet. Given the military application and the nature of robot dogs potentially equipped with weapons (as referenced in the related article about sniper rifles on robot dogs), there is a credible risk that such AI-enabled systems could lead to injury, disruption, or other harms in real combat. Thus, the event is best classified as an AI Hazard, reflecting plausible future harm from the AI system's use.
Canine sentinel: Australian army conducts trials of mind-controlled 'robot dogs' in training exercise

2023-03-27
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI decoder interpreting brain signals to control robot dogs) used in a military context with armed robots. Although the article does not report any actual harm or incidents, the use of AI-enabled armed robot dogs controlled via BCI in combat plausibly could lead to injury, harm to people, or disruption in conflict scenarios. The development and testing of such systems represent a credible risk of future harm, fitting the definition of an AI Hazard. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the trial of a potentially hazardous AI system.
Killer robot dogs that are controlled by soldiers' MINDS are trialed by Australian army

2023-03-24
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (an AI-decoder translating brain signals to control a robot dog) being developed and trialed by the Australian Army. There is no indication that harm has yet occurred, but the technology's nature—mind-controlled robotic dogs potentially used in combat—implies a credible risk of future harm such as injury or violations of rights. The event is about the development and testing phase, with no realized harm reported, so it does not meet the criteria for an AI Incident. It is not merely complementary information because the focus is on the demonstration of a new AI-enabled military capability with inherent risks. Hence, it is best classified as an AI Hazard.
Making mind-controlled robots a reality

2023-03-20
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of a brain-computer interface that interprets brain signals to control robotic devices. However, the article does not report any harm or injury resulting from the use or malfunction of this AI system. There is no indication of realized harm or violation of rights. While the technology could plausibly lead to future harms if misused, the article focuses on the development and demonstration of the system without highlighting any risks or incidents. Therefore, this event is best classified as Complementary Information, as it provides context and updates on AI system development and potential applications without describing an incident or hazard.