China Unveils Autonomous AI-Powered Robotic Shark Drones for Military Use

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

China has unveiled and tested autonomous AI-powered robotic shark drones, developed by Boya Gongdao Robot Technology, for military applications including reconnaissance, search and rescue, and anti-submarine warfare. These drones, capable of lethal missions without human operators, pose significant risks of harm in future conflicts. The technology was showcased in Beijing.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes autonomous underwater drones with AI capabilities for reconnaissance and lethal missions, indicating the presence of AI systems. Although no direct harm has been reported yet, the intended use of these autonomous killer drones in warfare clearly presents a plausible risk of injury or harm to persons and communities. The development and potential deployment of such autonomous lethal weapons systems align with the definition of an AI Hazard, as they could plausibly lead to AI Incidents involving injury or harm in future conflicts. Therefore, this event is best classified as an AI Hazard rather than an AI Incident, Complementary Information, or Unrelated event.[AI generated]
AI principles
Accountability; Safety; Respect of human rights; Robustness & digital security; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Robots, sensors, and IT hardware

Affected stakeholders
General public

Harm types
Physical (death)

Severity
AI hazard

AI system task
Recognition/object detection


Articles about this incident or hazard

China unveils robotic shark drone which uses AI to fire torpedoes at enemy ships

2021-07-15
Mirror
Why's our monitor labelling this an incident or hazard?
The robotic shark drone is an AI system explicitly described as operating autonomously without an operator, performing reconnaissance and attack missions including firing torpedoes. Its deployment and testing with lethal intent directly link it to potential injury or harm to persons and to military conflict. The AI system's development and use in this context meet the criteria for an AI Incident because the article does not merely discuss potential future harm but indicates active deployment and testing, which constitutes realized harm or imminent risk. Hence, the classification is AI Incident.
China shows off fearsome new autonomous killer robot shark at weapons expo

2021-07-15
Daily Star
Why's our monitor labelling this an incident or hazard?
The article explicitly describes autonomous underwater drones with AI capabilities for reconnaissance and lethal missions, indicating the presence of AI systems. Although no direct harm has been reported yet, the intended use of these autonomous killer drones in warfare clearly presents a plausible risk of injury or harm to persons and communities. The development and potential deployment of such autonomous lethal weapons systems align with the definition of an AI Hazard, as they could plausibly lead to AI Incidents involving injury or harm in future conflicts. Therefore, this event is best classified as an AI Hazard rather than an AI Incident, Complementary Information, or Unrelated event.
Chinese military's Robo-Shark drone to prey on submarines

2021-07-15
The Sunday Times
Why's our monitor labelling this an incident or hazard?
The Robo-Shark drone is an AI system due to its autonomous underwater operation and reconnaissance capabilities. While it is intended for military use with potential for harm, the article only describes its development and unveiling without any realized harm or incidents. Therefore, it represents a plausible future risk (AI Hazard) rather than an AI Incident. The event highlights the potential for harm if such AI-enabled military drones are used in conflict, but no direct or indirect harm has yet occurred.
Chinese Military Shows Off Its Deadly, Robot Attack-And-Destroy Sharks

2021-07-16
Gistmaster
Why's our monitor labelling this an incident or hazard?
The robotic drone sharks are autonomous AI systems designed for military use with lethal capabilities. Their development and testing, especially in a tense geopolitical context like the Taiwan Strait, plausibly could lead to harm such as injury, death, or escalation of conflict. Since the article does not report any actual harm caused by these systems yet, but highlights their potential and intended use in warfare, it fits the definition of an AI Hazard. The AI system involvement is explicit, the nature of involvement is development and intended use, and the plausible future harm is credible and significant. Hence, AI Hazard is the appropriate classification.