Shield AI's Hivemind AI Successfully Pilots Military Drones and Jets in Autonomous Flight Tests


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Shield AI's Hivemind AI pilot has autonomously flown and controlled multiple military aircraft, including the Kratos MQM-178 Firejet and F-16, in successful test flights. While no harm has occurred, the development of fully autonomous, weaponizable drones and jets raises credible risks of future incidents involving injury or rights violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses an AI system (Hivemind) that autonomously pilots military drones and simulated fighter jets, indicating clear AI system involvement. Although the AI has not yet caused any direct harm, its integration into lethal military platforms capable of autonomous operation without human oversight presents a credible risk of significant harm in the future. The AI's ability to make independent decisions in combat scenarios could plausibly lead to injury, loss of life, or violations of human rights. Since no actual harm has occurred yet, but the potential is significant and credible, the event fits the definition of an AI Hazard rather than an AI Incident.[AI generated]
AI principles
Safety; Respect of human rights

Industries
Government, security, and defence; Robots, sensors, and IT hardware

Affected stakeholders
General public

Severity
AI hazard

Business function:
Research and development

AI system task:
Goal-driven organisation


Articles about this incident or hazard


AI Is Training to Fly Drones and Fighter Jets Because Why Not

2024-03-29
autoevolution

Shield AI Conducts AI-Piloted Flights on Sixth Aircraft, the Kratos MQM-178 Firejet

2024-03-29
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Shield AI's Hivemind) piloting military drones, which is explicitly described. The AI system's use is in development and testing phases for autonomous military aircraft, which have significant potential for harm if deployed operationally. No actual harm or incident is reported; the flights are successful tests. Given the nature of AI-piloted military drones and their offensive capabilities, there is a plausible risk of future harm, such as injury or violations of rights, if these systems are used in combat or other operations. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future. It is not Complementary Information because the article is not primarily about responses or updates to past incidents, nor is it unrelated as it clearly involves AI systems with potential harm.

Shield AI Conducts AI-Piloted Flights on Sixth Aircraft, the Kratos MQM-178 Firejet

2024-03-29
sUAS News - The Business of Drones
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Hivemind AI pilot) autonomously flying military drones and jets, which qualifies as an AI system. The event is about successful testing and integration, with no reported harm or malfunction. However, the nature of the AI system's application in military autonomous aircraft implies a credible risk of future harm (e.g., injury, disruption, or violations of rights) if deployed operationally. Hence, it fits the definition of an AI Hazard. It is not an AI Incident because no harm has occurred yet, and it is not Complementary Information because the article does not focus on responses or updates to prior incidents. It is not unrelated because the AI system and its potential impacts are central to the event.

Shield AI Conducts AI-Piloted Flights on Sixth Aircraft, the Kratos MQM-178 Firejet

2024-03-29
IT News Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Shield AI's Hivemind AI pilot) controlling unmanned military aircraft, which qualifies as an AI system. The event concerns the development and testing phase, with no reported harm or malfunction causing injury, rights violations, or other harms. Given the military context and autonomous weaponized systems, there is a credible risk that such AI-piloted aircraft could lead to harms in the future, such as injury, disruption, or violations of human rights. Since no actual harm has occurred yet, but plausible future harm exists, the event is best classified as an AI Hazard.

Shield AI and Kratos Defense Reach Milestone in AI-Piloted Flight Trials for XQ-58 Valkyrie

2024-03-29
Travel And Tour World
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Shield AI's Hivemind AI pilot) used in autonomous military aircraft, which qualifies as AI system involvement. The article reports successful trials but no realized harm or incidents. Given the nature of autonomous military drones capable of offensive operations, there is a plausible risk that their use or malfunction could lead to injury, disruption, or other harms in the future. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system's development and use are central to the event.

Kratos and Shield AI Conduct AI-Piloted Flights on the Kratos Tactical Firejet

2024-04-01
Investing.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Shield AI's Hivemind) piloting military unmanned aerial systems, which qualifies as an AI system. The event is about successful testing and integration, with no reported harm or malfunction causing injury or other harms. However, the nature of the AI system—autonomous military drones capable of offensive and defensive operations—implies plausible future harm, such as injury, disruption, or violations of rights, if misused or malfunctioning. Since no actual harm has occurred yet, but the potential is credible and inherent, this fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses, governance, or updates to past incidents, so it is not Complementary Information. It is not unrelated because it clearly involves AI systems and their use in a context with plausible harm.

Combat Drone Flying With Two F-35s Attacks Targets to Prove It's a "Collaborative Killer"

2024-04-02
autoevolution
Why's our monitor labelling this an incident or hazard?
The XQ-58A Valkyrie is an AI-enabled autonomous combat drone capable of conducting electronic attacks and coordinating with manned aircraft. Its autonomous detection, targeting, and attack functions qualify it as an AI system. The event involves the use of this AI system in a military test setting. Although no harm occurred during the test, the nature of the system and its offensive capabilities imply a credible risk of future harm if deployed operationally. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to injury, disruption, or other harms through autonomous lethal actions. There is no indication of realized harm or incident in this test, so it is not an AI Incident. It is not merely complementary information because the focus is on the demonstration of autonomous attack capabilities with potential for harm, not on responses or governance.

Kratos and Shield AI Conduct AI-Piloted Flights on the Kratos Tactical Firejet

2024-04-01
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Shield AI's Hivemind AI pilot) controlling unmanned military aircraft. The event concerns the development and successful testing of this AI system. No direct or indirect harm is reported, so it is not an AI Incident. However, the nature of AI-piloted military drones implies a credible risk of future harm (injury, disruption, or rights violations) if deployed in combat or operational contexts. Hence, this event fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to an AI Incident in the future.

Kratos demonstrates Valkyrie drone teamed with two F-35s

2024-04-03
Air Force Technology
Why's our monitor labelling this an incident or hazard?
The Valkyrie drone is an AI system with autonomous capabilities in detection, targeting, and electronic attack, used in a military context. The event describes a successful test flight demonstrating these capabilities but does not report any harm or incident resulting from its use. Given the autonomous lethal and electronic attack functions, the system's development and deployment plausibly pose future risks of harm, qualifying this as an AI Hazard. There is no evidence of realized harm or incident, so AI Incident is not appropriate. The article is not primarily about responses or updates to prior events, so Complementary Information is not suitable. The event is clearly related to AI systems, so it is not Unrelated.

Valkyrie Tactical Drone Demos Electronic Warfare Attack for US Marines

2024-04-03
The Defense Post
Why's our monitor labelling this an incident or hazard?
The XQ-58A Valkyrie drone is an AI-enabled autonomous system performing complex electronic warfare operations without human intervention. While the demonstration was successful and no harm is reported, the deployment and use of autonomous drones with offensive capabilities pose plausible risks of harm, including injury, disruption, or violations of rights if used in conflict. Since no actual harm or incident is reported but the system's capabilities could plausibly lead to harm in future military operations, this event qualifies as an AI Hazard.

Kratos Demonstrates XQ-58A Electronic Warfare Capabilities for United States Marine Corps

2024-04-02
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
The XQ-58A Valkyrie UAV is described as an autonomous system with advanced electronic warfare payloads capable of detecting and attacking targets without human intervention, which clearly involves AI systems. Although the article reports a successful test without any harm or malfunction, the nature of the system—an autonomous combat UAV with electronic attack capabilities—implies a credible risk of future harm if deployed in conflict or misused. There is no indication of realized harm or incident in this demonstration, so it does not meet the criteria for an AI Incident. The article is not merely complementary information since it focuses on the demonstration of a system with potential for harm rather than updates or responses to past incidents. Hence, the classification as an AI Hazard is appropriate.

AI Technology Achieves New Heights with Successful Flight of Kratos MQM-178 Firejet

2024-04-02
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Hivemind AI pilot) actively controlling unmanned aircraft in military training and operational scenarios. While the article does not report any harm or malfunction resulting from the AI system's use, the deployment of AI pilots on military drones capable of offensive and defensive roles inherently carries plausible risks of harm, including injury, disruption, or other significant harms if misused or malfunctioning. Therefore, this event represents an AI Hazard, as the development and use of AI-controlled military aircraft could plausibly lead to AI Incidents involving harm or violations of rights in the future. There is no indication of realized harm or incident in the article, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's operational deployment with potential for harm.

AI-piloted drone completes test for Kratos, Shield AI - Military Embedded Systems

2024-04-02
militaryembedded.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Shield AI's Hivemind) piloting a military drone capable of autonomous air combat training and offensive/defensive operations. While no harm has been reported yet, the development and testing of such AI-enabled autonomous weapon systems plausibly could lead to significant harms, including injury or loss of life, disruption of critical infrastructure, or violations of human rights if misused or malfunctioning. Therefore, this event constitutes an AI Hazard due to the credible risk posed by the AI system's intended military applications.