Boeing and Shield AI Partner to Develop Autonomous AI Pilots for Military Use


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Boeing and Shield AI have signed an agreement to collaborate on integrating Shield AI’s Hivemind, an autonomous AI pilot system, into current and future military aircraft and drone swarms. This partnership aims to advance AI-driven autonomous defense capabilities, raising concerns about potential future risks from autonomous military systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The collaboration focuses on AI pilots capable of autonomously operating aircraft and drone swarms, which are AI systems with significant potential for harm if misused or malfunctioning. Although no harm has yet occurred, the development and potential deployment of such autonomous AI systems in defense contexts could plausibly lead to incidents involving harm to persons, disruption, or violations of rights. This event therefore represents an AI Hazard due to the credible risk posed by the development and future use of these AI-enabled autonomous weapons systems.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Transparency & explainability; Respect of human rights; Democracy & human autonomy; Human wellbeing

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Mobility and autonomous vehicles; Digital security

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest

Severity
AI hazard

Business function
Research and development

AI system task
Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard


Boeing and Shield AI announce collaboration - Intelligence Community News

2023-03-10
Intelligence Community News
Why's our monitor labelling this an incident or hazard?
The collaboration focuses on AI pilots capable of autonomously operating aircraft and drone swarms, which are AI systems with significant potential for harm if misused or malfunctioning. Although no harm has yet occurred, the development and potential deployment of such autonomous AI systems in defense contexts could plausibly lead to incidents involving harm to persons, disruption, or violations of rights. This event therefore represents an AI Hazard due to the credible risk posed by the development and future use of these AI-enabled autonomous weapons systems.

Boeing, Shield AI to collaborate on artificial intelligence, autonomy for defense programs

2023-03-08
english.news.cn
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems (AI pilots and autonomous capabilities) in military defense applications. While no harm has yet occurred, the nature of these AI systems, autonomous AI pilots capable of operating aircraft and drone swarms in combat scenarios, presents a credible risk of future harm, such as injury, disruption, or violations of rights, if deployed or misused. This collaboration therefore represents an AI Hazard due to the plausible future risks associated with autonomous AI-enabled military systems.

Boeing and AI startup join forces to work on self-flying military planes

2023-03-09
MyBroadband
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Hivemind) used for autonomous piloting of military aircraft, which fits the definition of an AI system. The event concerns the development and use of this AI technology in defense applications. However, there is no mention of any harm, malfunction, or misuse that has occurred or is occurring. The article discusses the potential and ongoing deployment but does not describe any incident or hazard event. Therefore, this is best classified as Complementary Information, as it provides context on AI development and governance in military applications without reporting a specific AI Incident or AI Hazard.

Boeing Joins Defense Startup to Develop AI-Piloted Aircraft - BNN Bloomberg

2023-03-09
BNN
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (Hivemind) with significant military applications, including autonomous piloting of aircraft in combat zones. While the article does not report any realized harm or incidents caused by the AI system, the nature of the AI system and its deployment in military contexts with autonomous capabilities plausibly pose risks of harm, such as injury, disruption, or violations of rights, if misused or malfunctioning. Therefore, this event represents a credible AI Hazard due to the plausible future harm from the use of autonomous AI pilots in military aircraft. There is no indication of actual harm or incident yet, so it is not an AI Incident. It is more than just complementary information because it highlights the potential risks and strategic implications of deploying such AI technology in defense.

Boeing, Shield AI Set to Collaborate on Artificial Intelligence, Autonomy for Defense Programs

2023-03-09
Aviation Pros
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (AI pilots, autonomous drones) being developed for defense programs, which are likely to have significant implications for military operations. While no harm is reported as having occurred yet, the development and potential deployment of autonomous AI pilots and drone swarms in military contexts could plausibly cause harm, including injury, disruption, or violations of rights, if misused or malfunctioning. This event therefore represents an AI Hazard due to the credible potential for future harm stemming from these AI-enabled autonomous military systems.

Boeing, Shield AI partner to explore solutions for defence needs

2023-03-09
Air Force Technology
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (AI pilots capable of autonomous flight) being developed and integrated for military use. While no harm has yet occurred, the development and potential deployment of autonomous AI pilots in military aircraft could plausibly lead to AI incidents involving harm to people, disruption, or violations of rights due to the nature of autonomous weapons and defense systems. Therefore, this event qualifies as an AI Hazard because it involves the development and use of AI systems with credible potential for future harm, but no actual harm is reported yet.

Boeing, Shield AI Collaborate on Large AI-Piloted Aircraft

2023-03-09
The Defense Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and intended use of an AI system (Hivemind) that enables autonomous operation of military aircraft in combat scenarios. This involves the development and use of an AI system with potential for significant harm (injury, disruption, violations of rights) in warfare. Although no incident of harm is reported, the nature of the AI system and its military application could plausibly lead to future AI Incidents. It therefore qualifies as an AI Hazard rather than an AI Incident or Complementary Information.