US Air Force Demonstrates AI-Enabled Manned-Unmanned Combat Teaming


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

General Atomics and the US Air Force conducted a demonstration at Edwards Air Force Base, California, where an AI-enabled MQ-20 Avenger drone autonomously coordinated with a manned F-22 fighter. The exercise showcased advanced autonomy, including independent decision-making and tactical maneuvers, highlighting potential future risks of autonomous military systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems in the form of autonomy software controlling an armed drone in coordination with a manned fighter jet. While the event is a demonstration without reported harm, the nature of the AI system—autonomous control of lethal military drones—carries credible risks of future harm such as injury or violations of rights if deployed operationally. The article focuses on the technical and doctrinal development of AI-enabled combat autonomy, which fits the definition of an AI Hazard: an event where AI system development and use could plausibly lead to harm. There is no evidence of realized harm or malfunction, so it is not an AI Incident. It is not merely complementary information because the main focus is the demonstration of AI autonomy with potential future risks, not a response or update to a prior incident. Therefore, the correct classification is AI Hazard.[AI generated]
AI principles
Accountability, Safety

Industries
Government, security, and defence

Severity
AI hazard

AI system task
Reasoning with knowledge structures/planning


Articles about this incident or hazard


U.S. Air Force Pairs F-22 Fighter With MQ-20 Combat Drone in Live Manned-Unmanned Combat Drill

2026-02-23
Army Recognition
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of autonomy software controlling an armed drone in coordination with a manned fighter jet. While the event is a demonstration without reported harm, the nature of the AI system—autonomous control of lethal military drones—carries credible risks of future harm such as injury or violations of rights if deployed operationally. The article focuses on the technical and doctrinal development of AI-enabled combat autonomy, which fits the definition of an AI Hazard: an event where AI system development and use could plausibly lead to harm. There is no evidence of realized harm or malfunction, so it is not an AI Incident. It is not merely complementary information because the main focus is the demonstration of AI autonomy with potential future risks, not a response or update to a prior incident. Therefore, the correct classification is AI Hazard.

A US Air Force F-22 Raptor just showed off how it might work with a loyal wingman-type drone in a future air war

2026-02-23
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomy software on drones) used in military operations, which fits the definition of an AI system. The use is in development and testing phases, with no reported harm or incident. However, autonomous combat drones have a credible potential to cause harm in future conflicts, such as injury, disruption, or violations of rights, making this a plausible AI Hazard. Since no harm has yet occurred, and the article focuses on demonstration and future capabilities, it does not qualify as an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the event. Hence, the classification is AI Hazard.

GA-ASI and USAF Demonstrate Manned-Unmanned Aircraft Collaboration With the F-22 and MQ-20 in a Joint Autonomy Exercise

2026-02-23
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, as the MQ-20 and F-22 use advanced autonomous software to coordinate and execute tactical maneuvers. The AI system's development and use are central to the event. However, there is no indication of any injury, damage, rights violation, or other harm occurring or having occurred. The article focuses on the demonstration and potential of AI-enabled collaboration between manned and unmanned aircraft, without any mention of malfunction or misuse leading to harm. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on AI system capabilities and military applications.

The Air Force's Newest Pilot Takes Orders (for Now?) but Isn't Human

2026-02-23
PJ Media
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous drone with onboard sensors and decision-making) used in a military setting for combat tasks. While no harm or incident has occurred yet, the autonomous lethal capabilities and the intended use in combat operations present a credible risk of future harm, including injury or death, disruption of military operations, or other harms. The article focuses on a test mission and the development of these AI-powered systems, highlighting their potential future deployment and risks. There is no indication of realized harm or incident, so it is not an AI Incident. It is not merely complementary information because the main focus is on the AI system's capabilities and potential risks, not on responses or governance. Hence, the classification is AI Hazard.

IRW-News: ACCESS Newswire: GA-ASI and USAF Demonstrate Manned-Unmanned Aircraft Collaboration With the F-22 and MQ-20 in a Joint Autonomy Exercise

2026-02-23
Börse Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in autonomous military aircraft coordination and decision-making, fulfilling the AI system involvement criterion. However, the event is a demonstration exercise with no reported harm or malfunction, so it does not meet the criteria for an AI Incident. Although the technology has potential risks, the article does not emphasize plausible future harm or risks that would qualify it as an AI Hazard. The main focus is on showcasing capabilities and integration, which is informative about AI developments but does not report harm or credible risk. Thus, the event is best classified as Complementary Information, providing context on AI system use and advancement in military applications without direct or plausible harm.

F-22 Raptor, MQ-20 drone complete manned-unmanned flight exercise

2026-02-23
Military Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI autonomy software in an unmanned drone performing tactical military tasks under human command. This qualifies as an AI system involved in the event. The event is a demonstration of AI use, not a malfunction or misuse causing harm. No direct or indirect harm is reported. However, the deployment of autonomous drones in combat roles plausibly carries risks of harm in the future, such as accidents, unintended engagements, or escalation of conflict. Thus, the event represents a credible AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI systems are central to the event.

IRW-PRESS: ACCESS Newswire: GA-ASI and USAF Demonstrate Manned-Unmanned Aircraft Collaboration With the F-22 and MQ-20 in a Joint Autonomy Exercise

2026-02-23
Boersen-Zeitung der WM Gruppe Herausgebergemeinschaft Wertpapier-Mitteilungen, Keppler, Lehmann GmbH & Co. KG (WM Gruppe)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely autonomous software enabling unmanned military aircraft to operate collaboratively with manned jets. Although no harm or incident occurred during the demonstration, the development and deployment of such autonomous military systems inherently carry plausible risks of future harm, such as misuse, accidents, or escalation in conflict scenarios. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to AI incidents involving harm in the future, even though no direct harm has yet occurred.

An MQ-20 Drone Just Teamed Up with an F-22 for Mock Combat Missions

2026-02-23
Aviation Pros
Why's our monitor labelling this an incident or hazard?
The MQ-20 drone is an AI system with autonomous decision-making capabilities used in military mock combat missions. While no harm or incident has occurred during these tests, the development and integration of autonomous combat drones represent a credible risk of future harm, including injury or violations of human rights in warfare. The article focuses on the demonstration and development of these AI-enabled systems, highlighting their potential use in combat. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

GA-ASI and USAF Demonstrate Manned-Unmanned Teaming With F-22 and MQ-20 In Joint Autonomy Exercise

2026-02-23
sUAS News - The Business of Drones
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI autonomy software in a military exercise, confirming the presence of an AI system. However, there is no indication of any harm, malfunction, or misuse resulting from this event. The demonstration is successful and intended to showcase capabilities, not to report an incident or hazard. Since no harm has occurred or is plausibly imminent from this specific event, it does not meet the criteria for AI Incident or AI Hazard. Instead, it provides valuable information about AI system development and military integration, fitting the definition of Complementary Information.

F-22 Fighter, MQ-20 Avenger UAS Demonstrate Manned-Unmanned Teaming - Defense Daily

2026-02-23
Defense Daily
Why's our monitor labelling this an incident or hazard?
The MQ-20 Avenger UAS's autonomous flight and maneuvering based on commands from the F-22 implies the use of AI systems for autonomous navigation and control. However, the article only reports a demonstration without any indication of harm, malfunction, or misuse. There is no mention of injury, disruption, rights violations, or other harms. The event shows the development and use of AI-enabled autonomous systems with potential military applications, which could plausibly lead to future harms if misused or malfunctioning, but no harm has occurred yet. Accordingly, the event is classified as an AI Hazard rather than an AI Incident.

GA-ASI and USAF Demonstrate Manned-Unmanned Aircraft Collaboration With the F-22 and MQ-20 in a Joint Autonomy Exercise

2026-02-23
firmenpresse.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as the MQ-20 Avenger and the F-22 use autonomous software to coordinate and execute missions. However, there is no indication of any injury, rights violation, disruption, or other harm caused or occurring due to the AI systems' use or malfunction. The article presents a demonstration of AI capabilities without any reported incident or plausible immediate harm. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it provides complementary information about AI development and military applications, contributing to understanding the AI ecosystem and its evolution.

GA-ASI and USAF Demonstrate Manned-Unmanned Teaming With F-22 and MQ-20 In Joint Autonomy Exercise

2026-02-23
Northern Ireland News
Why's our monitor labelling this an incident or hazard?
The article details a successful demonstration of AI-enabled autonomous systems working in coordination with human pilots, highlighting technological capabilities and potential military applications. There is no mention or implication of injury, rights violations, disruption, or any harm caused or likely to be caused by the AI systems. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on AI system development and integration in the defense sector.

GA-ASI and USAF Demonstrate Manned-Unmanned Aircraft Collaboration With the F-22 and MQ-20 in a Joint Autonomy Exercise

2026-02-23
handelsmeldungen.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based autonomous systems in military drones coordinating with manned aircraft. Although no harm or incident occurred during the demonstration, the nature of the AI system's development and use in autonomous combat air patrols implies a credible risk of future harm, such as injury or disruption in military contexts. The event does not describe any realized harm or malfunction, so it is not an AI Incident. It is not merely complementary information because the focus is on the demonstration of autonomous AI capabilities with inherent risks. Hence, it fits the definition of an AI Hazard.

Fighter Jet With Drone Escort: Air Force Tests F-22 With "Loyal Wingman"

2026-02-25
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomy software enabling drones to perform missions and make decisions) being tested in a military context. However, the article does not report any actual harm or incident resulting from the use or malfunction of these AI systems. Instead, it describes a demonstration and development phase aimed at enhancing combat capabilities. While the deployment of autonomous combat drones carries potential risks and hazards, this specific event is a test and demonstration without any realized harm or incident. Therefore, it qualifies as an AI Hazard because the development and use of these AI-enabled drones could plausibly lead to future AI incidents involving harm in warfare, but no harm has yet occurred in this described event.

RaillyNews - F-22 Raptor and MQ-20 Avenger Conduct Joint Exercise

2026-02-24
RayHaber | RaillyNews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous drones with machine learning and sensor fusion) actively used in a military context, fulfilling the AI System criterion. However, the article solely reports a successful demonstration without any harm or malfunction. Since no direct or indirect harm has occurred, and no plausible future harm is explicitly indicated, this event does not qualify as an AI Incident or AI Hazard. The article provides complementary information about AI advancements and military integration, fitting the definition of Complementary Information as it enhances understanding of AI's evolving role in defense without reporting harm or risk.