U.S. Air Force Begins Ground Testing of AI-Powered YFQ-42A Stealth Combat Drone


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The U.S. Air Force and General Atomics have unveiled the YFQ-42A, an AI-enabled stealth unmanned combat aircraft, and commenced ground testing as part of the Collaborative Combat Aircraft program. Designed to fly alongside F-22s and F-35s, the drone marks a major advance in lethal autonomous capabilities with future battlefield risk. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the development and testing of an AI-enabled autonomous combat drone system (an AI System) that is described as "truly lethal" and intended for use in military operations. Although no harm has yet occurred, the nature of the system and its intended use in warfare could plausibly lead to significant harms, including injury or death, disruption of critical infrastructure, and harm to communities. The article does not report any actual incident or harm caused by the drone so far, but the potential for harm is clear and credible. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the development and deployment of lethal autonomous weapons systems. [AI generated]
AI principles
Safety
Robustness & digital security
Respect of human rights
Accountability
Transparency & explainability
Democracy & human autonomy
Human wellbeing

Industries
Government, security, and defence
Robots, sensors, and IT hardware
Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (death)
Physical (injury)
Public interest
Human or fundamental rights
Psychological

Severity
AI hazard

Business function
Research and development
Monitoring and quality control

AI system task
Recognition/object detection
Goal-driven organisation
Reasoning with knowledge structures/planning


Articles about this incident or hazard


Air Force 'Loyal Wingmen' YFQ-42A Drones Have Begun Ground Testing

2025-05-21
The National Interest

General Atomics begin ground testing on US Air Force's unmanned fighter

2025-05-22
DroneDJ
Why's our monitor labelling this an incident or hazard?
The YFQ-42A is an AI system (an unmanned fighter jet with autonomous capabilities) under development and testing. While it has significant potential for harm in military contexts, the article only reports on ground testing and development stages without any realized harm or incident. Therefore, it represents a plausible future risk (hazard) rather than an incident. The event fits the definition of an AI Hazard because the development and potential deployment of autonomous combat aircraft could plausibly lead to injury, disruption, or other harms in the future.

U.S. Air Force's YFQ-42A stealth fighter breaks cover at last

2025-05-20
Bulgarian Military Industry Review
Why's our monitor labelling this an incident or hazard?
The YFQ-42A is an AI system as it performs semi-autonomous tasks involving target recognition and weapons deployment. The article does not report any actual harm or incidents caused by the system yet, as it is still in testing and has not flown. However, the system is designed for combat roles where its AI capabilities could plausibly lead to injury, death, or other harms in future military operations. The article highlights the geopolitical tensions and the potential for the drone to be used in contested environments, which supports the plausibility of future harm. Since no realized harm is reported, this is not an AI Incident. The event is not merely complementary information because it focuses on the unveiling and description of a new AI-enabled military system with clear potential for harm. Therefore, the correct classification is AI Hazard.

YFQ-42A uncrewed test aircraft enters ground testing phase at General Atomics - Military Embedded Systems

2025-05-20
militaryembedded.com
Why's our monitor labelling this an incident or hazard?
The YFQ-42A is an AI-enabled uncrewed aircraft system undergoing ground testing, indicating AI system involvement. However, there is no mention of any harm, malfunction, or risk of harm caused or potentially caused by the AI system. The article is a factual update on the progress of the AI system's development and testing, without any indication of incidents or hazards. Therefore, it fits the category of Complementary Information, providing context and updates on AI system development without reporting harm or risk.

Pilotless Fighter Jets Are Coming, USAF Starts Testing Two of Them

2025-05-02
autoevolution
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of semi-autonomous fighter drones undergoing testing and development. No actual harm or incident has occurred yet, but the deployment of AI-enabled combat drones could plausibly lead to significant harms, including injury or violations of human rights, in future combat scenarios. Therefore, this event qualifies as an AI Hazard due to the credible potential for harm stemming from the AI system's intended use in military operations.

Air Force Begins Testing Loyal Wingman Drones to Fly Alongside NGAD

2025-05-02
The National Interest
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as the drones use AI and machine learning for autonomous operation in combat roles. The drones are currently in testing, so no direct harm has occurred yet, but their intended use in warfare and the possibility of being sacrificed in combat clearly imply plausible future harm to persons and military operations. This fits the definition of an AI Hazard, as the development and deployment of AI-enabled autonomous weapons could plausibly lead to incidents involving injury, disruption, or other significant harms. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the testing and potential impact of these AI systems.

Autonomous fighters will fly less often than crewed jets while ...

2025-05-01
Flight Global
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (semi-autonomous combat aircraft) under development by the US Air Force. Although no harm has yet occurred, the article highlights the intended operational deployment of these AI-enabled autonomous fighters, which could plausibly lead to significant harms such as injury, disruption, or other military-related damages. Therefore, this situation fits the definition of an AI Hazard, as it concerns the plausible future harm from the use of AI systems in autonomous combat jets. There is no indication of an actual incident or realized harm, nor is the article primarily about responses or complementary information, so AI Hazard is the appropriate classification.

DAF begins ground testing for Collaborative Combat Aircraft, selects Beale AFB as the preferred location for aircraft readiness unit

2025-05-01
GlobalSecurity.org
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions autonomous systems integration in combat aircraft, which qualifies as AI systems. The event is about ground testing and preparation for deployment, with no indication of harm or malfunction yet. However, the nature of the AI system—semi-autonomous combat aircraft—implies a credible risk of future harm, such as injury or disruption in military contexts. Since no harm has occurred yet but plausible future harm exists, this fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems with potential for harm.

US Air Force begins testing of uncrewed combat jets

2025-05-01
Defence Blog
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions autonomous integration and control interfaces, indicating the involvement of AI systems in the uncrewed combat jets. Although no harm has occurred yet, the nature of these AI systems—military semi-autonomous combat aircraft—carries inherent risks of injury, disruption, or other harms if deployed or malfunctioning. The event is about ground testing and preparation for flight trials, so no incident has happened, but the plausible future harm from these AI systems justifies classification as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are clearly involved, and it is not complementary information since it does not update or respond to a prior incident but reports on ongoing development with potential risks.

YFQ-42 and YFQ-44 Collaborative Combat Aircraft prototypes enter ground testing

2025-05-05
Janes.com
Why's our monitor labelling this an incident or hazard?
The YFQ-42A and YFQ-44A aircraft prototypes incorporate integrated autonomy, indicating AI system involvement. The event concerns their development and testing phase, with no reported incidents of harm yet. However, autonomous combat aircraft have a high potential for causing injury, disruption, or violations of human rights if deployed. The article highlights ongoing development and preparation for flight testing, implying plausible future harm. Hence, this qualifies as an AI Hazard rather than an Incident or Complementary Information.

USAF begins CCA ground testing, selects Beale AFB as the preferred location for aircraft readiness unit

2025-05-05
The Aviation Geek Club
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-driven autonomy integration in the CCA program, confirming AI system involvement. The event concerns the development and testing phase, with no reported incidents or harms to people, infrastructure, rights, or communities. However, the CCA's intended use as a semi-autonomous combat aircraft capable of lethal missions inherently carries a credible risk of future harm, including injury, violation of rights, or disruption in conflict scenarios. Since no harm has yet occurred but plausible future harm is credible, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the article.

USAF advances autonomous combat teaming with CCA

2025-05-02
Air Force Technology
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous combat aircraft with AI-driven autonomy features. The article details their development and testing but does not report any realized harm or incidents. However, given the military application and autonomous combat role, these systems could plausibly lead to harms such as injury, disruption, or violations of rights if deployed or malfunctioning. The article's focus on ground testing and preparation for flight trials indicates potential future risks rather than current incidents. Hence, the classification as an AI Hazard is appropriate.

USAF Tests Combat Aircraft, Chooses Beale AFB Base

2025-05-01
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely autonomous combat aircraft integrating AI for autonomy and control. The article focuses on development and testing phases without any reported harm or malfunction. Given the military application and autonomous nature, there is a credible risk that these systems could lead to harm in the future, such as injury, property damage, or violations of rights during combat operations. Since no harm has yet occurred, but plausible future harm exists, the classification as an AI Hazard is appropriate. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated as the article centers on the AI system's development and its implications.

US Begins Ground Tests for Collaborative Combat Aircraft Drones

2025-05-02
The Defense Post
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the autonomy integration of combat drones. No actual harm or incident is reported, but the nature of the AI system—autonomous combat drones—carries a credible risk of future harm, including injury or violations of human rights. The article focuses on development and testing, not on any realized harm or incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

USAF begins ground testing for Collaborative Combat Aircraft

2025-05-02
GameReactor
Why's our monitor labelling this an incident or hazard?
The YFQ-44A is part of the Collaborative Combat Aircraft program, which involves autonomous systems integration, implying the use of AI for autonomy. While the event concerns testing and development, no harm has yet occurred. However, the development of autonomous combat aircraft with AI capabilities could plausibly lead to future harms such as injury, disruption, or violations of rights if the systems are deployed or misused. Therefore, this event qualifies as an AI Hazard due to the credible risk associated with autonomous weapon systems under development.

Ground tests for CCA programme underway with flight tests to start mid-2025, says USAF

2025-05-01
Shephard Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions autonomy integration in military aircraft prototypes, indicating the involvement of AI systems. The event concerns the development and testing phase, with no reported harm yet. However, autonomous combat aircraft have a high potential for causing injury, disruption, or violations of human rights if misused or malfunctioning. Thus, the event plausibly leads to future AI incidents, qualifying it as an AI Hazard rather than an Incident or Complementary Information.

US Air Force's Autonomous UAV in Ground Tests

2025-05-02
RayHaber | RaillyNews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (semi-autonomous combat drones) whose development and testing are described. No actual harm or incident has occurred yet, so it is not an AI Incident. However, the nature of these autonomous weapon systems and their intended use in combat plausibly pose risks of harm (injury, violation of rights, disruption) in the future. The article focuses on the progress and plans for deployment, indicating a credible potential for future harm, which fits the definition of an AI Hazard. There is no indication that this is merely complementary information or unrelated news, as the AI system's development and potential for harm are central to the report.

GA-ASI, Anduril's Drones Under Air Force CCA Increment 1 Evaluation

2025-05-02
Executive Gov
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions autonomy integration in unmanned combat aircraft prototypes, indicating AI system involvement. No actual harm or incident is reported, but the nature of autonomous military drones implies a credible risk of future harm if these systems malfunction, are misused, or deployed in conflict. Hence, the event is best classified as an AI Hazard, reflecting plausible future harm from AI system development and use in military applications.

Air Force announcement describes purpose of new Beale mission

2025-05-02
Appeal-Democrat
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous combat aircraft with AI-driven capabilities. The article focuses on development and testing phases without any reported harm or malfunction. However, the nature of the AI system—autonomous military aircraft capable of lethal operations—implies a credible risk of future harm, including injury, disruption, or violations of human rights in conflict scenarios. Since no harm has yet occurred but plausible future harm is credible, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Air Force begins ground testing for CCA program

2025-05-01
DNyuz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions autonomy integration in the prototype drones, which qualifies as AI systems. The event is about ground testing during development, with no current harm reported. However, the intended use of these AI-enabled autonomous combat drones as missile carriers in military operations presents a credible risk of harm (injury, disruption, or other significant harms). Hence, it fits the definition of an AI Hazard, as the AI system's development and intended use could plausibly lead to an AI Incident in the future. There is no indication of realized harm or incident yet, so it is not an AI Incident. It is not merely complementary information or unrelated, as the AI system and plausible future harm are central to the event.

Air Force starts ground testing Anduril collaborative combat aircraft

2025-05-01
Defense News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of autonomous software enabling the drones to fly themselves with minimal pilot direction and perform combat-related missions such as airstrikes and electronic warfare. This clearly involves AI systems. Although the drones are still in testing and no harm has been reported, the nature of the system—semi-autonomous combat aircraft—implies a credible risk of future harm including injury or death, disruption, and violation of rights in warfare contexts. The event is about development and testing, not about an incident causing harm yet, so it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the start of testing a system with high potential for harm, not on responses or ecosystem context. Hence, the classification is AI Hazard.