US and Japan Collaborate on AI-Powered Autonomous Combat Drones

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US Air Force launched its Collaborative Combat Aircraft (CCA) program, awarding Boeing, Lockheed Martin, Northrop Grumman, General Atomics, and Anduril contracts to develop autonomous 'wingman' drones that would fly alongside manned fighters. Air Force Secretary Frank Kendall plans to include Japan in the project. The AI-driven drones pose future risks inherent to autonomous weapon systems. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the development and use of AI systems (autonomous aircraft) with military applications. Although no incident or harm has yet occurred, the nature of the AI system and its intended use in combat drones presents a credible risk of future harm. Therefore, this qualifies as an AI Hazard due to the plausible potential for harm stemming from the AI system's development and deployment.[AI generated]
AI principles
Accountability; Safety; Respect of human rights; Robustness & digital security; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Digital security

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest; Psychological

Severity
AI hazard

Business function
Research and development

AI system task
Recognition/object detection; Reasoning with knowledge structures/planning; Goal-driven organisation


Articles about this incident or hazard

U.S. military eyes Japan's participation in autonomous aircraft program

2024-01-25
Nikkei Asia
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems (autonomous aircraft) with military applications. Although no incident or harm has yet occurred, the nature of the AI system and its intended use in combat drones presents a credible risk of future harm. Therefore, this qualifies as an AI Hazard due to the plausible potential for harm stemming from the AI system's development and deployment.

Anduril Is Helping The Air Force To Develop Its Loyal Wingman Drone

2024-01-25
The Drive
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled autonomous drones being developed for military use, which fits the definition of an AI system. Although no harm has yet occurred, the nature of these systems—autonomous combat drones—presents a plausible risk of future harm (injury, disruption, or violations of rights) if deployed in conflict. The article focuses on the development and contracting process rather than any incident or harm, so it is not an AI Incident. It is not merely complementary information because the main focus is on the development of potentially hazardous AI systems rather than responses or updates to past events. Hence, the classification as AI Hazard is appropriate.

Japan 'Hot Partner' To Join 6th-Gen NGAD Fighter Jet's Loyal Wingman 'CCA' Program - USAF Secretary

2024-01-26
The EurAsian Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous combat drones (Collaborative Combat Aircraft) that are being developed and planned for future deployment. The involvement is in the development and intended use of these AI systems. No direct or indirect harm has yet occurred, but the article clearly outlines the plausible future risks associated with deploying AI-driven autonomous military drones, including escalation of conflict and strategic military risks. This fits the definition of an AI Hazard, as the event describes circumstances where AI system development and use could plausibly lead to significant harm, but no incident has yet materialized. The article does not focus on responses, mitigation, or updates to past incidents, so it is not Complementary Information. It is not unrelated, as AI systems are central to the described event.

Anduril Chosen for Fighter-Like Drone Development

2024-01-26
OCBJ
Why's our monitor labelling this an incident or hazard?
The event involves the development of AI-enabled autonomous drones with potential military applications, specifically as robotic wingmen in combat scenarios. While no harm has yet occurred, the nature of these autonomous combat drones carries a credible risk of future harm, including injury, disruption, or violations of rights due to their autonomous weapon capabilities. Therefore, this event represents an AI Hazard, as the development and deployment of such AI systems could plausibly lead to significant harms in the future.

US to Include Japan in Future Drone Wingman Project

2024-01-29
The Defense Post
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of AI-enabled autonomous combat drones, which are military AI systems with significant potential for harm. Although no incident or harm has occurred yet, the project could plausibly lead to AI incidents involving injury, disruption, or other harms due to the nature of autonomous weapon systems. Therefore, this event qualifies as an AI Hazard under the framework, as it describes credible future risks stemming from the development and deployment of AI-powered military drones.

US Air Force enlists top five contenders for CCA wingmen programme: here are the details

2024-01-26
Shephard Media
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous wingmen platforms) under development for military use, which inherently carry risks of harm if deployed. Since no harm has yet occurred and the event concerns the awarding of contracts and development, it constitutes a plausible future risk rather than an actual incident. Therefore, this qualifies as an AI Hazard due to the credible potential for harm from autonomous combat AI systems.

US and Japan Eye Joint Development of AI-Driven Combat Drones

2024-01-27
TS2 SPACE
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems (autonomous combat drones) with potential military applications. However, no actual harm, malfunction, or incident has occurred yet. The article focuses on the collaboration and strategic planning stage, indicating a plausible future risk of harm due to the nature of autonomous weapons development. Therefore, this qualifies as an AI Hazard, as the development and deployment of such AI-driven combat drones could plausibly lead to harms such as injury, disruption, or violations of rights in the future.

Japan and the US to Collaborate on Next-Gen Autonomous Combat Drones

2024-01-27
TS2 SPACE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-driven autonomous drones being developed for combat purposes, which qualifies as AI systems. However, the event describes a planned collaboration and development effort without any realized harm or malfunction. Given the nature of autonomous combat drones and their potential for causing harm if deployed or misused, this development plausibly could lead to AI incidents in the future. Therefore, it fits the definition of an AI Hazard, as it involves the development and intended use of AI systems that could plausibly lead to harm, but no harm has yet occurred or been reported.

Air Force Selects 5 Companies to Build Autonomous Collaborative Combat Aircraft

2024-01-29
ExecutiveBiz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions autonomous unmanned aerial vehicles designed for combat support roles, which involve AI systems for autonomous operation and coordination. Although no incident or harm has been reported yet, the nature of these AI systems—autonomous combat drones—carries credible risks of future harm, such as unintended engagements, escalation of conflict, or misuse. Therefore, this event qualifies as an AI Hazard due to the plausible future harms associated with the development and deployment of autonomous weapon systems.