US-China AI Drone Swarm Arms Race Raises Global Security Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

US and Chinese militaries are developing AI-powered drone swarms capable of autonomous, coordinated attacks, raising concerns about future warfare escalation and global instability. The technology's ease of proliferation could enable rogue actors to acquire lethal autonomous weapons, posing significant risks despite no reported incidents yet.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly (AI-equipped drone swarms) whose development and intended use in warfare pose a credible risk of causing significant harm (death, conflict, instability). Since the article discusses the current development and arms race without reporting actual harm yet, it fits the definition of an AI Hazard. The potential for these AI systems to cause injury, disruption, and harm to communities is clearly articulated and plausible, making this a classic example of an AI Hazard rather than an Incident or Complementary Information.[AI generated]
AI principles
Safety; Accountability; Robustness & digital security; Respect of human rights; Democracy & human autonomy; Transparency & explainability

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Digital security

Affected stakeholders
General public; Government

Harm types
Physical (death); Physical (injury); Public interest; Human or fundamental rights

Severity
AI hazard

Business function
Research and development; ICT management and information security

AI system task
Goal-driven organisation; Recognition/object detection; Reasoning with knowledge structures/planning


Articles about this incident or hazard

Deadlier Than Nukes? US, China Rush For "Inevitable" AI Drone Swarms To Prepare For "New" Warfare - News18

2024-04-13
News18
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI-equipped drone swarms) whose development and intended use in warfare pose a credible risk of causing significant harm (death, conflict, instability). Since the article discusses the current development and arms race without reporting actual harm yet, it fits the definition of an AI Hazard. The potential for these AI systems to cause injury, disruption, and harm to communities is clearly articulated and plausible, making this a classic example of an AI Hazard rather than an Incident or Complementary Information.
US-China competition to field military drone swarms could fuel global arms race

2024-04-12
The New Indian Express
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI-enabled drone swarms) and their development and intended use in military conflict. The article does not report any realized harm or incident but discusses the plausible future harms that could arise from the deployment and proliferation of these AI systems, including increased risk of warfare and destabilization. Therefore, this qualifies as an AI Hazard because it plausibly could lead to AI Incidents involving harm to people and communities, but no direct or indirect harm has yet materialized according to the article.
U.S.-China competition to field military drone swarms could fuel global arms race

2024-04-12
Portland Press Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems: AI-enabled drone swarms with autonomous capabilities. It focuses on the development and potential use of these systems in military conflict, which could plausibly lead to harms such as conflict escalation, instability, and violations of human rights. Although no direct harm has yet materialized, the credible risk of future harm from these AI systems is central to the article. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to communities and international security.
US-China competition to field military drone swarms could fuel global arms race

2024-04-12
The Columbian
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: AI-enabled drone swarms with autonomous coordination and decision-making capabilities. While no specific harm has yet occurred, the article clearly outlines the plausible future harm these AI systems could cause, including escalation of military conflict, harm to people, and destabilization of global security. The development and deployment of such AI military systems with lethal capabilities and autonomous functions constitute a credible AI Hazard due to the significant risk of future harm. There is no indication that an actual incident (realized harm) has occurred yet, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the risks and potential harms of these AI systems.
US-China competition to field military drone swarms could fuel global arms race

2024-04-12
Albuquerque Journal
Why's our monitor labelling this an incident or hazard?
The article discusses AI systems in the form of military drones equipped with AI for autonomous coordination and decision-making. The event concerns the development and potential use of these AI systems in warfare, which could plausibly lead to harms including injury, death, and geopolitical instability. Since the article does not report any realized harm or incident but focuses on the potential future risks of these AI-enabled drone swarms, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
US-China competition to field military drone swarms could fuel global arms race

2024-04-12
The Bakersfield Californian
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as military drones equipped with AI capable of autonomous coordination and decision-making. Although no incident of harm has been reported, the development and potential use of such AI-enabled weapon systems plausibly could lead to significant harms, including injury or death and geopolitical instability. Therefore, this qualifies as an AI Hazard due to the credible risk of future harm stemming from the AI systems' intended use in warfare.
US-China competition to field military drone swarms could fuel global arms race

2024-04-12
Daily Journal
Why's our monitor labelling this an incident or hazard?
The article discusses the intended use and development of AI systems in military drone swarms capable of autonomous coordinated action. These systems involve AI for real-time decision-making and autonomous mission adjustments. While no incident or harm has been reported, the potential for these AI-enabled weapons to cause injury, escalate conflicts, or disrupt critical infrastructure is credible and significant. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
US-China competition to field military drone swarms could fuel global arms race

2024-04-12
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The drones described are equipped with AI enabling autonomous or semi-autonomous coordinated behavior in military operations. This involves the use of AI systems in a context that could lead to injury or harm to people and disruption of critical infrastructure. Since the article focuses on the preparation and potential use of these AI-enabled drone swarms without reporting actual harm yet, it fits the definition of an AI Hazard, as the development and deployment of such systems could plausibly lead to AI incidents involving significant harm.
US and Chinese military planners prepare for a new kind of war - ExBulletin

2024-04-13
ExBulletin
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and potential use of AI-powered drone swarms in military conflicts, which are AI systems capable of autonomous or semi-autonomous coordinated actions. The discussion centers on the plausible future harms these systems could cause, including escalation of conflict, instability, and lethal outcomes. No actual incident of harm is reported, but the credible risk of such harm is emphasized, fitting the definition of an AI Hazard. The article also mentions efforts and challenges in governance and arms control, but the primary focus is on the potential for harm rather than responses or updates, so it is not Complementary Information.
US-China competition is gearing up for a new kind of warfare - Taipei Times

2024-04-19
Taipei Times
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in the form of autonomous or semi-autonomous drone swarms used for military purposes. It details their development, intended use, and the strategic competition between major powers. Although no actual incident of harm caused by these AI systems is reported, the article clearly outlines the plausible future harms such as increased global instability, conflict escalation, and proliferation risks. This fits the definition of an AI Hazard, where the AI system's development and intended use could plausibly lead to an AI Incident. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information since it focuses on the risk and potential consequences of these AI systems rather than updates or responses to past incidents. It is also not unrelated, as the AI system involvement and potential harms are central to the narrative.
Drone swarm warfare drives new arms race in US, China

2024-04-20
Napa Valley Register
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically AI-enabled drone swarms capable of autonomous coordinated behavior. The event is about the development and deployment of these systems and the associated risks of their use in warfare. While it discusses potential harms and risks (e.g., escalation of conflict, proliferation to rogue actors), no actual harm or incident has been reported as having occurred. Therefore, this qualifies as an AI Hazard, as the AI systems could plausibly lead to significant harm in the future, but no direct or indirect harm has yet materialized according to the article.
Drone swarm warfare drives new arms race in US, China

2024-04-20
Waterloo Cedar Falls Courier
Why's our monitor labelling this an incident or hazard?
The article discusses AI systems (drone swarms with autonomous coordination capabilities) being developed and deployed for military use. It does not report any realized harm or incident but emphasizes the credible risk that these AI-enabled weapons could lead to conflict, instability, and misuse globally. This fits the definition of an AI Hazard, as the development and potential use of these AI systems could plausibly lead to significant harms such as conflict escalation and violations of human rights. There is no indication of a current AI Incident or complementary information about responses or mitigation, nor is it unrelated to AI.
Drone swarm warfare drives new arms race in US, China

2024-04-20
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically AI-enabled drone swarms with autonomous capabilities. The event is about the development and deployment of these systems and the associated risks of their use in warfare. While it outlines credible and significant potential harms (e.g., conflict escalation, proliferation to rogue actors), no actual harm or incident has been reported yet. Therefore, this qualifies as an AI Hazard because the AI systems could plausibly lead to incidents of harm in the future, but no realized harm is described in the article.
Drone swarm warfare drives new arms race in US, China

2024-04-20
La Crosse Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems embedded in drone swarms capable of autonomous coordination and decision-making in military contexts. Although no direct harm or incident is reported, the article emphasizes the credible risk of these AI-enabled weapons leading to conflict escalation, proliferation to hostile actors, and instability globally. The development and deployment of such AI systems in military drones with autonomous capabilities fit the definition of an AI Hazard, as they could plausibly lead to AI Incidents involving harm to people, communities, and international security. There is no indication of a realized incident or complementary information about responses; thus, the classification is AI Hazard.