AI-Enabled Military Systems Spark Strategic Concerns and Future Risks


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Airbus successfully tested AI-driven autonomous control of multiple drones for aerial refueling, and China tested an AI-enabled hypersonic glide vehicle, raising US security concerns. While no harm has occurred, these developments highlight the potential for future AI-related military incidents and strategic instability.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI and cooperative control algorithms to autonomously guide and control multiple drones during air-to-air refueling tests. While the event is a successful test and no harm or malfunction is reported, the development and use of such autonomous AI systems in military aviation have plausible potential to lead to harms such as accidents, operational failures, or misuse in future deployments. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident in the future, but no actual harm has occurred yet.[AI generated]
AI principles
Accountability, Safety, Transparency & explainability, Democracy & human autonomy, Respect of human rights, Robustness & digital security

Industries
Government, security, and defence

Severity
AI hazard

AI system task
Reasoning with knowledge structures/planning


Articles about this incident or hazard


Airbus's A310 MRTT tanker can now autonomously control multiple drones - cnBeta.COM

2023-03-29
cnBeta.COM

Defense industry weekly report: Countries advance military AI applications; US proposes 8860...

2023-03-28
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes military AI as a transformative, strategic technology carrying risks and a lack of transparency, but it does not report any specific event in which AI use or malfunction caused harm or disruption. The US defense budget increase includes R&D funding, but no direct, indirect, or plausible future harm from AI systems is described. Because the discussion concerns ongoing development and investment prospects, it fits the category of Complementary Information: context and updates on AI in military applications, without an incident or hazard.

China's test of a "fractional orbital vehicle" ignites US public opinion; top US military officials alarmed: how to defend against it

2023-03-28
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI-enabled or autonomous hypersonic weapon system (a fractional orbital hypersonic glide vehicle) whose development and testing have triggered strategic concerns and defense planning by the US military. Although no attack or direct harm has occurred, the article indicates a credible potential for significant harm to national security and the military balance, making this a plausible future risk. It therefore qualifies as an AI Hazard: the system's development and use could plausibly lead to an AI Incident involving harm to critical infrastructure or national security, but no actual harm has yet materialized.