
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Airbus successfully tested AI-driven autonomous control of multiple drones during aerial refueling, and China tested an AI-enabled hypersonic glide vehicle, raising security concerns in the United States. Although no harm has occurred, these developments highlight the potential for future AI-related military incidents and strategic instability.[AI generated]
Why is our monitor labelling this as an incident or hazard?
The article explicitly mentions the use of AI and cooperative control algorithms to autonomously guide and control multiple drones during air-to-air refueling tests. Although the event was a successful test and no harm or malfunction was reported, the development and use of such autonomous AI systems in military aviation could plausibly lead to harms such as accidents, operational failures, or misuse in future deployments. This event therefore qualifies as an AI Hazard: it could plausibly lead to an AI Incident in the future, but no actual harm has occurred yet.[AI generated]