
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Chinese military engineers have developed AI systems for autonomous drones and robotic weapons that mimic the behavior of animals such as hawks and doves to enhance combat effectiveness. These AI-controlled swarms, operating with minimal human oversight, are being actively tested and deployed, raising concerns about autonomous lethal decision-making and the potential for harm in warfare.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used by the Chinese military in autonomous weapons and drone swarms, designed to carry out lethal operations with minimal human input. This clearly involves the development and use of AI systems. The potential and actual deployment of such systems in warfare inherently involves harm to persons and communities, fulfilling the criteria for an AI Incident. Although some of the described systems are still at the development or procurement stage, the article indicates that certain AI-enabled weapons have already been demonstrated and are being actively pursued for battlefield use, implying realized or imminent harm. This is therefore not merely a hazard or complementary information but an AI Incident, given the direct link to harm through autonomous lethal military applications.[AI generated]