
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Anduril Industries' AI-driven drones and autonomous weapons have repeatedly malfunctioned during U.S. military tests and combat deployments, including in Ukraine. Failures include drone crashes, vulnerability to electronic warfare, loss of control of unmanned boats, and a fire caused by an anti-drone system, resulting in operational disruptions and property damage.[AI generated]
Why is our monitor labelling this an incident or hazard?
The drones described are AI systems: they perform autonomous flight, surveillance, and strike tasks. The reported crashes are malfunctions of those AI systems that directly led to loss of property and operational disruption, which fits the definition of harm. Because the article reports actual drone failures rather than merely potential risks, this is classified as an AI Incident rather than an AI hazard. The role of AI in the drones' autonomous functions, together with the crashes that damaged military assets and potentially undermined broader military effectiveness, justifies the AI Incident classification.[AI generated]