
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Scout AI, a Sunnyvale-based defense tech startup, raised $100 million to accelerate development of Fury, an AI foundation model for unmanned warfare. The system aims to enable autonomous military operations across air, land, sea, and space, presenting significant risks of harm due to its intended use in lethal contexts.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the development of an AI system (Fury) intended for autonomous military operations, which fits the definition of an AI system. Unmanned warfare inherently carries risks of harm to people, property, and communities. Although no actual harm or incident has been reported, the system's intended use in autonomous strike missions could plausibly lead to AI incidents involving injury, death, or other significant harms. The event therefore qualifies as an AI Hazard: it describes the development and deployment of an AI system with credible potential to cause harm, where no harm has yet occurred or been reported.[AI generated]