
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Ukrainian company OSIRIS AI has developed the UEB-1 interceptor drone, which uses artificial intelligence for autonomous target prediction, tracking, and interception of high-speed aerial threats. Publicly demonstrated in Düsseldorf, the AI-enabled drone poses potential risks if deployed in military or security contexts, though no harm has yet occurred.[AI generated]
Why is our monitor labelling this an incident or hazard?
The drone is explicitly described as using artificial intelligence for target prediction and tracking, which qualifies it as an AI system. Its development and intended use for interception and potential combat roles indicate a credible risk of harm, such as damage to property or escalation of conflict, even though no harm has been reported. This fits the definition of an AI Hazard: the event could plausibly lead to an AI Incident involving harm arising from the AI system's use in military operations. Because the article reports no realised harm or incident, it is not an AI Incident; and because it centres on the development of a potentially harmful AI system, it is neither merely complementary information nor unrelated news.[AI generated]