Russian V2U AI-Powered Strike Drone Deployed in Ukraine


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Ukrainian intelligence reports that Russia is deploying the V2U attack drone, which uses artificial intelligence for autonomous target search and selection, in the Sumy region. The drone relies on computer vision and NVIDIA Jetson Orin chips, enabling lethal autonomous operations and raising concerns about AI-driven harm in active conflict.[AI generated]

Why's our monitor labelling this an incident or hazard?

The V2U drone is an AI system as it uses AI-based targeting and image recognition software for autonomous target selection. Its deployment in active conflict and use by Russian forces to select targets directly leads to harm (injury or death) to persons, fulfilling the criteria for an AI Incident. The article details the AI system's use, not just potential or future harm, so it is not a hazard. It is not merely complementary information since the focus is on the AI system's operational use causing harm. Therefore, this event qualifies as an AI Incident.[AI generated]
AI principles
Accountability, Safety, Respect of human rights, Robustness & digital security, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death), Physical (injury), Human or fundamental rights, Public interest

Severity
AI incident

Business function:
Other

AI system task:
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard


Ukraine finds NVIDIA, Sony parts in Russian AI-powered drone

2025-06-09
Defence Blog

Ukraine Unpacks 'Russian' AI Drone: Foreign Tech, Chinese Parts

2025-06-09
KyivPost
Why's our monitor labelling this an incident or hazard?
The article discusses AI-related technology, specifically AI guidance in drones, but it explicitly states that current drones do not use AI for control. There is no reported harm, or plausible imminent harm, caused by an AI system. The content mainly clarifies misinformation and provides technical analysis, which fits the definition of Complementary Information. The article describes neither direct nor indirect harm caused by AI, nor a credible risk of future harm from AI use. Hence, it is not an AI Incident or AI Hazard.

Russia's V2U drone uses AI for autonomous strikes in Ukraine's Sumy Oblast

2025-06-09
Euromaidan Press
Why's our monitor labelling this an incident or hazard?
The V2U drone employs AI for autonomous target selection and engagement, which directly leads to harm in the context of military strikes in Ukraine. The AI system's autonomous operation in lethal targeting constitutes direct involvement in causing injury or harm to persons, meeting the definition of an AI Incident. The disclosure of its technical specifications confirms the AI system's role in the harm caused by the drone's autonomous strikes.

Russia's new V2U AI drone hunts Ukraine's best weapons -- so far, it is unjammable

2025-06-11
Euromaidan Press
Why's our monitor labelling this an incident or hazard?
The V2U drone is an AI system capable of autonomous navigation and attack, as described in the article. Its deployment in active combat has directly led to destruction of Ukrainian military assets, which is harm to property and communities. The AI system's role is pivotal in enabling autonomous targeting and attack without human control, increasing the lethality and risk. This meets the criteria for an AI Incident because the AI system's use has directly led to harm in a real-world scenario.

Selects targets using AI: DIU publishes data on new Russian V2U strike UAV. PHOTO

2025-06-09
censor.net
Why's our monitor labelling this an incident or hazard?
The UAV uses AI for autonomous target selection, which is a direct use of AI in a military weapon system. The deployment of such a system in an active conflict zone implies direct harm to people and property. The AI system's role is pivotal in enabling autonomous targeting, which can lead to injury or death and destruction, fulfilling the criteria for an AI Incident. The article reports on the system's active use, not just potential or future risk, so it is not merely a hazard or complementary information.

Ukraine Reveals Russian AI-Driven V2U Strike Drone Details

2025-06-09
odessa-journal.com
Why's our monitor labelling this an incident or hazard?
The V2U drone is an AI system as it uses artificial intelligence for autonomous target search and selection. Its deployment in active military operations means its use has directly or indirectly led to harm (or at least the credible risk of harm) to persons and property, fulfilling the criteria for an AI Incident. The article reports on the drone's active use, not just potential or future use, indicating realized harm or imminent risk. Therefore, this event qualifies as an AI Incident.

Russian AI killer drones are powered with Nvidia chips, Ukrainian Intelligence says

2025-06-10
Cybernews
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the Russian V2U drone uses AI for autonomous target search and selection, which is a clear AI system involvement. The drone is actively deployed in a military conflict, which inherently involves harm to people and communities. The AI system's use in autonomous lethal operations directly leads to harm, fulfilling the criteria for an AI Incident. Although the article focuses on the components and supply chain, the key point is the AI-powered autonomous attack capability causing real harm in an ongoing conflict.