Shield AI's V-BAT drones empower Ukraine’s frontline operations

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Shield AI has opened a Kyiv office with on-the-ground engineers and mission operators to train Ukraine's Unmanned Systems Forces on its autonomous V-BAT drones. Resistant to Russian jamming, these AI-powered VTOL UAS perform ISR, deep-penetration targeting, and kamikaze strikes in GPS-denied environments, directly enabling lethal operations against Russian forces.[AI generated]

Why's our monitor labelling this an incident or hazard?

The MQ-35A V-BAT drones are AI systems capable of autonomous operation, including target acquisition and engagement. Their deployment in Ukraine's military strategy has resulted in direct harm to human soldiers, as indicated by their use against Russian missile systems and the mention of troops bearing the brunt of new AI-enabled kamikaze drones. This constitutes an AI Incident because the AI system's use has directly led to harm to persons in a conflict zone, fulfilling the criteria for injury or harm to people. The event is not merely a product announcement or potential risk but describes realized harm through AI-enabled military operations.[AI generated]
AI principles
Accountability, Human wellbeing, Respect of human rights, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Government

Harm types
Physical (death)

Severity
AI incident

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

America's killer drone that hammered Russia in Ukraine now heading towards China's neighbourhood, its name is..., capable of...

2025-01-24
India.com
Why's our monitor labelling this an incident or hazard?
The V-BAT drone is an AI system used in military operations with autonomous capabilities. Its deployment in conflict zones and export to Japan for maritime ISR missions indicate its use in potentially high-risk military contexts. While the article does not report any specific harm or incident caused by the drone, the nature of the AI system and its military application imply a plausible risk of harm, including injury, disruption, or escalation of conflict. Therefore, this event represents an AI Hazard due to the credible potential for the AI system's use to lead to harm in the future, especially given its role in contested environments and military operations.
Shield AI V-BAT Selected as Japan Maritime Self-Defense Force's First Maritime ISR Platform

2025-01-22
Taiwan News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the V-BAT autonomous drone) whose development and use are described. Although no harm or incident is reported, the deployment of AI-powered autonomous military drones plausibly could lead to harms such as injury, disruption, or violations of rights in future military operations. The article does not report any actual harm or malfunction, so it is not an AI Incident. It is not Complementary Information because it is not updating or responding to a prior incident or hazard, nor is it unrelated since it clearly involves an AI system with potential for harm. Therefore, the classification as an AI Hazard is appropriate.
"Game-Changer" In Ukraine's Military Strategy, Shield AI, Manufacturer Of MQ-35A V-BAT Drone Opens Kyiv Office

2025-01-21
Latest Asian, Middle-East, EurAsian, Indian News
Why's our monitor labelling this an incident or hazard?
The MQ-35A V-BAT drones are AI systems capable of autonomous operation, including target acquisition and engagement. Their deployment in Ukraine's military strategy has resulted in direct harm to human soldiers, as indicated by their use against Russian missile systems and the mention of troops bearing the brunt of new AI-enabled kamikaze drones. This constitutes an AI Incident because the AI system's use has directly led to harm to persons in a conflict zone, fulfilling the criteria for injury or harm to people. The event is not merely a product announcement or potential risk but describes realized harm through AI-enabled military operations.
Cutting Edge U.S. Drones That Left Russia 'Bruised & Battered' In Ukraine War Set To Operate In Indo-Pacific

2025-01-24
Latest Asian, Middle-East, EurAsian, Indian News
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous drones with AI pilots capable of complex decision-making and coordination in military contexts. The event concerns the sale and planned deployment of these drones to the Japanese navy, highlighting their use in contested environments and potential for autonomous lethal operations. While no direct harm or incident is reported in this new deployment, the nature of the AI system and its intended use in military operations imply a plausible risk of harm, including injury, disruption, or violations of rights. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated.
Shield AI V-BAT Selected as Japan Maritime Self-Defense Force's First Maritime ISR Platform

2025-01-22
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an autonomous drone with AI-powered autonomy software used for military ISR and strategic targeting. Although no harm or incident is reported, the deployment of such AI-enabled autonomous weapons systems plausibly could lead to harms such as injury, disruption, or violations of rights in future operations. The article focuses on the acquisition and operational deployment of the AI system, not on any realized harm or incident. Hence, it fits the definition of an AI Hazard, reflecting plausible future harm from the use of AI in autonomous military drones.
Shield AI V-BAT Selected as Japan Maritime Self-Defense Force's First Maritime ISR Platform

2025-01-22
Australian Associated Press
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the V-BAT autonomous drone) explicitly described as AI-powered and used for military ISR missions. However, the article focuses on the announcement of its selection and deployment, without any mention of harm, malfunction, misuse, or risks that could plausibly lead to harm. There is no indication of injury, rights violations, infrastructure disruption, or other harms. The article serves to inform about the adoption of an AI system in defense, which is relevant to the AI ecosystem and governance but does not constitute an incident or hazard. Hence, it fits the definition of Complementary Information.
Ukrainians Train on American Drone That Defeats Russian Jamming

2025-01-20
The Defense Post
Why's our monitor labelling this an incident or hazard?
The V-BAT drone is an AI system as it performs autonomous navigation and complex mission tasks such as reconnaissance and targeting, which involve AI decision-making. Its use in active combat operations has directly led to harm by enabling strikes on enemy targets, which involves injury or harm to persons and damage to property. The article explicitly describes the drone's operational deployment and its role in military targeting, indicating realized harm rather than potential harm. Therefore, this event meets the criteria for an AI Incident due to the direct involvement of an AI system in causing harm in a conflict context.
Shield AI begins training Ukraine's unmanned systems forces with V-BAT drone

2025-01-20
militaryembedded.com
Why's our monitor labelling this an incident or hazard?
The V-BAT drone is an AI system as it is an autonomous unmanned aerial vehicle capable of operating in complex environments with electronic warfare and GPS-denied conditions, implying sophisticated AI for navigation and mission execution. The article focuses on training and deployment preparation, with no current harm reported. However, given the military context and the drone's capabilities for targeting enemy forces, the use of such AI systems could plausibly lead to injury or harm to persons, qualifying this as an AI Hazard. There is no indication of an actual incident or harm yet, so it is not an AI Incident. It is not merely complementary information because the focus is on the deployment and training of a potentially harmful AI system in an active conflict zone, which implies plausible future harm.
Japan Maritime Self-Defense Force selects V-BAT UAS for ISR missions

2025-01-22
Naval News
Why's our monitor labelling this an incident or hazard?
The V-BAT UAS is an AI-powered autonomous system, so an AI system is involved. However, the article only announces its selection and deployment for ISR missions by the JMSDF, highlighting its capabilities and strategic importance. There is no mention of any harm, malfunction, misuse, or potential risk leading to harm. The article does not describe any AI Incident or AI Hazard but rather provides information about the adoption of an AI system in defense, which fits the definition of Complementary Information as it enhances understanding of AI deployment in military contexts without reporting harm or risk.
Amazon suspends U.S. drone deliveries following crash at testing facility

2025-01-17
TechCrunch
Why's our monitor labelling this an incident or hazard?
The drones are AI systems as they perform autonomous navigation and decision-making for deliveries. The crashes are malfunctions of these AI systems, and the suspension of operations indicates a direct response to these malfunctions. Although no injury or property damage is explicitly reported, the crashes at testing and commercial sites imply a risk of harm and disruption. Since the AI system's malfunction has directly led to operational disruption and potential safety risks, this qualifies as an AI Incident.
Russia's unjammable drones are causing chaos. A tech firm says it has a fix to help Ukraine fight back.

2025-01-14
Yahoo
Why's our monitor labelling this an incident or hazard?
The fiber-optic drones are AI systems or at least AI-enabled systems used in warfare to conduct precision strikes, causing harm to military personnel and equipment, which fits the definition of harm to persons and property. The article explicitly states that these drones are causing chaos and are a real problem on the battlefield. The Ukrainian company's detection system, while still in testing, is a response to this harm but does not negate the fact that harm is occurring due to the drones' use. Hence, this is an AI Incident due to the realized harm caused by the AI-enabled drones in active conflict.
Startups race to build bigger, better drones to fight bigger, hotter wildfires

2025-01-15
Yahoo
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous drones) used in wildfire fighting, which is a clear AI system involvement. However, no direct or indirect harm resulting from these AI systems is reported. The mention of a drone collision with a firefighting plane is noted but lacks an explicit connection to AI malfunction or harm caused by AI. The main narrative centers on the development, testing, and regulatory challenges of these AI systems, indicating potential future impact rather than current incidents. Therefore, the event is best classified as an AI Hazard, as the autonomous drones could plausibly lead to incidents in the future, but no incident has yet occurred.
Amazon suspends U.S. drone deliveries following crash at testing facility

2025-01-18
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous drones used for delivery. The crashes are directly linked to the malfunction or operational errors of these AI systems, causing harm in the form of disruption to drone delivery operations and potential safety hazards. Even though no physical injury or property damage is explicitly reported, the crashes and suspension of operations represent realized harm related to AI system malfunction. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Startups race to build bigger, better drones to fight bigger, hotter wildfires

2025-01-15
NBC News
Why's our monitor labelling this an incident or hazard?
The drones described are autonomous aerial vehicles likely employing AI for navigation, monitoring, and fire suppression tasks. Their use aims to reduce wildfire damage by faster response times, which is a positive application. There is no mention of any incident, malfunction, or harm caused by these AI systems. The article focuses on the development, deployment, and potential of these AI-enabled drones, as well as regulatory and logistical challenges, but does not report any realized harm or direct risk. Therefore, this event is best classified as Complementary Information, providing context and updates on AI system development and deployment in wildfire management.
Estonian Defense League raising funds from recycling to buy drones

2025-01-16
ERR
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses drones capable of carrying explosive charges and performing swarming attacks, which are described as fully self-guiding, indicating AI system involvement. The drones' intended use in combat to destroy enemy vehicles and infantry directly relates to potential injury and harm to people and property. While no actual harm is reported, the development and funding of such AI-enabled weapon systems present a credible risk of future harm, fitting the definition of an AI Hazard. There is no indication of an actual incident or realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the plausible future harm from AI-enabled military drones.
Your drone is on its way: Amazon set for green light to deliver parcels by air

2025-01-17
inews.co.uk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (delivery drones with autonomous capabilities) and their development and use are being considered and trialed. However, no actual harm or incident has occurred yet. The article highlights potential future use and regulatory changes that could enable such use, which could plausibly lead to incidents in the future but does not describe any current harm or malfunction. Therefore, this is best classified as an AI Hazard, reflecting the plausible future risk of harm from expanded drone delivery operations enabled by AI systems.
Fiber-Optic Drones Now Available on Major E-Commerce Sites, Transforming Warfare

2025-01-14
Gadget Review
Why's our monitor labelling this an incident or hazard?
The drones described are equipped with advanced autonomous capabilities likely involving AI systems, given their military-grade and jam-resistant features. The article does not describe any realized harm but emphasizes the potential for misuse and the need for new regulations to prevent such misuse. This aligns with the definition of an AI Hazard, where the development and commercial availability of AI-enabled systems could plausibly lead to significant harms in the future. There is no indication of an actual incident or harm yet, so it is not an AI Incident. It is not merely complementary information or unrelated, as the focus is on the potential risks posed by these AI systems.
BCSO adds high-tech drones to arsenal

2025-01-17
KOAT 7
Why's our monitor labelling this an incident or hazard?
The event involves AI systems, specifically advanced drones with autonomous and AI-powered features such as real-time video analysis and heat detection. However, there is no indication that the use or deployment of these AI systems has directly or indirectly caused any harm or violation of rights. The article describes the program's expansion and intended positive impacts on law enforcement efficiency and safety. Since no harm has occurred and the article does not suggest plausible future harm, this event is best classified as Complementary Information, providing context on AI adoption and operational enhancements in law enforcement.
Shaping the future of defence: What 2025 holds for the global drone market

2025-01-16
Shephard Media
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems insofar as it discusses autonomous drones and AI-enabled military UAVs. However, it does not describe any actual harm, injury, rights violation, or disruption caused by these AI systems. The focus is on market trends, procurement, and strategic developments, which constitute a credible potential for future harm given the nature of autonomous weapon systems, but no specific incident or hazard event is described. Therefore, the event is best classified as an AI Hazard because it plausibly points to future risks related to the proliferation and use of autonomous drones in military operations, but no direct or indirect harm has yet occurred or been reported.
Ukraine produced over 30,000 bomber drones in 2024, minister says

2025-01-17
The Kyiv Independent
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems or autonomous unmanned aerial vehicles used in combat operations, which directly lead to harm through strikes and military actions. The production and deployment of over 30,000 bomber drones in an active war zone clearly involve AI systems whose use has directly led to harm (injury, death, destruction) in the conflict. Therefore, this event qualifies as an AI Incident due to the direct link between AI system use and harm in warfare.
Shield AI to train Ukrainians on jam-resistant drones

2025-01-16
DNyuz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-developed drones (V-BAT) used by Ukrainian forces for long-range strikes in a war zone. The drones' AI capabilities include resistance to GPS jamming and electronic warfare, enhancing their operational effectiveness. Their use in targeting enemy military assets directly contributes to harm in the conflict. This constitutes an AI Incident because the AI system's use has directly led to harm (injury or harm to persons/groups in the conflict). Although the article focuses on training and capability enhancement, the operational use of these AI drones in warfare is ongoing and causing harm, meeting the criteria for an AI Incident.
Drone War in Ukraine: Bombers, Kamikaze Strikes and Dogfights in the Sky

2025-01-15
19FortyFive
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-driven interceptors and autonomous drone fighters used in combat, which are AI systems by definition. Their deployment has directly led to harm in the form of casualties and disruption of military operations, fulfilling the criteria for an AI Incident. The harms include injury or harm to persons (casualties), disruption of critical military infrastructure and operations, and harm to communities affected by the conflict. The detailed description of drone dogfights, kamikaze strikes, and AI-enabled tactics confirms the AI system's pivotal role in causing these harms. Hence, this is an AI Incident rather than a hazard or complementary information.
Shield AI Opens In-Country Office for Operational Support to V-BAT System

2025-01-20
KyivPost
Why's our monitor labelling this an incident or hazard?
The V-BAT UAV is an AI system with autonomous capabilities used in military operations. While its deployment in combat and its role in targeting enemy assets imply potential for harm, the article does not describe any realized harm or malfunction caused by the AI system. Instead, it focuses on operational support, strategic use, and investment. Therefore, this event does not meet the criteria for an AI Incident (no direct or indirect harm reported) nor an AI Hazard (no plausible future harm beyond the known military use). It is best classified as Complementary Information, providing context on AI system deployment and support in a conflict environment without reporting new harm or hazard.
Shield AI Starts Training with Ukraine's Unmanned Systems Forces, Establishes Local Presence in Ukraine

2025-01-16
IT News Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Shield AI's autonomous drone system (V-BAT) equipped with AI pilot technology being used in Ukraine's military operations. The system performs strategic targeting missions in GPS- and communications-denied environments, which implies autonomous decision-making capabilities. The deployment and use of these AI systems in active warfare directly contribute to harm (injury, death, destruction) as part of military conflict. Hence, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through military operations.
Shield AI Opens Office in Kyiv to Train Ukrainians on North Texas-Developed V‑BAT Drones

2025-01-16
Dallas Innovates
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Shield AI's autonomy software integrated into V-BAT drones used by Ukrainian forces in active combat against Russian forces. The AI system is used for strategic targeting and operations in contested environments, which directly contributes to harm in the war. The involvement of AI in lethal autonomous or semi-autonomous military drones that have been deployed and used in combat meets the criteria for an AI Incident due to direct harm to persons and communities in the conflict. The event is not merely a potential hazard or complementary information but an ongoing incident involving AI systems causing harm.
Shield AI Launches V-BAT Drone Training in Ukraine

2025-01-17
odessa-journal.com
Why's our monitor labelling this an incident or hazard?
The V-BAT drone is an AI system as it performs autonomous or semi-autonomous operations including navigation and targeting under challenging conditions such as GPS denial and electronic warfare. The event details the training and deployment of this AI system in Ukraine's conflict, where it has been used to target enemy missile systems. This use directly leads to harm (injury or death in warfare) and disruption in a critical infrastructure context (military systems). Hence, this qualifies as an AI Incident due to the direct involvement of an AI system in causing harm through its military application.
U.S. V-BAT drone producer opens Kyiv office, deploys experts

2025-01-20
Bulgarian Military Industry Review
Why's our monitor labelling this an incident or hazard?
The MQ-35A V-BAT drone is an AI system with autonomous capabilities used in active military operations, including reconnaissance under GPS jamming conditions. Its deployment in Ukraine's defense efforts directly contributes to harm in the context of armed conflict, including potential injury or death and disruption of military operations. The article details actual use and operational deployment, not just potential risks, thus constituting an AI Incident rather than a hazard. The involvement of AI in autonomous navigation, mission planning, and intelligence gathering confirms AI system involvement. The harm is realized through the drone's role in military conflict, meeting the criteria for injury or harm to persons and disruption of critical infrastructure (military operations).