Ukraine Plans to Mass-Produce AI-Enabled Kamikaze Drones


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Ukraine's Minister of Digital Transformation, Mykhailo Fedorov, announced plans to scale up production of AI-enabled kamikaze drones capable of autonomous targeting and engagement, contingent on increased Western funding. The expansion of these autonomous lethal systems raises concerns about potential future harm and AI-related hazards in conflict zones.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-enabled kamikaze drones capable of autonomous real-time decision-making and target engagement, which qualifies as AI systems. The discussion centers on scaling up production of these drones, implying increased deployment and potential use in conflict. Although no direct harm is reported in the article, the nature of these AI systems—autonomous lethal drones—presents a credible risk of causing injury, violation of human rights, and harm to communities. Hence, the event is best classified as an AI Hazard due to the plausible future harm from the development and use of these AI-powered autonomous weapons.[AI generated]
AI principles
Safety, Accountability, Transparency & explainability, Respect of human rights, Democracy & human autonomy, Human wellbeing

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death), Physical (injury)

Severity
AI hazard

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard


Ukraine could produce 2 million drones per year if there is enough money – Fedorov

2024-03-20
ZN.UA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled kamikaze drones capable of autonomous real-time decision-making and target engagement, which qualify as AI systems. It centers on scaling up production of these drones, implying increased deployment and potential use in conflict. Although the article reports no direct harm, the nature of these systems—autonomous lethal drones—presents a credible risk of injury, human rights violations, and harm to communities. The event is therefore classified as an AI Hazard on account of the plausible future harm from the development and use of these AI-powered autonomous weapons.

Fedorov: the first prototypes of AI drones could appear on the battlefield by the end of the year | UNN

2024-04-02
unn.ua
Why's our monitor labelling this an incident or hazard?
The event describes the development and testing of AI-enabled drones designed for military use, specifically for target detection and coordinated attacks. Although no harm has yet occurred, the deployment of such AI systems on the battlefield could plausibly lead to significant harm, including injury or death, disruption, and other serious consequences. Therefore, this situation constitutes an AI Hazard due to the credible risk of future harm from the use of AI in autonomous weapon systems.

Fedorov revealed when drones with artificial intelligence will appear on the battlefield

2024-04-02
Gazeta.ua
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as drones with AI capabilities for autonomous targeting and swarm coordination. The use of such AI-enabled weaponry in warfare inherently carries a credible risk of causing injury, death, and disruption, which fits the definition of an AI Hazard. Since the article does not report any actual harm or incidents caused by these drones yet but focuses on their upcoming deployment and testing, it does not qualify as an AI Incident. Therefore, this is best classified as an AI Hazard due to the plausible future harm from the AI system's use in armed conflict.

Prototypes of AI drones could appear on the battlefield by the end of the year – Fedorov

2024-04-01
УКРІНФОРМ
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into drones for autonomous target acquisition and swarm behavior, which are currently in testing and expected to be deployed on the battlefield soon. Although no harm has yet been reported from these AI systems, their intended use in active conflict zones presents a credible risk of causing injury, property damage, and escalation of hostilities. This fits the definition of an AI Hazard, as the development and near-future deployment of these AI-enabled drones could plausibly lead to an AI Incident involving harm. There is no indication that harm has already occurred from these AI systems, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the potential for harm from AI systems in military drones.

"Знаходиться на стадії тестування": Федоров сказав, коли ЗСУ зможуть застосувати дрони з ШІ на полі бою

2024-04-01
WAR OBOZREVATEL
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into drones capable of target acquisition and swarm coordination, which are AI functionalities. The drones are currently in testing but expected to be used on the battlefield soon. Although no incident of harm is reported yet, the use of AI-enabled weaponized drones in active conflict zones plausibly could lead to injury, death, or other harms. This fits the definition of an AI Hazard, as the development and intended use of these AI systems could plausibly lead to an AI Incident involving harm. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the potential deployment and associated risks of AI systems in military drones.

Fedorov hopes that prototypes of drones with artificial intelligence will appear on the battlefield by the end of the year

2024-04-02
БізнесЦензор
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically AI-powered drones capable of autonomous target acquisition and swarm behavior. However, it only describes the development and testing phase, with no indication that these AI systems have yet been used in combat or caused any direct or indirect harm. The potential for harm exists given the military application, but since no harm has occurred or been reported, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the plausible future deployment and associated risks of AI drones in warfare.