AI-Enabled Kamikaze Drones Showcased by Turkish Firm in Mali



Turkish defense company SkyDagger unveiled its new AI-enabled FPV kamikaze drones at the BAMEX'25 Expo in Mali. These autonomous drones, equipped with smart munitions, are designed for military use and export, raising credible risks of future harm due to their AI-driven targeting and operational capabilities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems integrated into the drones' smart munitions and autonomous operational capabilities. The drones are armed with lethal payloads and designed for combat, including urban settings, which inherently carry risks of injury or death. Although no actual harm or incident is described, the production, export, and deployment of these AI-enabled lethal autonomous weapons constitute a credible risk of future harm. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system's development and use are central to the event and its potential consequences.[AI generated]
AI principles
Safety
Respect of human rights
Democracy & human autonomy
Accountability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death)
Human or fundamental rights

Severity
AI hazard

Business function
Research and development

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard


Skydagger on stage for the first time! 26,000 units sold to 14 countries

2025-11-14
bigpara.hurriyet.com.tr

Turkish Defense Industry Draws Attention with Its FPV Kamikaze Drones

2025-11-14
Haberler
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: autonomous drones with smart munitions and AI components. While no actual harm or incident is reported, the nature of the product (armed autonomous drones capable of causing injury or death) means that their development and export could plausibly lead to AI Incidents involving harm. The article focuses on the production, capabilities, and export of these AI-enabled weapons, which fits the definition of an AI Hazard, since they could plausibly lead to injury or harm. There is no indication of a realized harm or incident yet, so it is not an AI Incident. Nor is it merely Complementary Information, because the main focus is on the potential risks and capabilities of these AI systems, not on responses or updates to past incidents. Therefore, the correct classification is AI Hazard.

After the national SİHAs, it is the FPV kamikaze drones' turn to be a "game changer"

2025-11-14
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (FPV kamikaze drones with AI-enabled autonomous and smart features) and their development and deployment. However, the article does not describe any direct or indirect harm caused by these AI systems, nor does it report any incident or accident. The potential for harm exists given the military nature of the drones, but the article does not present a specific event where harm occurred or was narrowly avoided. Therefore, this is best classified as an AI Hazard, reflecting the plausible future risk posed by the proliferation of AI-enabled kamikaze drones, but not an AI Incident since no harm has materialized yet.

World-renowned Turkish drone manufacturer unveils its new platform for the first time

2025-11-14
TRT Haber
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems, as these kamikaze drones rely on AI for navigation, targeting, and operational autonomy. The article details their use and deployment in military contexts, which directly leads to harm (injury or death) in conflict zones, satisfying harm criterion (a), injury or harm to persons. The description of the drones' advanced safety and operational features confirms the AI system's involvement in their use. Since these systems are actively used and have caused or will cause harm, this is classified as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or governance responses but reports on actual deployment and capabilities leading to harm.

Mini warrior caught on camera for the first time! A sibling for the SİHAs is on the way

2025-11-14
Akşam
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (FPV kamikaze drones with autonomous and AI components) whose use in military operations could plausibly lead to injury, death, disruption, or other significant harms. However, the article does not describe any realized harm or incident resulting from their use, only their development, deployment, and capabilities. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information, reflecting the credible risk posed by these AI-enabled weapon systems.

It will be a game changer on the battlefield

2025-11-14
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into the kamikaze drones, including smart munitions and autonomous production. The drones are designed for military combat use, which inherently carries risks of injury, death, and destruction. However, no actual harm or incident is described in the article; it focuses on product development, export, and capabilities. Given the potential for these AI-enabled autonomous weapons to cause significant harm in the future, this event fits the definition of an AI Hazard: a credible risk of harm plausibly arising from the use of AI systems in these drones. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated.