AI-Enabled Autonomous Kamikaze Drones Demonstrated in Turkey

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Baykar showcased its new AI-powered kamikaze drones, K2 and Sivrisinek, in Keşan, Turkey. The demonstration highlighted autonomous swarm navigation, target detection, and attack capabilities. These AI-enabled weapon systems, set to debut at SAHA 2026, pose potential risks of harm if deployed in conflict scenarios.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems integrated into drones with autonomous navigation and attack capabilities. Although no harm has occurred during the demonstration, the use of AI in armed drones with automatic target detection and attack functions plausibly could lead to serious harms such as injury or violations of rights in future military operations. The event is about the development and use of AI systems with offensive military applications, which is a credible source of future AI-related harm. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.[AI generated]
AI principles
Safety
Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death)
Physical (injury)

Severity
AI hazard

Business function
Research and development

AI system task
Recognition/object detection
Reasoning with knowledge structures/planning


Articles about this incident or hazard

A war game with 18 UAVs of five different types

2026-04-25
Milliyet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into drones with autonomous navigation and attack capabilities. Although no harm has occurred during the demonstration, the use of AI in armed drones with automatic target detection and attack functions plausibly could lead to serious harms such as injury or violations of rights in future military operations. The event is about the development and use of AI systems with offensive military applications, which is a credible source of future AI-related harm. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Next-generation swarm power from Baykar: the K2 Kamikaze UAV and 'Sivrisinek' in the field - ensonhaber.com

2026-04-24
En Son Haber
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely AI-supported autonomous drones and munitions with swarm intelligence and autonomous attack capabilities. However, the article only reports on demonstrations and capabilities without any mention of harm, injury, rights violations, or operational failures causing damage. The presence of AI-enabled autonomous weapons with lethal potential suggests plausible future harm, but since no harm has occurred or is reported, this fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems with military applications.

Baykar's next-generation platforms, the K2 kamikaze UAV and the Sivrisinek loitering munition, take the field

2026-04-24
Haberler
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous kamikaze drones and loitering munitions with AI-enabled swarm autonomy, navigation, and attack functions. Although no harm has yet occurred, the nature of these AI systems as lethal autonomous weapons means they could plausibly lead to injury or death and other harms if deployed in conflict. The article focuses on demonstration and capabilities, not on any realized harm or incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Baykar's new platforms, the K2 Kamikaze UAV and the loitering munition Sivrisinek, take the field

2026-04-24
Haberler
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as AI-supported autonomous kamikaze drones with lethal capabilities. Although no harm has yet occurred, the nature of these AI systems—autonomous lethal weapons—means they could plausibly lead to significant harm, including injury or death and violations of human rights. The article focuses on demonstration and capabilities, not on actual incidents or harm, so it does not meet the criteria for an AI Incident. It is not merely complementary information because the main subject is the demonstration of AI-enabled lethal autonomous weapons, which inherently carry plausible future harm. Hence, the classification as AI Hazard is appropriate.

Baykar's favorites take the field

2026-04-25
Sabah
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into autonomous kamikaze drones and loitering munitions with capabilities such as AI-based visual navigation, autonomous target detection, and swarm communication. These systems are designed for lethal military use, which inherently involves potential harm to persons and communities. No actual harm or incident is reported, but the deployment and demonstration of such AI-enabled autonomous weapons platforms represent a credible risk of future harm. Hence, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Baykar's new munition takes the stage at SAHA 2026

2026-04-24
Sabah
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as having autonomous swarm behavior, AI-based navigation, and automatic attack capabilities, which are military AI systems with high potential for misuse and harm. Although no harm has yet occurred, the nature of these AI systems and their intended use as kamikaze drones and loitering munitions clearly imply a credible risk of future harm, including injury and violations of rights. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because the AI systems and their capabilities are central to the event.

Kamikaze K2 and Sivrisinek are coming | Local News

2026-04-25
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as having autonomous navigation, target detection, and attack functions, which are typical AI capabilities. The article reports on their initial flights and demonstrations but does not mention any actual harm or incidents caused by these systems yet. However, the nature of these AI-enabled kamikaze drones inherently carries a plausible risk of causing injury or harm if used in conflict, qualifying this as an AI Hazard rather than an AI Incident. There is no indication of mitigation or governance response focus, so it is not Complementary Information.

Baykar's new platforms, the K2 Kamikaze UAV and the loitering munition Sivrisinek, take the field | Defense Industry News

2026-04-24
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The described platforms are AI systems because they use AI for autonomous swarm behavior, navigation, target detection, and attack. Their deployment and demonstration represent the development and use of AI-enabled autonomous weapons. Such systems have a high potential for causing harm, including injury or death, and disruption of critical infrastructure or security. Even though no harm is reported as having occurred yet, the nature of these AI-enabled autonomous weapons and their demonstrated capabilities plausibly lead to significant harm if used in conflict or misused. Therefore, this event qualifies as an AI Hazard due to the credible risk posed by these AI-enabled lethal autonomous systems.

Next-generation swarm autonomy from Baykar: the K2 Kamikaze UAV and 'Sivrisinek' spotted in the field | WATCH VIDEO

2026-04-24
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous UAVs with swarm capabilities) used in a military context with lethal potential. While the article reports a demonstration without direct harm, the autonomous swarm weapon systems' development and use inherently carry credible risks of causing injury or death, qualifying this as an AI Hazard. There is no indication of realized harm or incident in this report, so it is not an AI Incident. The focus is on the demonstration of autonomous AI capabilities with potential for harm, not on responses or governance, so it is not Complementary Information.

Baykar's next-generation platforms, the K2 Kamikaze UAV and the Sivrisinek loitering munition, take the field

2026-04-24
CNN Türk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as enabling autonomous navigation, target detection, and attack in kamikaze drones, fulfilling the definition of AI systems. The article focuses on the demonstration of these AI capabilities without reporting any realized harm or incidents. However, the autonomous lethal nature of these systems and their operational deployment potential imply a credible risk of injury, violation of rights, and harm to communities in the future. The mere development and demonstration of such AI-enabled autonomous weapons constitute an AI Hazard under the framework, as they could plausibly lead to AI Incidents involving significant harm. There is no indication of remediation, governance response, or societal reaction that would classify this as Complementary Information. Therefore, the classification is AI Hazard.

Good news from Baykar! The K2 Kamikaze UAV and the loitering munition Sivrisinek are now in the field

2026-04-24
İnternethaber
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as having autonomous swarm behavior, target detection, and attack capabilities, which are AI-powered lethal autonomous weapons. Although no harm has yet occurred, the development and demonstration of such systems plausibly could lead to AI incidents involving injury, death, or other harms. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the event.

The next-generation kamikaze UAV and Sivrisinek make their debut

2026-04-24
İnternethaber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into autonomous kamikaze drones and loitering munitions with lethal capabilities. While no actual harm or incident is reported, the autonomous nature and lethal purpose of these AI systems imply a credible risk of future harm, including injury or death, disruption, and human rights violations. The event is a demonstration and announcement of these AI-enabled weapons, which fits the definition of an AI Hazard as it plausibly could lead to AI Incidents in the future. There is no indication of realized harm or incident at this stage, so it is not an AI Incident. It is not merely complementary information or unrelated, as the focus is on the AI system's capabilities and potential impact.

Baykar announces its new weapon to the world: SİVRİSİNEK

2026-04-24
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The described drones employ AI systems for autonomous navigation and target engagement, which are AI systems by definition. The event involves the use and demonstration of AI-enabled autonomous weapons capable of lethal attacks. Such systems have a high potential for causing harm, including injury or death, and thus represent a significant AI Hazard due to the plausible risk of harm from their deployment and use. Since the article describes a demonstration without reporting any actual harm or incident, it qualifies as an AI Hazard rather than an AI Incident.

The "Turan" tactic in the sky! Baykar's K2 and Sivrisinek take the field

2026-04-24
Akşam
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as AI-supported autonomous drones and munitions with lethal capabilities. Although no harm has yet occurred, the nature of these AI systems—autonomous kamikaze drones and loitering munitions with AI-based target detection and attack—means they could plausibly lead to serious harms such as injury, death, or violations of human rights in future military operations. The article focuses on demonstration and capability showcasing, not on actual incidents or harm, so it does not qualify as an AI Incident. It is not merely complementary information because the main focus is on the demonstration of AI-enabled lethal autonomous systems, which inherently carry plausible future harm. Therefore, the correct classification is AI Hazard.

Swarm operation in the sky from Baykar: direct hits from K2 and Sivrisinek! | Takvim TV

2026-04-24
takvim.com.tr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as having autonomous swarm operation, target detection, and attack capabilities, which are military AI systems. Although no harm has yet occurred, the nature of these systems and their intended use in combat imply a credible risk of causing injury, violations of rights, or other harms in the future. The article focuses on demonstration and development, not on an incident causing harm, so it does not qualify as an AI Incident. It is not merely complementary information because the main focus is on the demonstration of AI-enabled lethal autonomous systems, which inherently carry plausible future harm. Hence, the classification as AI Hazard is appropriate.

Baykar's new platforms, the K2 Kamikaze UAV and the loitering munition Sivrisinek, take the field

2026-04-24
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as having autonomous swarm and attack capabilities, which are military AI systems with lethal potential. Although no harm has yet occurred or been reported, the development and demonstration of such autonomous weapon systems plausibly could lead to AI incidents involving injury, death, or other harms in the future. The article focuses on the demonstration and export success of these systems, not on any actual incident or harm. Hence, it fits the definition of an AI Hazard, as it plausibly could lead to AI incidents due to the nature of the AI systems and their intended use in combat.

The 'Turan' tactic in the sky! A lethal swarm with a 1,000 km range is coming

2026-04-24
A Haber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-supported swarm autonomy and autonomous target detection and attack capabilities in kamikaze drones, which are AI systems. While no harm has yet occurred, the deployment of such AI-enabled lethal autonomous weapons systems poses a credible risk of causing injury or harm to people and communities in the future. Therefore, this event qualifies as an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the demonstration of the AI system's capabilities, not on responses or updates to prior incidents. It is not Unrelated because the AI system's development and use are central to the event and its potential harms.

The 'Turan' tactic in the sky: the thousand-kilometer-range Sivrisinek takes the stage

2026-04-24
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as having autonomous swarm behavior, AI-based navigation, and automatic attack capabilities integrated into kamikaze drones and loitering munitions. While the article reports a demonstration without actual combat use or realized harm, the nature of these AI systems—autonomous lethal weapons—means they could plausibly lead to injury, death, or broader harm in future military operations. The article does not describe any incident of harm yet, so it is not an AI Incident. Instead, it highlights the development and demonstration of AI-enabled military technology with high potential for misuse and harm, fitting the definition of an AI Hazard.

The 'Turan' tactic in the sky: Baykar unveils its new kamikaze UAV K2 and Sivrisinek

2026-04-24
Türkiye
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as having autonomous swarm behavior, AI-based navigation, target detection, and attack functions in kamikaze drones and loitering munitions. These systems are designed for lethal military use, which inherently carries a credible risk of causing injury, death, or other harms if deployed in conflict or misused. The article reports on demonstrations and capabilities but does not describe any actual harm or incident resulting from their use. Hence, it does not meet the criteria for an AI Incident. However, the development and demonstration of such autonomous lethal AI systems clearly pose a plausible risk of future harm, fitting the definition of an AI Hazard. The article is not merely general AI news or complementary information about responses or governance, so it is not Complementary Information or Unrelated.

The K2 kamikaze UAV and the Sivrisinek loitering munition take the field

2026-04-24
Elbistanın Sesi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into autonomous kamikaze drones and loitering munitions capable of automatic target detection and attack. Although the article reports a demonstration without any actual harm occurring, the deployment of AI-enabled autonomous weapons systems is widely recognized as a significant AI hazard because such systems could plausibly lead to injury, loss of life, or violations of human rights in future use. The event does not describe an incident where harm has already occurred, so it is not an AI Incident. It is not merely complementary information because the focus is on the demonstration and capabilities of potentially harmful AI systems rather than responses or governance. Hence, the classification as an AI Hazard is appropriate.

K2 and Sivrisinek are coming: a first in the Turkish defense industry!

2026-04-24
bigpara.hurriyet.com.tr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as having autonomous navigation, target detection, and attack capabilities in a military context. Although no harm has yet occurred, the nature of these AI-enabled kamikaze drones and their demonstrated autonomous lethal functions imply a credible risk of causing injury, harm to communities, or property damage in future use. This fits the definition of an AI Hazard, as the development and demonstration of such systems could plausibly lead to AI Incidents involving harm. There is no indication of realized harm or incident in the article, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the demonstration of AI-enabled autonomous weapon systems with potential for harm.

Baykar's next-generation UAV 'Sivrisinek' makes its debut

2026-04-24
Aydinses
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous kamikaze UAVs with swarm capabilities) whose development and deployment in military contexts inherently pose plausible risks of harm, including injury or harm to persons and disruption of critical infrastructure. Although no harm has yet occurred, the announcement and demonstration of such autonomous weapon systems constitute an AI Hazard because they could plausibly lead to AI Incidents involving physical harm or other serious consequences. There is no indication of realized harm or incident in the article, so it is not an AI Incident. The article is not merely complementary information since it highlights the unveiling and autonomous capabilities of potentially hazardous AI systems, nor is it unrelated.

A first in the Turkish defense industry! Sivrisinek and K2 take the stage! High-speed dives, autonomous decision-making

2026-04-24
TV100
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as having autonomous decision-making for target detection and attack in kamikaze drones and loitering munitions. While no specific harm or incident is reported as having occurred during the demonstration, the nature of these AI-enabled autonomous weapons inherently carries a credible risk of causing injury or death if used in conflict. The article highlights their operational capabilities and export agreements but does not report any realized harm or incident. Therefore, this event is best classified as an AI Hazard, reflecting the plausible future harm from the development, use, and proliferation of AI-enabled lethal autonomous weapons.

The Turan tactic in the sky: revolutionary swarms are coming

2026-04-24
Yeniçağ Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as fully autonomous kamikaze drones with AI-enabled swarm coordination and navigation. Although no harm has yet occurred, the nature of these AI systems—autonomous lethal weapons capable of coordinated attacks—poses a credible risk of causing injury, disruption, or other harms in future use. The article focuses on the demonstration and capabilities, not on any realized harm, so it is not an AI Incident. It is not merely complementary information because the main subject is the demonstration of AI-enabled autonomous weapons with clear potential for harm. Hence, it fits the definition of an AI Hazard.

The 'Turan' tactic in the sky! A revolutionary swarm from Baykar - Hür Haber

2026-04-25
hurhaber.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: autonomous kamikaze drones and loitering munitions with AI-enabled swarm autonomy, visual navigation, and automatic target detection and attack. The article does not describe any realized harm or incident but demonstrates capabilities that could plausibly lead to harm if deployed in conflict, such as injury or death and violations of human rights. The autonomous lethal nature of these AI systems and their operational use in military contexts inherently pose credible risks. Since no actual harm or incident is reported, and the focus is on demonstration and export achievements, the classification is AI Hazard due to the plausible future harm from these AI-enabled autonomous weapons.

Türkiye's Baykar unveils next-generation homegrown AI-powered drones

2026-04-24
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems due to their autonomous navigation, target detection, and coordinated swarm behavior powered by AI. The event involves the development and use of these AI systems. Although no harm has yet occurred, the autonomous combat drones' capabilities plausibly pose significant risks of harm (injury, death, violations of rights) if deployed in conflict or misused. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident in the future. It is not an AI Incident since no harm has been reported, nor is it merely complementary information or unrelated news.

Turkish powerhouse Baykar unveils next-generation AI-powered drones

2026-04-24
Daily Sabah
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as enabling autonomous navigation, target detection, and strike capabilities in military drones. While no specific harm or incident is reported as having occurred during the demonstration, the nature of these AI systems—autonomous lethal drones capable of coordinated swarm attacks—poses a credible risk of causing injury, disruption, or other harms if deployed in conflict. The article focuses on the demonstration and export of these systems, indicating potential future use and associated risks. Since no actual harm has yet occurred, but plausible future harm is evident, the classification is AI Hazard.

Türkiye's Baykar unveils next-generation homegrown drones - Türkiye News

2026-04-24
Hurriyet Daily News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems enabling autonomous navigation, target detection, and coordinated strikes by combat drones, which are lethal autonomous weapons. While no incident of harm is reported, the nature of these AI systems and their intended use in combat imply a credible risk of causing injury, harm to communities, or violations of rights in the future. The event is about the development and demonstration of these AI-powered autonomous weapons, which fits the definition of an AI Hazard as it could plausibly lead to an AI Incident. There is no indication of actual harm yet, so it is not an AI Incident. It is not merely complementary information or unrelated, as the AI system's development and use with potential for harm is central to the report.

Türkiye's Baykar rolls out next-generation homegrown AI-powered drones

2026-04-24
TRT World
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems enabling autonomous navigation, target detection, and coordinated strikes by military drones. The use of AI in autonomous weapons systems with lethal capabilities inherently carries a plausible risk of causing injury, death, or other serious harms. Since the event is a demonstration of such AI-powered autonomous combat drones prior to their public debut, it represents a credible potential for future harm rather than a realized incident. Therefore, this qualifies as an AI Hazard under the framework, as the AI system's development and intended use could plausibly lead to an AI Incident involving significant harm.

Turkey's Baykar tests K2 swarm kamikaze drones for first time: What to know

2026-04-24
Al-Monitor
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-supported swarm autonomy, automatic target detection, and strike capabilities in kamikaze drones, which are AI systems by definition. The event involves the development and use of these AI systems in military applications with lethal potential. While no actual harm is reported yet, the plausible future harm from autonomous lethal drones is significant and credible. Hence, the event is best classified as an AI Hazard rather than an AI Incident, as harm has not yet materialized but could plausibly occur.

Baykar unveils AI-powered K2 and Sivrisinek swarm drones

2026-04-24
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems with autonomous navigation, target detection, and strike capabilities, which clearly fit the definition of AI systems. Their use in military operations and autonomous lethal strikes inherently carry risks of injury or harm to persons and potential violations of human rights. Since the article reports on their unveiling and demonstration without describing any actual harm or incident, but given the credible risk of future harm from their deployment and proliferation, this event qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the demonstration of new AI-enabled lethal capabilities with plausible future harm, not on responses or ecosystem context. It is not unrelated because the AI system and its potential harms are central to the report.

Turkey's Baykar tests K2 swarm kamikaze drones for the first time: what you need to know - ExBulletin

2026-04-24
ExBulletin
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-supported swarm autonomy, automatic target detection, and strike capabilities in kamikaze drones, which are AI systems by definition. The event involves the development and testing of these systems, not their malfunction or misuse at this stage. No direct harm is reported, but the potential for lethal harm and disruption due to autonomous weapon systems is clear and credible. Therefore, this event fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to an AI Incident involving injury or harm to persons and disruption of security.