Turkey Unveils AI-Enabled K2 Kamikaze Drone with Autonomous Swarm Capabilities

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Turkish defense company Baykar has unveiled the K2 Kamikaze UAV, an autonomous drone equipped with advanced AI and swarm algorithms for coordinated military operations. Successfully tested in formation flights, the K2 can carry heavy payloads and operate over long distances, raising concerns about future risks from AI-enabled lethal autonomous weapons.[AI generated]

Why's our monitor labelling this an incident or hazard?

The K2 Kamikaze UAV is explicitly described as AI-supported with autonomous swarm and targeting capabilities, indicating the presence of an AI system. The event focuses on its development and potential military application, which inherently carries risks of harm to people and communities if used in conflict. Although no incident or harm is reported yet, the nature of the system and its intended use imply a credible risk of future harm, qualifying this as an AI Hazard rather than an Incident or Complementary Information.[AI generated]
AI principles
Safety; Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury)

Severity
AI hazard

Business function
Research and development

AI system task
Reasoning with knowledge structures/planning


Articles about this incident or hazard

K2 Kamikaze UAV News - Breaking K2 Kamikaze UAV News, Latest Developments

2026-03-15
Milliyet
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is explicitly described as AI-supported with autonomous swarm and targeting capabilities, indicating the presence of an AI system. The event focuses on its development and potential military application, which inherently carries risks of harm to people and communities if used in conflict. Although no incident or harm is reported yet, the nature of the system and its intended use imply a credible risk of future harm, qualifying this as an AI Hazard rather than an Incident or Complementary Information.

Head of the Presidency of Defense Industries Haluk Görgün shares a "K2 Kamikaze UAV" post

2026-03-14
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system: the K2 Kamikaze UAV with AI-supported autonomous swarm and image-based navigation and targeting. The system is designed for military use with lethal capabilities. Although no incident or harm is reported, the autonomous lethal drone's development and deployment inherently pose credible risks of injury, disruption, or violations of rights. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident in the future. The article is not merely a product launch without risk; the nature of the system and its capabilities imply potential for significant harm. Hence, it is classified as an AI Hazard rather than an AI Incident or Complementary Information.

The UAV from Turkey set to wipe Iran's Shahed-136 from the market: here is the BAYKAR K2's unique feature

2026-03-14
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the K2 Kamikaze UAV) with autonomous capabilities, confirming AI system involvement. However, it focuses on the system's features and potential military utility without describing any realized harm or incident. There is no indication of harm to persons, infrastructure, rights, property, communities, or environment, nor any warning of plausible future harm. Thus, it does not meet the criteria for AI Incident or AI Hazard. Instead, it provides informative context about AI development in defense, fitting the definition of Complementary Information.

SSB President Görgün: The K2 Kamikaze UAV is a significant force multiplier in the field

2026-03-14
TRT haber
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI-enabled autonomous weapon system. While the article does not report any specific incident of harm or malfunction, the deployment and operational use of such AI-powered autonomous weapons inherently carry significant risks of harm to persons, property, and communities. The article emphasizes the system's capabilities and operational advantages but does not mention any realized harm or incident. Therefore, this event represents a plausible risk scenario where the AI system's use could lead to harm, qualifying it as an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the system's capabilities and potential battlefield impact, not on responses or updates to past incidents.

K2 Kamikaze UAV successfully completes its formation flight tests

2026-03-14
Hürriyet
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI system as it uses AI for autonomous navigation, formation flying, and targeting. The article focuses on its successful testing and capabilities but does not mention any realized harm or incidents. However, given its military application as an autonomous kamikaze drone capable of delivering lethal payloads, the AI system's development and deployment plausibly could lead to injury, harm to people, or damage to property and communities. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential for harm are central to the article.

K2 Kamikaze makes the grade

2026-03-15
Hürriyet
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI system as it uses AI for autonomous swarm behavior and decision-making. The event concerns the development and testing of this AI-enabled military drone system. Although the system has high destructive potential and is a military weapon, the article only reports successful tests without any harm or misuse occurring. Hence, it does not qualify as an AI Incident. However, given the nature of the system as an autonomous weapon with kamikaze capabilities, its development and deployment plausibly pose future risks of harm, qualifying it as an AI Hazard. There is no indication that the article focuses on responses, governance, or complementary information, nor is it unrelated to AI.

A new wolf trap in the sky

2026-03-15
Milliyet
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is explicitly described as an AI system with autonomous swarm capabilities used for military strike operations. Although no actual harm or incident is reported, the system's intended use as an autonomous kamikaze weapon inherently carries a credible risk of causing injury, death, or broader harm in conflict scenarios. The article focuses on the development and testing phase, indicating plausible future harm rather than realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Baykar's K2, the largest kamikaze UAV in its class, passes its tests - ensonhaber.com

2026-03-14
En Son Haber
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI and autonomy for military kamikaze drone operations. Although no harm has yet occurred, the nature of the system—a lethal autonomous weapon capable of precision strikes—poses a credible risk of causing injury, violation of rights, or harm to communities if deployed in conflict. The article focuses on successful tests and capabilities, not on any incident or harm realized. Hence, it fits the definition of an AI Hazard, as the development and potential use of this AI system could plausibly lead to significant harm in the future.

Turkey's new "Kamikaze UAV K2" move as the war rages on! Bayraktar announced it, calling it a "surprise"

2026-03-14
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and autonomy in the K2 Kamikaze UAV, confirming the presence of an AI system. The event concerns the development and testing of a lethal autonomous weapon system with swarm capabilities, which could plausibly lead to injury, death, and destruction in conflict zones. No actual harm or incident is reported yet, so it does not qualify as an AI Incident. The focus is on the introduction and capabilities of the AI-enabled weapon platform, which poses a credible future risk of harm. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Successful flight tests for the Baykar K2 Kamikaze UAV - Technology News

2026-03-14
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an autonomous kamikaze UAV using AI for navigation, formation flying, and targeting. Although no harm has yet occurred, the system's intended use as a lethal autonomous weapon with precision strike capabilities presents a credible risk of causing injury or death in the future. The article focuses on successful tests and capabilities rather than any incident or harm, so it is not an AI Incident. The plausible future harm from the deployment and use of such AI-enabled autonomous weapons classifies this as an AI Hazard under the OECD framework.

BAYKAR K2 Kamikaze UAV successfully completes its formation flight tests

2026-03-14
Sabah
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-supported swarm synergy controlling kamikaze UAVs, confirming the involvement of AI systems. There is no indication of any realized harm or malfunction causing injury, rights violations, or other harms. However, the nature of kamikaze drones with autonomous AI capabilities inherently carries a credible risk of future harm, including injury or disruption, if deployed in conflict or misused. Thus, the event is best classified as an AI Hazard, reflecting the plausible future harm from the AI system's development and use potential.

Baykar K2 Kamikaze UAV conducts a test flight

2026-03-15
Sabah
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI system due to its autonomous and AI capabilities. Although no harm has occurred during the test flights, the nature of the system as an autonomous kamikaze drone implies a credible risk of causing harm in the future. The article focuses on the development and demonstration of this AI-enabled weapon system, which could plausibly lead to AI incidents involving injury, human rights violations, or harm to communities. Hence, this event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

A test from Baykar that stunned the world: K2 Kamikaze UAV in swarm flight | WATCH VIDEO

2026-03-14
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI and autonomy algorithms controlling the UAV swarm, confirming AI system involvement. The event is about testing and demonstration, with no realized harm reported, so it is not an AI Incident. However, given the nature of kamikaze drones with autonomous swarm capabilities, there is a plausible risk of future harm, such as injury or disruption, if these systems are deployed in conflict or misused. Hence, the event fits the definition of an AI Hazard rather than an Incident or Complementary Information.

K2 Kamikaze UAV takes the field | Video

2026-03-14
Sabah
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is explicitly described as an AI-enabled autonomous weapon system with capabilities for autonomous navigation, target identification, and precision strikes. Although no actual harm or incident is reported, the system's design and deployment inherently carry a credible risk of causing injury, death, or destruction if used in military operations. The article focuses on the system's capabilities and export success but does not mention any misuse or harm that has occurred. According to the OECD framework, the mere development and deployment of AI-powered autonomous weapons with lethal capabilities constitute an AI Hazard because they could plausibly lead to AI Incidents involving harm to people and property. Hence, the classification is AI Hazard.

The AI-powered "Turan" tactic: a formation of five K2s puts on a show of strength in the sky!

2026-03-14
Sabah
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into the K2 kamikaze drones, including autonomy and swarm intelligence. The drones are designed for military strike missions with high destructive capability, which inherently carries a credible risk of causing harm (injury, property damage, disruption) if used in conflict. Although the article does not report any actual incident or harm caused by these drones, the development and testing of such AI-enabled lethal autonomous weapons platforms constitute a plausible future risk of harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its military application are central to the article.

K2 Kamikaze UAV in the field | NTV Haber

2026-03-14
NTV
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI system due to its autonomous capabilities and AI algorithms. However, the article only reports successful tests without any harm or malfunction. While the development of autonomous weapon systems carries potential risks, this article does not describe any realized or imminent harm, only the demonstration of capabilities. Therefore, it qualifies as an AI Hazard because the development and deployment of such AI-enabled autonomous weapons could plausibly lead to harm in the future, but no incident has occurred yet.

Bayraktar unveils the 2,000 km-range K2 Kamikaze UAV

2026-03-14
Memurlar.Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for autonomous flight, target detection, and engagement in a kamikaze drone designed for military use. Although no harm has yet occurred, the nature of the system and its intended use clearly imply a credible risk of significant harm in the future. The development and announcement of such an AI-enabled autonomous weapon system fit the definition of an AI Hazard, as it could plausibly lead to injury, violations of rights, or harm to property and communities. There is no indication of an actual incident or realized harm, so it is not an AI Incident. It is also not merely complementary information or unrelated, as the AI system and its potential risks are central to the report.

Baykar's kamikaze that doesn't die: K2 Kamikaze UAV in the sky | Defense Industry News

2026-03-15
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the K2 Kamikaze UAV) with advanced AI and autonomous capabilities. The article focuses on its development and successful testing, with no mention of any harm or incident caused by the system so far. Since kamikaze drones are weaponized autonomous systems, their existence and potential use pose credible risks of harm in the future. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the AI-enabled autonomous weapon system, but not an AI Incident as no harm has yet occurred.

K2 Kamikaze Unmanned Aerial Vehicle successfully completes its formation flight tests | Defense Industry News

2026-03-14
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for autonomous formation flight of kamikaze drones, which are weaponized UAVs. While no harm has occurred during the tests, the AI system's development and use in lethal autonomous drones could plausibly lead to harms such as injury, disruption, or violations of rights if deployed in conflict or misused. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI is clearly involved, and it is not Complementary Information as the main focus is on the test event and AI capabilities with potential harm, not on responses or governance.

Baykar announces K2, the largest kamikaze UAV in its class

2026-03-14
İnternethaber
Why's our monitor labelling this an incident or hazard?
The K2 UAV is explicitly described as an AI-enabled autonomous kamikaze drone with lethal capabilities. Although no harm or incident is reported, the nature of the system as an autonomous weapon with AI-driven targeting and navigation means it could plausibly lead to injury, death, or other harms if deployed in conflict. The article focuses on the development and testing phase, highlighting the AI system's capabilities and strategic military use, which aligns with the definition of an AI Hazard—an event where AI system development or use could plausibly lead to harm. There is no indication of realized harm or incident, so it is not an AI Incident. It is also not merely complementary information or unrelated, as the AI system and its potential risks are central to the report.

Selçuk Bayraktar announced it, calling it a "surprise"! A much-discussed "kamikaze UAV" move from Turkey as the war rages on

2026-03-14
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI-enabled system due to its autonomous navigation and operational capabilities. The article does not report any actual harm caused by the Turkish system yet, but highlights the ongoing use of similar AI-enabled kamikaze drones by Iran that have caused significant harm. The Turkish development and announcement of such a system during an active conflict plausibly could lead to harm including injury, death, and damage to property and communities. Therefore, this event represents a credible future risk of harm from the use of AI-enabled autonomous weapons, qualifying it as an AI Hazard rather than an Incident since no harm from the Turkish system is reported yet.

Another source of pride in the "Gök Vatan" (Sky Homeland)! "It will change a great deal in the world"

2026-03-15
Akşam
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI for autonomous swarm behavior, navigation, and targeting. There is no indication that harm has yet occurred or that the system malfunctioned. The article focuses on successful tests and the system's capabilities, not on any incident causing injury, rights violations, or other harms. However, the development and deployment of an AI-enabled kamikaze drone with lethal capabilities inherently carry plausible risks of future harm, including injury or disruption in military contexts. According to the definitions, the mere development and testing of such an AI system with high potential for misuse or harm qualifies as an AI Hazard. Hence, the classification is AI Hazard.

Baykar's K2 takes the stage! Full marks in testing for the largest in its class

2026-03-14
Akşam
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI system with autonomous swarm and navigation capabilities, clearly involving AI in its development and use. The article does not describe any realized harm or incidents resulting from its deployment but emphasizes its military application and potential for lethal use. The presence of AI-enabled autonomous lethal drones inherently carries a credible risk of harm, including injury, property damage, or violations of human rights, even if no harm has yet occurred. Thus, the event is best classified as an AI Hazard, reflecting the plausible future harm from the AI system's deployment in military contexts.

K2 Kamikaze UAV surprise: formation flight tests completed successfully

2026-03-14
takvim.com.tr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI for autonomous formation flying, navigation, and target engagement. Although no harm has yet occurred, the system's intended military use as a kamikaze drone capable of autonomous lethal action plausibly could lead to injury, harm to people, or damage to property and communities. The article focuses on successful tests and future development plans, indicating credible future risks rather than realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

The largest in its class: the 2,000-kilometer-range K2 Kamikaze UAV takes the stage

2026-03-14
TRT haber
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is explicitly described as an AI system with advanced autonomy and swarm intelligence. It is a military weapon system capable of autonomous lethal strikes. Although the article only reports successful tests and no actual harm, the nature of the system and its intended use clearly imply a plausible risk of causing injury, death, or destruction in the future. The development and testing of such AI-enabled autonomous weapons platforms are recognized as AI Hazards because they could plausibly lead to AI Incidents involving harm to people and property. Since no harm has yet occurred, this is not an AI Incident but an AI Hazard.

Baykar's K2 Kamikaze UAV takes the stage! The largest in its class

2026-03-14
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is explicitly described as an AI-enabled autonomous weapon system with swarm intelligence and precision strike capabilities. Although no actual harm or incident is reported, the system's nature as a kamikaze drone capable of autonomous lethal action inherently carries a credible risk of causing injury, violation of human rights, or harm to communities if deployed or misused. The article focuses on the development, testing, and export success of this AI system, without describing any realized harm or incident. Thus, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harm in the future.

K2 Kamikaze UAV on stage: successfully completes a formation flight with intelligent swarm autonomy - Dünya Gazetesi

2026-03-14
Dünya
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an autonomous kamikaze drone with swarm intelligence and advanced AI algorithms. Although no harm has yet occurred, the system's intended use as a lethal autonomous weapon capable of coordinated attacks implies a credible risk of injury, harm to property, and disruption of critical infrastructure. The article focuses on the development and testing phase, with no indication of actual incidents or misuse. Hence, it does not meet the criteria for an AI Incident. Instead, it fits the definition of an AI Hazard because the AI system's development and intended use could plausibly lead to significant harm in the future.

Baykar's K2 kamikaze UAV revealed! It passed its first tests: 2,000 km range, 13 hours airborne

2026-03-14
Mynet Finans
Why's our monitor labelling this an incident or hazard?
The K2 UAV is explicitly described as an AI system with autonomy and advanced algorithms for lethal kamikaze missions. Although no harm has yet occurred, the nature of the system and its intended use in military operations with high destructive power imply a credible risk of future harm (injury, death, destruction). The article focuses on the system's capabilities and successful tests, not on any realized harm or incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the report.

Baykar announces the largest UAV in its class! Its range exceeds 2,000 km and it can stay airborne for 13 hours

2026-03-14
Türkiye
Why's our monitor labelling this an incident or hazard?
The K2 drone is explicitly described as an AI system with autonomy algorithms used for military kamikaze missions, which inherently involve potential harm to people and property. Although no incident of harm is reported, the development and testing of such a system plausibly could lead to AI incidents involving injury or death. This fits the definition of an AI Hazard, as the event involves the use and development of an AI system that could plausibly lead to significant harm in the future. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's capabilities and potential military use with inherent risks.

Domestically produced kamikaze UAV model announced

2026-03-14
Merhaba Haber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and autonomy in the K2 Kamikaze UAV, confirming the presence of an AI system. The event concerns the development and testing (use phase) of this AI system. No actual harm or incident is reported; the article focuses on capabilities and export success. However, kamikaze UAVs with AI autonomy inherently carry credible risks of causing injury, disruption, or violations if deployed in conflict or misused. Thus, the event plausibly leads to AI-related harm in the future, fitting the definition of an AI Hazard. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated.

Yeni Alanya Gazetesi - Alanya News, Breaking Alanya News

2026-03-14
Yeni Alanya Gazetesi
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is described as having autonomous swarm intelligence, which qualifies as an AI system. Its capability to carry substantial explosives and operate autonomously in a swarm context implies potential for significant harm. Since the article only mentions successful testing and does not report any actual harm or incidents, the event represents a plausible future risk rather than a realized harm. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Baykar Unveils "K2", Its New Kamikaze UAV, the Largest in Its Class

2026-03-14
S A N A
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as having advanced autonomy and AI flight algorithms in a kamikaze drone designed for lethal military use. Although no incident of harm has occurred yet, the system's intended use as an autonomous weapon with swarm capabilities and long-range strike potential clearly presents a plausible risk of causing injury, death, or other significant harms. The development and deployment of such AI-enabled lethal autonomous weapons systems are recognized as AI Hazards because they could plausibly lead to AI Incidents involving physical harm or violations of human rights. Since no actual harm is reported, the classification is AI Hazard rather than AI Incident.

A Review of the Enemy-Frightening Baykar K2 Kamikaze UAV: What Are Its Features, Range, and Swarm Capability?

2026-03-14
Bolu Olay
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of advanced AI and autonomy in the K2 Kamikaze UAV, confirming the presence of an AI system. The UAV is designed for autonomous lethal operations, which inherently carry risks of harm to persons and communities. Although the article does not describe any actual harm or incidents caused by the AI system, the development and testing of such an AI-enabled kamikaze drone plausibly could lead to AI Incidents in the future. According to the definitions, the mere development and deployment of AI-powered autonomous weapons with lethal capabilities constitute an AI Hazard due to the credible risk of harm. Since no realized harm is reported, this is not an AI Incident. The article is not primarily about responses, governance, or updates to prior incidents, so it is not Complementary Information. It is also not unrelated as it clearly involves an AI system with potential for harm.

K2 Kamikaze UAV takes the field

2026-03-14
Mynet Haber
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI system with autonomous decision-making and targeting capabilities. Although no harm or incident is reported, the system's intended use as an autonomous weapon capable of striking targets without human intervention presents a credible risk of causing injury, death, or property damage. The article focuses on the successful testing and capabilities of the AI system but does not describe any realized harm or misuse. Hence, it fits the definition of an AI Hazard, as the development and deployment of such AI-enabled lethal autonomous weapons could plausibly lead to AI Incidents involving harm to people and communities.

K2 Kamikaze UAV Tests Concluded Successfully

2026-03-14
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using advanced AI and autonomy for coordinated swarm operations in kamikaze drones. Although no harm has yet occurred, the AI system's development and intended use in lethal autonomous weapons plausibly could lead to significant harm, including injury or death and other serious consequences. According to the definitions, the mere development and testing of AI-enabled autonomous weapons platforms with lethal capabilities constitute an AI Hazard because they could plausibly lead to AI Incidents. Since no actual harm or incident is reported, this is not an AI Incident but an AI Hazard.

A balance-shifting post from Selçuk Bayraktar! Calling it a "surprise", he announced the new UAV, the K2 Kamikaze, at sahur time!

2026-03-14
Haber 7
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as employing advanced AI and autonomy for military drone swarm operations with kamikaze attack capabilities. While no harm has yet occurred, the system's design and intended use in offensive military operations inherently carry a credible risk of causing injury, property damage, or broader harm if used in conflict. The article focuses on the unveiling and testing of this AI-enabled weapon platform, which plausibly could lead to AI Incidents in the future. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the report.

A K2 Kamikaze UAV Surprise from Baykar

2026-03-14
Trabzonspor Haberleri
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of advanced AI and autonomy algorithms in the K2 Kamikaze UAV, confirming the presence of an AI system. There is no indication that the system has caused any injury, damage, or rights violations yet, so it is not an AI Incident. However, the development and testing of an AI-enabled kamikaze drone capable of autonomous coordinated attacks and electronic warfare navigation clearly pose a plausible future risk of harm, meeting the criteria for an AI Hazard. The article does not focus on responses, mitigation, or broader ecosystem context, so it is not Complementary Information. It is directly related to an AI system with potential for harm, so it is not Unrelated.

800-kilogram weight, 2,000-kilometer range: K2, the largest in its class, is unveiled

2026-03-14
ekonomist.com.tr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the K2 UAV with AI and autonomy) but only describes its development, testing, and operational capabilities without any reported harm or misuse. There is no mention or implication of injury, rights violations, property or community harm, or disruption caused by the AI system. The article is primarily an announcement and overview of a new AI-enabled defense product and its export success, which fits the definition of Complementary Information as it provides context and updates on AI system development and deployment without describing an incident or hazard.

Turkey's giant kamikaze UAV K2 unveiled: It can carry a 200-kilogram bomb

2026-03-14
Medyafaresi
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI system as it uses AI for autonomous navigation, formation control, and target engagement. The article does not report any actual harm or incident caused by the system yet, but the system's lethal capabilities and autonomous operation mean it could plausibly lead to injury, death, or destruction if deployed in conflict or misused. The development and introduction of such AI-enabled autonomous weapons platforms are recognized as AI Hazards because they carry credible risks of causing harm in the future. Since no realized harm is described, this is not an AI Incident. It is also not merely complementary information or unrelated, as the focus is on the AI system's capabilities and potential impact.

Another step forward for the Turkish defense industry: The K2 Kamikaze UAV is ready for duty

2026-03-14
ekonomist.com.tr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of advanced AI and autonomy algorithms in a kamikaze drone system, which is a weaponized platform capable of autonomous operation. Although no harm or incident is reported, the nature of the system and its intended use as a kamikaze drone imply a credible risk of future harm, including injury or violation of human rights. According to the OECD framework, the mere development and testing of AI-enabled autonomous weapons with high potential for misuse constitute an AI Hazard. Since no actual harm has occurred yet, this event is not an AI Incident but an AI Hazard.

Turkey's new kamikaze UAV successfully passes its tests

2026-03-14
Aydınlık
Why's our monitor labelling this an incident or hazard?
The K2 kamikaze UAV is an AI system as it uses AI for autonomous navigation, formation flying, target identification, and precision strikes. The article focuses on its successful testing and capabilities, including lethal payload delivery. While no actual harm is reported, the system's intended use as a kamikaze drone capable of autonomous lethal attacks implies a credible risk of causing injury, death, or destruction. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harm. There is no indication of an incident (harm realized) or complementary information (such as policy responses or mitigation). Hence, the classification is AI Hazard.

The sky's new talon: A 200-kilogram lethal blow

2026-03-14
Yeniçağ Gazetesi
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV uses AI-based navigation and visual recognition to identify and target enemy positions, which implies autonomous decision-making capabilities. The description indicates its use in military operations to deliver lethal strikes, which inherently carries the risk of harm to persons and property. Although the article does not report a specific incident of harm, the deployment and use of such AI-enabled lethal autonomous weapons systems pose a credible risk of causing injury or death, as well as other harms. Therefore, this event represents an AI Hazard due to the plausible future harm from the use of this AI system in lethal military operations.

K2 Kamikaze UAV's range reassures allies and instills fear in adversaries - ensonhaber.com

2026-03-15
En Son Haber
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is explicitly described as an AI-enabled autonomous weapon system with capabilities such as AI-supported swarm flight and autonomous targeting. Although no actual harm or incident is reported, the system's design and intended use as an armed autonomous drone inherently carry a credible risk of causing harm in the future. According to the OECD framework, the development and deployment of AI-enabled autonomous weapons with lethal capabilities constitute an AI Hazard because they could plausibly lead to injury, disruption, or other significant harms. Since no realized harm is described, it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's capabilities and potential military impact.

Turkish technology that will keep Israel awake at night: It can reach as far as Tel Aviv

2026-03-15
Haberler
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI system due to its AI-supported swarm flight and autonomous navigation capabilities. Its development and deployment as a weaponized drone with autonomous targeting and operational features pose a plausible risk of harm, including injury or harm to persons and disruption of critical infrastructure, given its military application and long range. Although the article does not report any actual harm or incidents caused by the system yet, the nature of the system and its intended use clearly indicate a credible potential for significant harm. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as no realized harm is described but plausible future harm is inherent in the system's capabilities and deployment.

K2 Kamikaze UAV Unveiled; Its 2,000 km Range Draws Attention

2026-03-15
Trabzonspor Haberleri
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI system due to its AI-supported swarm flight technology and autonomous operational capabilities. Although the article does not report any realized harm or incident, the nature of the system as an autonomous weapon platform with long-range strike ability and advanced targeting means it could plausibly lead to injury, harm, or disruption if used in conflict or misused. Therefore, this event fits the definition of an AI Hazard, as it describes the development and introduction of an AI-enabled system that could plausibly lead to significant harm in the future.

Baykar's K2 kamikaze UAV takes the stage: The largest in its class, with AI-supported swarm autonomy - Hür Haber

2026-03-15
hurhaber.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as integrated into kamikaze UAVs with autonomous swarm and targeting capabilities. Although no harm or misuse is reported, the nature of the system—AI-enabled autonomous lethal drones—poses a credible risk of causing injury, property damage, or human rights violations in future use. The article focuses on the development and testing phase, with no indication of actual harm, so it does not meet the criteria for an AI Incident. Instead, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harm in the future.

Turkey's 2,000-kilometer-range UAVs make their debut! They will shift the balance in the field - Sözcü Gazetesi

2026-03-16
Sözcü Gazetesi
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze drone is explicitly described as AI-enabled with swarm autonomy and advanced navigation algorithms, qualifying it as an AI system. The event concerns its development and deployment, not an incident of realized harm. However, the nature of the system as an autonomous lethal weapon with swarm capabilities and long range implies a credible risk of causing injury, harm to communities, or disruption in warfare contexts. Since no actual harm is reported yet, but plausible future harm is evident, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Could Baykar's K2 be a new paradigm?

2026-03-16
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The K2 drone is an AI system with autonomous capabilities that could plausibly lead to significant harms such as injury, disruption, or violations of rights if used in conflict. The article emphasizes its advanced AI features and strategic military role, indicating a credible risk of future harm. However, no actual incident or harm is described as having occurred. The discussion about media regulation and digital platform asymmetry is unrelated to AI harms or incidents. Hence, the event is best classified as an AI Hazard due to the plausible future harm from the AI system's deployment and capabilities.

Greek press turns its attention to Baykar: Turkey is expanding its influence in European defense - ensonhaber.com

2026-03-17
En Son Haber
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as having advanced autonomous and AI-driven capabilities (e.g., AI-supported swarm autonomy, autonomous navigation, and targeting). These systems are actively used in military conflicts (e.g., Ukraine war), where their deployment has direct consequences on human safety and security, fulfilling the criteria for harm to communities and injury. The article reports on realized use and impact rather than hypothetical risks, so it is an AI Incident rather than a hazard or complementary information. The geopolitical and security concerns further underscore the significance of the harm caused by these AI systems.

Türkiye Reveals Baykar K2 Loitering Munition for Long-Range Swarm Strike Missions

2026-03-14
Army Recognition
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-assisted functions integrated into the K2 UAV for autonomous navigation, target identification, and coordinated swarm operations. These features qualify the system as an AI system. The system is a weaponized autonomous platform capable of lethal strikes, which inherently carries risks of injury or harm to people and damage to property and communities. Although no harm has yet occurred or been reported, the demonstrated autonomous swarm strike capability and long-range operational profile indicate a plausible risk of future harm. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event and its potential impact.

Turkey's Baykar tests swarm behavior of its K2 one-way attack drone

2026-03-16
Defense News
Why's our monitor labelling this an incident or hazard?
The K2 drone system clearly involves AI systems for autonomous swarm behavior and navigation. The event concerns the use and development of this AI system in a military context with lethal capabilities. Although no harm has been reported, the nature of the system and its intended use could plausibly lead to significant harm, including injury, death, and disruption. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident if the system is deployed in conflict.

Türkiye debuts K2 Kamikaze OWA UAV

2026-03-16
Janes.com
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI-enabled autonomous weapon system with offensive capabilities, demonstrated through swarming flight trials using AI for navigation, formation, and targeting. The development and testing of such an AI-powered autonomous weapon system with lethal payloads inherently pose plausible risks of harm, including injury or death, violations of human rights, and disruption in conflict scenarios. Although no actual harm is reported in the article, the nature of the system and its intended use clearly indicate a credible potential for future harm. Therefore, this event qualifies as an AI Hazard due to the plausible risk of AI-enabled autonomous lethal operations leading to harm.

Baykar's K2 Kamikaze UAV completes autonomous swarm flight tests

2026-03-16
Air Force Technology
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for autonomous swarm flight, navigation, target location, and precision strikes in a kamikaze UAV designed for military use. Although no harm has yet occurred or been reported, the nature of the system—an autonomous weapon capable of coordinated attacks—poses a credible risk of injury, disruption, or other harms if deployed in conflict. The event focuses on the development and testing of this AI-enabled weapon system, which aligns with the definition of an AI Hazard due to plausible future harm. There is no indication of realized harm or incident, so it is not an AI Incident. It is not merely complementary information or unrelated, as the AI system and its potential for harm are central to the report.

While the war continues, a new 'Kamikaze UAV K2' move from Turkey! Bayraktar announced it, calling it a 'surprise.'

2026-03-14
Haberler.com
Why's our monitor labelling this an incident or hazard?
The K2 Kamikaze UAV is an AI system, as it uses artificial intelligence for autonomous navigation, formation flying, and target engagement. Its deployment in military conflicts and export to many countries imply a high potential for causing harm, including injury or death, damage to property, and disruption of critical infrastructure. The article describes the realized use of similar kamikaze drones in attacks that caused significant damage, and the K2 represents an advancement in this class of weaponry whose capabilities suggest a continuation or escalation of such harms. Therefore, the development, use, and proliferation of the K2 UAV directly or indirectly lead to harms consistent with AI Incidents, and this event qualifies as an AI Incident rather than merely a hazard or complementary information.

Turkish technology that will keep Israel awake: It can reach as far as Tel Aviv.

2026-03-15
Haberler.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-supported autonomous UAV with swarm capabilities and advanced targeting, which qualifies as an AI system. The article does not report any realized harm or incident but highlights the UAV's military capabilities and potential operational reach, implying plausible future harm if deployed in conflict. The development and introduction of such AI-enabled autonomous weapons with lethal capabilities constitute an AI Hazard because they could plausibly lead to injury, disruption, or other harms. There is no indication of an actual incident or complementary information about responses or mitigation, so it is not an AI Incident or Complementary Information.

Baykar Unveils K2 Long-Range Kamikaze Drone with AI-Driven Swarm Capability - Quwa

2026-03-15
Quwa - Pakistan Defence News Coverage & Analysis
Why's our monitor labelling this an incident or hazard?
The K2 drone is an AI system, as it uses AI for autonomous navigation, targeting, and swarm coordination. The event concerns the development and unveiling of a weaponized AI system capable of autonomous lethal action. While no harm has yet occurred, the AI system's intended use as a kamikaze drone with autonomous strike capability could plausibly lead to significant harm, including injury or death and disruption of security. This fits the definition of an AI Hazard, as the event describes a circumstance where AI system development and use could plausibly lead to an AI Incident. There is no indication of realized harm or incident, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the unveiling of a potentially harmful AI system.

New kamikaze drone can strike targets 1,234 miles away with a 441-pound warhead

2026-03-16
Yahoo Tech
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system integrated into a lethal autonomous weapon (the K2 drone) with capabilities for autonomous swarm operation, target identification, and GPS-independent navigation. The drone's design for deep strikes with a large warhead and its autonomous operation without direct human control present a credible risk of causing significant harm, including injury, death, and destruction of critical infrastructure. Although no actual harm or incident is reported, the mere development and unveiling of such a system with these capabilities constitute a plausible future risk of AI-related harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Baykar aces AI-enabled drone swarm trials with five new 'K2 ...

2026-03-17
Flight Global
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and autonomy algorithms in the drone swarm, which qualifies as an AI system. The event is about successful trials and development of an armed autonomous drone swarm, which could plausibly lead to harms such as injury, violation of rights, or harm to communities if deployed in conflict. Since no harm has yet occurred, this fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the AI system's capabilities and potential implications, not on responses or governance. Therefore, the event is best classified as an AI Hazard.

Türkiye's Baykar introduces new long-range attack K2 drone

2026-03-14
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as having AI-based flight and targeting systems, including autonomous swarm flight and AI vision. Although no actual harm or incident is reported, the nature of the system—a long-range armed drone capable of autonomous attack—presents a credible risk of causing injury, harm to communities, or property damage in the future. Therefore, this qualifies as an AI Hazard due to the plausible future harm from the development and potential use of this AI-enabled weapon system.

Baykar unveils new K2 kamikaze drone in video set to 'Waltz No. 2'

2026-03-14
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The K2 drone is an AI system as it uses AI for autonomous flight, navigation, targeting, and engagement. Its development and intended use as a kamikaze drone with autonomous capabilities pose a credible risk of harm, including injury or death, disruption, and harm to communities. Although no specific harm has yet occurred or been reported in this article, the nature of the system and its intended military use plausibly could lead to AI incidents involving injury or harm. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the deployment and use of this AI-enabled autonomous weapon system.

Turkish Baykar develops heavy K2 kamikaze drone

2026-03-14
Defence Blog
Why's our monitor labelling this an incident or hazard?
The K2 drone is an AI system as it uses AI for autonomous navigation, target identification, and coordinated swarm attacks. Although no specific harm has yet occurred or been reported, the system's design and intended military use as a kamikaze drone capable of autonomous lethal strikes clearly pose a credible risk of causing injury, death, or destruction. This fits the definition of an AI Hazard, as the development and deployment of such AI-enabled autonomous weapons could plausibly lead to AI Incidents involving harm to persons and communities. The article focuses on the unveiling and capabilities of the system rather than any realized harm or incident, so it is not an AI Incident or Complementary Information.

Baykar unveils new AI-powered K2 kamikaze drone - Latest News

2026-03-14
Hurriyet Daily News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system integrated into the K2 kamikaze drone for autonomous flight and targeting. Although no incident of harm is reported, the nature of the system—a lethal autonomous weapon—means it could plausibly lead to injury or harm to people and disruption of security. The development and unveiling of such a system is a credible risk factor for future AI incidents involving harm. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Türkiye unveils AI-powered kamikaze drone with 2,000 km range

2026-03-14
TRT World
Why's our monitor labelling this an incident or hazard?
The K2 UAV is explicitly described as using advanced AI for autonomous flight and targeting, qualifying it as an AI system. Its design as a kamikaze drone with lethal payload and autonomous swarm capabilities indicates a high potential for causing harm if used in conflict. Since the event is about the unveiling and not about an incident where harm has already occurred, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to injury, harm to communities, or property damage. The event does not describe an actual incident of harm, so it is not an AI Incident. It is not merely complementary information or unrelated, as the AI system's development and intended use pose a credible future risk.

Türkiye's Baykar unveils the new long-range K2 attack drone | News.az

2026-03-14
News.az
Why's our monitor labelling this an incident or hazard?
The K2 drone is explicitly described as using AI for autonomous navigation, targeting, and engagement, fulfilling the definition of an AI system. Its military application as an attack drone with autonomous swarm capabilities and lethal payloads inherently carries a plausible risk of causing harm (injury, death, or other harms) if deployed or misused. Since the article only reports the unveiling and capabilities without any actual harm or incident occurring, it does not meet the criteria for an AI Incident. It is not merely complementary information because the focus is on the new AI-enabled weapon system's potential impact. Hence, the event is best classified as an AI Hazard due to the plausible future harm from the AI system's use in autonomous lethal operations.

Baykar unveils new K2 kamikaze drone with AI, 2,000 km range

2026-03-14
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based flight and targeting systems in a kamikaze drone capable of autonomous swarm operations and lethal payload delivery. Although no incident of harm is reported, the nature of the system as an autonomous weapon with lethal capability inherently carries a credible risk of causing injury or death. The development and unveiling of such a system thus constitute an AI Hazard under the framework, as it could plausibly lead to AI Incidents involving harm to persons or communities. There is no indication of realized harm yet, so it is not an AI Incident. It is not merely complementary information or unrelated, as the AI system's development and intended use directly relate to potential future harm.