Baykar Unveils AI-Enabled Autonomous Loitering Munition 'Mızrak'

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Turkish defense company Baykar has unveiled the Mızrak, an AI-supported autonomous loitering munition with a range exceeding 1,000 km and significant lethal capabilities. Debuting at SAHA 2026, the system's autonomous targeting and operational flexibility raise concerns about future risks of harm from AI-enabled military weapons.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as having AI-supported autonomous capabilities in a military weapon system. Although no incident of harm is reported, the nature of the system as an autonomous lethal munition with advanced AI features implies a credible risk of causing injury or harm in future use. The development and public unveiling of such a system fit the definition of an AI Hazard, as it could plausibly lead to AI Incidents involving harm to persons or communities in conflict. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's capabilities and potential impact.[AI generated]
AI principles
Respect of human rights; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death); Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard

New defense move from Baykar: 'Mızrak' debuts at SAHA 2026 - ensonhaber.com

2026-04-30
En Son Haber
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as having AI-supported autonomous capabilities in a military weapon system. Although no incident of harm is reported, the nature of the system as an autonomous lethal munition with advanced AI features implies a credible risk of causing injury or harm in future use. The development and public unveiling of such a system fit the definition of an AI Hazard, as it could plausibly lead to AI Incidents involving harm to persons or communities in conflict. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's capabilities and potential impact.

Mızrak Smart Loitering Munition System from Baykar

2026-04-30
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-supported autonomous capabilities in a lethal loitering munition system, which is an AI system by definition. The system's intended use in military operations with explosive warheads poses a credible risk of harm to people and communities, fulfilling the criteria for an AI Hazard. There is no indication that the system has caused any harm yet, so it is not an AI Incident. The article is not merely complementary information because it focuses on the system's development and potential impact rather than updates or responses to past incidents. Hence, the classification as AI Hazard is appropriate.

Mızrak Smart Loitering Munition System from Baykar

2026-04-30
Haberler
Why's our monitor labelling this an incident or hazard?
The event involves the development and public unveiling of an AI system with autonomous lethal capabilities, which directly relates to the potential for harm through its use in military operations. The system's autonomous targeting and operational features mean it could directly cause injury or harm to persons or groups once deployed. However, the article does not describe a specific harm event, so no harm has yet been realized. The system's intended use and capabilities therefore constitute a credible risk of future harm, and the event qualifies as an AI Hazard rather than an AI Incident.

MIZRAK Smart Loitering Munition to be showcased for the first time at SAHA 2026

2026-04-30
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-supported autonomous capabilities in a weapon system designed for long-range, autonomous strike and surveillance missions. Although no actual harm or incident is reported, the nature of the system as an autonomous lethal weapon with AI-enabled target acquisition and operation in contested environments implies a credible risk of causing injury, death, or other harms in the future. The development and public showcasing of such a system fits the definition of an AI Hazard, as it could plausibly lead to AI Incidents involving harm to persons or communities. There is no indication of a realized incident or harm yet, so it is not an AI Incident. It is not merely complementary information or unrelated news, as the AI system's development and potential use are central and pose plausible future harm.

Mızrak Smart Loitering Munition System from Baykar: On display for the first time at SAHA 2026 | NTV Haber

2026-04-30
NTV
Why's our monitor labelling this an incident or hazard?
The event involves the development and unveiling of an AI-enabled autonomous weapon system with lethal capabilities. Such systems have a high potential for causing harm, including injury or death, and disruption in conflict zones. The AI system's autonomous operation and targeting functions directly relate to potential harm. Although the article does not report an actual incident of harm, the deployment and proliferation of such AI-enabled autonomous weapons plausibly pose significant risks of harm in the future. Therefore, this event qualifies as an AI Hazard under the framework, as it plausibly could lead to AI Incidents involving injury or harm to persons or groups.

New AI-supported munition: Mızrak at SAHA 2026

2026-04-30
Memurlar.Net
Why's our monitor labelling this an incident or hazard?
The Mızrak system is an AI system, as it uses AI-supported autopilot and guidance for autonomous operation. Its role as a loitering munition with autonomous target detection and engagement capabilities means its deployment could directly cause physical injury or death. The article presents the system as operational and ready for use, but reports no realized harm. The system's use in autonomous lethal weaponry therefore poses a credible risk of injury or harm to persons under harm category (a), and the event qualifies as an AI Hazard rather than an AI Incident.

It finds its prey and strikes instantly: Baykar's next-generation hunter revealed for the first time | Gündem Haberleri

2026-04-30
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The described system is an AI system due to its AI-supported autonomous navigation and targeting capabilities. It is intended for military use with lethal effects, which inherently carries a plausible risk of causing harm (injury or death) and disruption. Since the event is about the system's development and unveiling without any reported harm yet, it constitutes an AI Hazard rather than an Incident. The potential for significant harm from autonomous weapons systems is well recognized, making this a credible future risk.

MIZRAK Smart Loitering Munition to be showcased for the first time at SAHA 2026

2026-04-30
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The MIZRAK system explicitly incorporates AI for autonomous navigation and targeting in a military weapon context. Its autonomous lethal capabilities mean it can directly cause injury or death, fulfilling the criteria for potential harm. Although no specific incident of harm is reported yet, the event highlights the system's capabilities and its first public display, indicating the potential for future harm. According to the OECD framework, the mere development and offering for sale or display of AI-enabled autonomous weapons with high potential for misuse or harm qualifies as an AI Hazard. Since no actual harm has been reported yet, this is not an AI Incident but an AI Hazard.

New AI-supported munition: Mızrak at SAHA 2026

2026-04-30
TRT haber
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system integrated into a lethal autonomous munition with capabilities for autonomous operation, target detection, and engagement. Although no actual harm or incident is reported, the development and deployment of such AI-enabled autonomous weapons inherently carry a credible risk of causing injury, death, or broader harm to communities and environments. The event is about the system's development and upcoming public display, not about an incident or harm that has already occurred. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harm in the future.

MIZRAK took the stage and hit its target with pinpoint accuracy

2026-04-30
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The MIZRAK UAV is an AI-supported kamikaze drone with autonomous targeting and operational capabilities. While no harm or incident is reported, the development and deployment of such AI-enabled autonomous weapons systems inherently carry a credible risk of causing harm in the future, including injury, violation of rights, or harm to communities. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident, given the nature of the system and its intended use in military operations.

It will strike the enemy like a 'Mızrak' (spear)! Range exceeds 1,000 km and it can stay airborne for 7 hours

2026-04-30
Türkiye
Why's our monitor labelling this an incident or hazard?
The MIZRAK system is explicitly described as AI-supported with autonomous navigation and targeting capabilities, qualifying it as an AI system. Its development and intended use as a lethal autonomous weapon system directly relate to potential harm to people and communities (harm categories a and d). Although no specific incident of harm is reported, the article presents the system's capabilities and deployment readiness, which plausibly could lead to AI incidents involving injury or death and disruption in conflict zones. Therefore, this event is best classified as an AI Hazard, reflecting the credible risk posed by the development and potential use of this AI-enabled autonomous weapon system.

Baykar Mızrak unveiled: smart munition with a 1,000 km range!

2026-04-30
Teknolojioku
Why's our monitor labelling this an incident or hazard?
The Mızrak system is an AI-enabled autonomous weapon system designed for military use, capable of independently identifying and engaging targets. Its development and deployment represent a clear AI Hazard because such autonomous weapons have a high potential to cause harm, including injury or death, disruption of security, and escalation of conflicts. Although the article does not report any actual harm or incidents caused by the system yet, the nature of the AI system and its intended use plausibly could lead to AI Incidents involving injury, loss of life, or other serious harms. Therefore, this event is best classified as an AI Hazard.

Mızrak Smart Loitering Munition System from Baykar

2026-04-30
Elbistanın Sesi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as having AI-supported autonomous operation and guidance capabilities. The system is a military loitering munition with lethal capabilities, which by its nature can cause injury or death. While no specific harm has yet been reported, the development and potential use of such AI-enabled autonomous weapons plausibly could lead to AI incidents involving harm to persons or communities. Therefore, this qualifies as an AI Hazard due to the credible risk of future harm from the AI system's deployment and use in military operations.

Game-changing move from Baykar! 'Mızrak' makes its public debut

2026-04-30
bigpara.hurriyet.com.tr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as having AI-supported autonomous capabilities used in a military weapon system. The use of AI in autonomous lethal weapons systems directly relates to potential harm, including injury or death, and disruption in critical infrastructure or conflict zones. Although the article does not report an actual incident of harm, the deployment and exhibition of such AI-enabled autonomous weapons constitute a plausible risk of significant harm in the future. Therefore, this event qualifies as an AI Hazard due to the credible potential for harm arising from the AI system's use in autonomous lethal operations.

Defense revolution from Baykar! Mızrak hits targets from 1,000 km away!

2026-04-30
Bolu Olay
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system integrated into a lethal autonomous munition capable of independent target engagement. Although no actual harm or incident is described, the development and deployment of such AI-enabled autonomous weapons inherently carry a credible risk of causing harm, meeting the definition of an AI Hazard. Since no realized harm or incident is reported, it cannot be classified as an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's capabilities and potential military impact, which plausibly could lead to harm.

Range exceeds 1,000 kilometers! 'Mızrak Smart Loitering Munition' will be on display at SAHA 2026

2026-04-30
Mynet Haber
Why's our monitor labelling this an incident or hazard?
The Mızrak system is an AI-enabled autonomous weapon system with lethal capabilities, which directly relates to the development and use of AI in military applications. Such systems inherently carry significant risks of harm, including injury or death to persons, and potential violations of human rights and international law. The article highlights the system's autonomous operational capabilities and lethal payload, indicating a credible risk of harm. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to AI Incidents involving harm, even though no specific harm has yet been reported.

MIZRAK Smart Loitering Munition from Baykar! To be showcased for the first time at SAHA 2026

2026-04-30
Güneş
Why's our monitor labelling this an incident or hazard?
The MIZRAK system is an AI-enabled autonomous weapon system with capabilities that could directly lead to injury or harm to persons in combat scenarios. The article focuses on the system's features and capabilities but does not describe any actual harm or incident resulting from its use. Therefore, it represents a plausible future risk of harm due to the nature of the AI system's intended use in autonomous lethal operations. According to the definitions, the development and deployment of AI-enabled autonomous weapons with lethal capabilities constitute an AI Hazard because they could plausibly lead to AI Incidents involving injury or harm.