Turkey Deploys AI-Enabled Akıncı Armed Drone to Military Service

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Turkey has delivered the AI-enabled Akıncı Taarruzi İnsansız Hava Aracı (TİHA, attack unmanned aerial vehicle), an advanced armed drone developed by Baykar, to its armed forces. The drone's autonomous capabilities and military deployment raise concerns about potential future harm in conflict zones, highlighting the risks associated with AI-powered weapon systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes an AI-enabled armed unmanned aerial vehicle system (Akıncı TİHA) with advanced AI features used by the Turkish military. While no harm or incident is reported, the nature of the system as an armed autonomous or semi-autonomous drone implies credible potential for harm (injury, property damage, or rights violations) in future operations. The article focuses on the system's capabilities and deployment, not on any incident or harm caused. Hence, it fits the definition of an AI Hazard, as the development and use of such AI systems could plausibly lead to AI Incidents in the future.[AI generated]
AI principles
Accountability, Safety, Respect of human rights, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death), Physical (injury), Human or fundamental rights, Public interest

Severity
AI hazard

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

What does Akıncı TİHA stand for, and what are its features? What are the Akıncı TİHA's cost, range, and speed?

2021-08-30
En Son Haber
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI-enabled armed unmanned aerial vehicle system (Akıncı TİHA) with advanced AI features used by the Turkish military. While no harm or incident is reported, the nature of the system as an armed autonomous or semi-autonomous drone implies credible potential for harm (injury, property damage, or rights violations) in future operations. The article focuses on the system's capabilities and deployment, not on any incident or harm caused. Hence, it fits the definition of an AI Hazard, as the development and use of such AI systems could plausibly lead to AI Incidents in the future.
Important statements from President Erdoğan

2021-08-29
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The Baykar Akıncı is an advanced attack drone, which is an AI-enabled system due to its autonomous or semi-autonomous operational capabilities. The event involves the deployment of this AI system for military use. While no direct harm or incident is reported in the article, the introduction of such advanced autonomous weapon systems plausibly leads to potential future harms, such as injury, disruption, or violations related to armed conflict. Therefore, this event qualifies as an AI Hazard because the development and deployment of AI-powered attack drones could plausibly lead to AI Incidents in the future.
Akıncı TİHA counts down the days to delivery

2021-08-26
En Son Haber
Why's our monitor labelling this an incident or hazard?
The Bayraktar Akıncı TİHA is an AI system due to its autonomous flight control and autopilot systems. The article focuses on its delivery to security forces and its military capabilities, including carrying munitions. No actual harm or incident is reported, so it is not an AI Incident. However, the deployment of an armed autonomous drone system inherently carries credible risks of harm (injury, disruption, or rights violations) in the future. Thus, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, even if no harm has yet occurred.
President Erdoğan signed the Bayraktar Akıncı TİHA

2021-08-29
En Son Haber
Why's our monitor labelling this an incident or hazard?
The Bayraktar Akıncı TİHA is an AI-enabled armed drone system entering active military use. Although no harm or incident is reported, the nature of the system as an autonomous weapon platform implies a plausible risk of causing harm in the future. The event is about the deployment and official acceptance of this AI system, not about any realized harm or incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
What is the Akıncı TİHA? What are its features? Akıncı TİHA range - Yeni Akit

2021-08-28
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in the Akıncı TİHA for decision-making and mission planning, confirming the presence of an AI system. The system is an armed drone capable of autonomous or semi-autonomous operations, which inherently carries risks of harm to people, property, or communities if misused or malfunctioning. However, the article does not report any actual incidents, malfunctions, or harms caused by the AI system. It mainly provides an overview of the system's capabilities and upcoming deployment. Given the plausible future harm from such AI-enabled weapon systems, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential risks are central to the article.
'Turkey to add advanced drones to its fleet'

2021-08-27
Sözcü Gazetesi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions advanced armed drones (TİHA) developed with AI capabilities being added to the Turkish military fleet. Although no harm or incident is reported, the nature of these AI-enabled autonomous weapon systems implies a credible risk of future harm, such as injury or violations of human rights, if used in conflict. The event is about the delivery and deployment of these systems, not about an incident or harm that has already occurred. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
First flight by the AKINCI TİHA!

2021-08-29
Haber 7
Why's our monitor labelling this an incident or hazard?
The Bayraktar Akıncı TİHA is an armed unmanned aerial vehicle likely equipped with AI systems for autonomous operation. While the article does not report any harm or incident resulting from its use, the development and deployment of such AI-enabled autonomous weapon systems pose plausible risks of harm, including injury, violation of human rights, or harm to communities, if used improperly or malfunctioning. Therefore, this event qualifies as an AI Hazard due to the credible potential for future harm associated with the AI system's deployment in a military context.
Akıncı TİHA ready for duty

2021-08-29
Sabah
Why's our monitor labelling this an incident or hazard?
The Akıncı TİHA is an AI-enabled autonomous weapon system entering active service. Although no harm or incident is reported, the nature of the system as an armed drone with autonomous functions implies a credible risk of future harm, such as injury or violations of human rights. The article focuses on the system's readiness and capabilities, not on any realized harm or incident. Hence, it fits the definition of an AI Hazard, as the development and deployment of such systems could plausibly lead to AI Incidents in the future.
The awaited day has arrived for the Bayraktar AKINCI TİHA: it will be put at the TSK's disposal

2021-08-28
Sabah
Why's our monitor labelling this an incident or hazard?
The Bayraktar Akıncı TİHA is an AI-enabled armed drone system entering active military service. Although no incident or harm is reported, the deployment of such a system with autonomous AI features for combat roles plausibly could lead to harms including injury or death, disruption, or violations of rights. The article focuses on the system's capabilities and induction, not on any realized harm or incident. Hence, it fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident in the future due to its nature and intended use.
An exciting 'AKINCI' post from Selçuk Bayraktar: 2 days until you start your mission

2021-08-28
takvim.com.tr
Why's our monitor labelling this an incident or hazard?
The Akıncı TİHA is an armed drone with autonomous capabilities, implying the use of AI systems for navigation, targeting, and operation. Although no harm or incident is reported, the deployment of such AI-enabled weapon systems inherently carries risks of harm to people and communities. Since the article focuses on the upcoming activation of this AI system without any realized harm, it fits the definition of an AI Hazard, reflecting a credible potential for harm in the future.
When will the Akıncı be delivered? Akıncı TİHA features, range, and the date it enters the inventory! | GAZETE VATAN

2021-08-28
Vatan
Why's our monitor labelling this an incident or hazard?
The Akıncı TİHA is an AI-enabled armed drone system with autonomous features, which qualifies as an AI system. The article announces its delivery to security forces but does not describe any actual harm or incident caused by it yet. However, armed AI drones inherently carry plausible risks of causing injury, disruption, or rights violations in conflict scenarios. Since no harm has yet occurred but the system's deployment could plausibly lead to such harms, this fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the article.
Akıncı TİHA counts down: delivery in 3 days

2021-08-26
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The Akıncı TİHA is an AI system as it is an autonomous armed drone with software and electronics developed for complex decision-making and operational autonomy. The article focuses on its upcoming delivery to security forces, indicating its intended use in military operations involving lethal force. Although no incident or harm has occurred yet, the nature of the system and its capabilities plausibly could lead to injury, loss of life, or other significant harms. Therefore, this event qualifies as an AI Hazard because it involves the development and deployment of an AI-enabled autonomous weapon system with a credible potential to cause harm in the future.
Akıncı TİHA counts down: it enters the inventory in 3 days

2021-08-26
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The Akıncı TİHA is an armed drone with autonomous capabilities, which qualifies it as an AI system. The article describes its upcoming deployment but does not report any actual harm or incident caused by it yet. However, the nature of armed AI drones inherently carries plausible risks of harm, such as injury or violations of human rights, if used in conflict or malfunctioning. Since the event concerns the delivery and potential future use of this AI system, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.
In front of Erdoğan, he recounted how attempts were made to block them

2021-08-29
Haber Sitesi ODATV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (armed UAV) being developed and entering inventory, which is a technology with significant potential for harm. However, no actual harm or incident is reported in the article. Therefore, this constitutes an AI Hazard, as the development and deployment of such AI-enabled military drones could plausibly lead to incidents involving injury, disruption, or rights violations in the future.
Akıncı TİHA ready for duty

2021-08-29
Bursada Bugün
Why's our monitor labelling this an incident or hazard?
The Akıncı TİHA is an AI-enabled armed drone system entering active military service. Although no harm or incident is reported, the AI system's role in autonomous or semi-autonomous operations with lethal capabilities means it could plausibly lead to injury, disruption, or other harms. The article focuses on the system's deployment and capabilities, not on any realized harm or incident. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future.
President Erdoğan is speaking - Agenda News | GÜNEŞ

2021-08-29
Günes
Why's our monitor labelling this an incident or hazard?
The AKINCI TİHA is an AI system (an attack unmanned aerial vehicle with autonomous capabilities). The event concerns its delivery and operational deployment, which could plausibly lead to harm given its military use. Since no actual harm or incident is reported, but the system's use in conflict zones implies a credible risk of harm, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses, mitigation, or legal/governance actions, so it is not Complementary Information. It is not unrelated as it clearly involves an AI system with potential for harm.
Akıncı SİHA in the inventory - Yeni Akit

2021-09-17
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The Akıncı drones are AI-enabled military systems with autonomous or semi-autonomous capabilities, including sophisticated AI features for electronic intelligence and operational tasks. Their deployment in military operations involves the use of AI systems that can directly influence physical environments and potentially cause harm. However, the article does not describe any realized harm or incidents resulting from their use; it focuses on their delivery and capabilities. Given the potential for these AI-enabled drones to be used in combat and cause harm, but with no harm reported yet, this event constitutes an AI Hazard rather than an AI Incident.
Heavily armed and dangerous: Akıncı SİHA in the inventory

2021-09-16
Memurlar.Net
Why's our monitor labelling this an incident or hazard?
The Akıncı UAV is an AI-enabled armed drone system with autonomous and advanced target-detection capabilities. The article describes how these drones have been used, or are intended to be used, in combat scenarios that inherently involve harm to persons and communities. Because the AI system's use in these drones directly contributes to lethal military actions, this event qualifies as an AI Incident under the framework rather than a hazard or complementary information: the system's use is treated as having directly led to harm (injury or death) in conflict zones.
Heavily armed and dangerous: Akıncı SİHA in the inventory

2021-09-16
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article clearly describes an AI system (Akıncı armed UAV) with advanced AI capabilities used in military operations. However, it does not describe any realized harm or incident resulting from its use. Instead, it analyzes the strategic significance and potential future impacts of the system. Therefore, the event represents a plausible future risk scenario where the AI system's deployment could lead to harm, qualifying it as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it involves AI-enabled military technology with potential for harm.
A heavy, armed threat! 'This is a critical matter for Turkey'

2021-09-16
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-enabled armed drones (Akıncı and related systems) that are operational and have been used in combat, causing direct harm through targeted strikes and military engagements. The AI systems' autonomous or semi-autonomous capabilities in target detection, decision support, and weapon deployment are central to their function. This meets the definition of an AI Incident because the AI system's use has directly led to harm to people and communities in conflict areas. The article does not merely discuss potential future risks or general AI developments but focuses on deployed AI systems causing real-world harm.