Turkey Deploys AI-Enabled Autonomous Strike Drone ALPAGU with the KERKES Navigation System


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Turkey has successfully tested and plans to deploy the ALPAGU armed drone, featuring AI-enabled autonomous navigation and targeting via the KERKES system, which operates without GPS. While no harm has occurred yet, the introduction of these AI-powered autonomous weapons poses credible risks of future harm or misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems used in military UAVs with autonomous navigation and targeting enabled by AI. While no harm or incident has occurred yet, the deployment of armed AI-enabled drones with autonomous capabilities poses a plausible risk of harm, such as injury, violation of rights, or harm to communities, if misused or malfunctioning. Therefore, this qualifies as an AI Hazard due to the credible potential for future harm from the use of these AI systems in military contexts.[AI generated]
AI principles
Accountability; Safety; Robustness & digital security; Respect of human rights; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Digital security

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest; Psychological; Economic/Property; Reputational

Severity
AI hazard

Business function
Research and development; ICT management and information security; Monitoring and quality control; Compliance and justice

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard

Only a few countries in the world have it! The ALPAGU UAV passed its test successfully

2022-07-01
Milliyet
The ALPAGU UAV passed its test successfully

2022-07-01
Haber7
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI use in the KERKES system for autonomous navigation without GPS, which is a clear AI system. The ALPAGU UAV is an armed drone system that has passed tests and will enter service. While no harm or incident is reported, the deployment of armed UAVs with AI navigation capabilities poses a credible risk of harm (e.g., misuse, accidents, or unintended consequences). Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm in the future. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated.
Strike UAV ALPAGU is ready for duty

2022-07-01
TRT Haber
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (KERKES uses AI for navigation without GPS) integrated into military UAVs capable of autonomous strike capabilities (ALPAGU). While these systems could plausibly lead to harms such as injury, violation of rights, or disruption due to autonomous weapon use, the article only reports successful tests and upcoming deployment without any realized harm or malfunction. Therefore, this qualifies as an AI Hazard, reflecting a credible potential for future harm from the deployment and use of these AI-enabled autonomous strike UAVs.
A fresh breath for the TSK! ALPAGU is ready for duty

2022-07-01
Akşam
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used in the KERKES system for autonomous navigation without GPS, which is critical for UAV operation. The ALPAGU is an armed UAV capable of autonomous targeting. Although no harm has yet occurred, the deployment of such AI-enabled weapon systems inherently carries a plausible risk of causing injury, disruption, or other harms. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future. There is no indication of realized harm or incident yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the introduction of AI-enabled military systems with potential for harm.
A 'kamikaze' move from Türkiye! One of only a few in the world: it destroys instantly

2022-07-01
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The ALPAGU UAV and KERKES system involve AI systems for autonomous navigation and target engagement in military drones. Their development and intended use in lethal operations could plausibly lead to injury or death, constituting harm under the AI Incident definition. However, the article reports successful tests and planned deployment without describing any actual harm or incidents yet. Therefore, this event is best classified as an AI Hazard, reflecting the credible potential for harm from these AI-enabled autonomous weapons.