AI-Guided KEMANKEŞ 1 Missile Successfully Tested by Baykar from the AKINCI Drone

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Baykar's AI-powered KEMANKEŞ 1 mini cruise missile, integrated with the Bayraktar AKINCI drone, successfully destroyed aerial targets in recent tests. The AI system autonomously identified and struck moving targets with high precision, demonstrating the operational capability and potential risks of lethal autonomous weapon systems. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI-supported targeting and autonomous flight in the KEMANKEŞ 1 missile integrated with the Bayraktar AKINCI UAV. The missile autonomously identifies and destroys air targets, which involves AI system use leading to harm (destruction of targets). Although the article describes a test, the system is operational and intended for combat use, implying realized harm potential. Autonomous weapon systems with AI that can independently engage targets are recognized as AI systems whose use can cause injury or harm to persons, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a concrete demonstration of AI-enabled weaponry capable of harm. [AI generated]
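The rationales above and below all apply the same decision rule: does the event involve an AI system, and if so, has harm been realized or is it only plausible in future use? That rule can be sketched as follows. This is an illustrative sketch of the stated criteria only, not the monitor's actual implementation; the `Event` record and field names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Event:
    """Hypothetical event record; fields mirror the criteria cited in the rationales."""
    involves_ai_system: bool      # an AI system is central to the event
    harm_realized: bool           # injury, death, property damage, or rights violation occurred
    plausible_future_harm: bool   # credible risk the system could cause such harm


def classify(event: Event) -> str:
    """Sketch of the incident/hazard decision rule described in the rationales."""
    if not event.involves_ai_system:
        return "Unrelated"
    if event.harm_realized:
        return "AI Incident"
    if event.plausible_future_harm:
        return "AI Hazard"
    return "Complementary Information"


# A live-fire test that destroyed targets, read as realized harm:
print(classify(Event(involves_ai_system=True, harm_realized=True,
                     plausible_future_harm=True)))  # → AI Incident
```

The divergent labels across the articles below, where the same test is called an Incident by some rationales and a Hazard by others, come down to the `harm_realized` judgment: whether destroying targets in a controlled test counts as realized harm or only as demonstrated harm potential.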
AI principles
Accountability; Safety; Respect of human rights; Transparency & explainability; Democracy & human autonomy; Robustness & digital security; Human wellbeing

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Economic/Property; Public interest; Human or fundamental rights

Severity
AI incident

Business function
Research and development

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard

Direct hit from AKINCI: air targets struck with KEMANKEŞ 1

2025-06-28
En Son Haber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-supported targeting and autonomous flight in the KEMANKEŞ 1 missile integrated with the Bayraktar AKINCI UAV. The missile autonomously identifies and destroys air targets, which involves AI system use leading to harm (destruction of targets). Although the article describes a test, the system is operational and intended for combat use, implying realized harm potential. Autonomous weapon systems with AI that can independently engage targets are recognized as AI systems whose use can cause injury or harm to persons, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a concrete demonstration of AI-enabled weaponry capable of harm.
Direct hit from the KEMANKEŞ 1 missile launched from Bayraktar Akıncı!

2025-06-28
bigpara.hurriyet.com.tr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-based autonomous missile system that has been tested successfully to destroy air targets with high precision. The use of AI in autonomous targeting and flight control directly leads to harm through the missile's destructive capability. The article confirms the system's operational status and successful test firings, indicating realized harm potential inherent in its deployment. This meets the criteria for an AI Incident as the AI system's use has directly led to harm potential and is a weapon system with lethal effects. Although the article focuses on testing and export success, the AI system's role in enabling autonomous lethal strikes is central and constitutes an AI Incident rather than a mere hazard or complementary information.
Innovation from Baykar: KEMANKEŞ 1 Missile Successfully Tested

2025-06-28
Haberler
Why's our monitor labelling this an incident or hazard?
The KEMANKEŞ 1 missile is explicitly described as an AI-based system with autonomous flight and targeting capabilities, which was successfully tested in a scenario involving destruction of air targets. The use of AI in autonomous weaponry that can kill or destroy targets constitutes a direct link to potential harm (injury or death) to persons or groups, fulfilling the criteria for an AI Incident. The article reports the successful test (use) of this AI system in a lethal context, so it is not merely a hazard or complementary information but an AI Incident due to the realized capability and use of AI in a weapon system with lethal effects.
KEMANKEŞ 1 launched from Bayraktar AKINCI struck air targets with pinpoint accuracy

2025-06-28
Hürriyet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-based autonomous missile with target recognition and autonomous flight capabilities. The missile was tested successfully, hitting targets precisely, but no actual harm or incident occurred during the test. However, the nature of the AI system—a lethal autonomous weapon—implies a credible risk of future harm, including injury or death, if deployed in combat. The article focuses on the development and testing phase, not on an incident causing harm. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future.
A show of strength from AKINCI! It struck air targets with pinpoint accuracy using KEMANKEŞ 1

2025-06-28
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-supported targeting and autonomous flight system of the KEMANKEŞ 1 missile) used in a military context to destroy targets. While no actual harm is reported in the test, the AI system's development and demonstrated capabilities plausibly lead to future harms such as injury, death, or escalation of conflict. The article focuses on the successful test and capabilities rather than an incident causing harm, so it does not meet the criteria for an AI Incident. It is not merely complementary information because the main subject is the AI system's use and its potential impact. Hence, it is best classified as an AI Hazard.
The striking power of the national armed drones: KEMANKEŞ 1 completed its first mission

2025-06-28
NTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-based missile) used in a military context. The missile's AI enables autonomous targeting and engagement of aerial targets. While the test was successful and no harm to people or property is reported, the development and deployment of AI-enabled autonomous weapons systems inherently carry significant risks of harm, including injury, loss of life, and property damage in real operational use. However, since this report only describes a successful test without any actual harm or incident occurring, it does not qualify as an AI Incident. Instead, it represents a plausible risk of future harm due to the AI system's intended use in lethal autonomous weaponry, qualifying it as an AI Hazard.
KEMANKEŞ 1 launched from Bayraktar AKINCI struck air targets with pinpoint accuracy

2025-06-28
Milliyet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-supported autonomous targeting system integrated into a missile and drone platform. The use of AI in autonomous weapons capable of striking strategic targets with high precision directly relates to potential harm, including injury or death and disruption in conflict scenarios. Although the article reports a successful test rather than an incident causing harm, the development and deployment of such AI-enabled autonomous weapons constitute a credible and significant risk of harm. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the AI system's use in lethal autonomous weaponry.
KEMANKEŞ 1 launched from Bayraktar AKINCI struck air targets with pinpoint accuracy

2025-06-28
Milliyet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-based targeting and autonomous flight system of the KEMANKEŞ 1 missile) developed and tested for military use. However, the article only reports successful tests and demonstrations without any mention of harm, injury, violation of rights, or damage caused by the AI system. The AI system's use is described in a positive, developmental context without any realized harm. Given the nature of the system as an autonomous weapon, there is a plausible risk of future harm if deployed in combat, but the article does not describe any such incident or near miss. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the AI-enabled autonomous weapon system, but not an AI Incident or Complementary Information.
Launched from Bayraktar AKINCI! Direct hits on air targets from KEMANKEŞ 1

2025-06-28
Haber 7
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-based autonomous missile with target recognition and precision strike capabilities. While the test was successful and no harm occurred during the test, the system's intended use as a lethal weapon means it could plausibly lead to injury, death, and destruction in real combat scenarios. The article focuses on the development and successful testing of this AI-enabled weapon system, highlighting its strategic impact and export potential. Since no actual harm has occurred yet but the potential for significant harm is credible and inherent in the system's design and use, the event is best classified as an AI Hazard rather than an AI Incident.
KEMANKEŞ 1 launched from Bayraktar AKINCI struck air targets with pinpoint accuracy | Video

2025-06-28
Sabah
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-based autonomous missile with target recognition and autonomous flight capabilities. The use of this AI system in a military context to destroy air targets directly relates to harm (a) injury or harm to persons or groups, and (d) harm to property and communities, as these weapons are designed for lethal effects. Since the article reports a successful test firing with confirmed destruction of targets, the harm is realized in the context of military operations or testing. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system causing harm through autonomous weapon use.
Kemankeş 1 launched from Bayraktar Akıncı struck air targets with pinpoint accuracy

2025-06-28
CNN Türk
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-based autonomous missile with target recognition and autonomous flight capabilities. The use of such AI-enabled autonomous weapons systems inherently involves direct potential for harm to persons and property, fulfilling the criteria for an AI Incident. The article reports successful tests where the missile hit air targets with precision, indicating realized capability to cause harm. Therefore, this is an AI Incident due to the AI system's use leading to harm potential in military operations.
First Kemankeş 1 launch from Bayraktar AKINCI completed successfully

2025-06-29
takvim.com.tr
Why's our monitor labelling this an incident or hazard?
The Kemankeş 1 missile is explicitly described as AI-based and capable of autonomous flight and target engagement, indicating the involvement of an AI system. The event involves the use of this AI system in a military context where it successfully destroyed targets. This constitutes the use of an AI system leading directly to harm (destruction of targets, presumably military assets). Given the military weapon context and autonomous lethal action, this qualifies as an AI Incident due to direct harm caused by the AI system's operation.
Kemankeş 1 hit the bullseye! Tested in the skies

2025-06-28
Yeni Çağ Gazetesi
Why's our monitor labelling this an incident or hazard?
The Kemankeş 1 missile system clearly involves an AI system, as it uses AI-supported autonomous targeting and flight control. The event is a test and demonstration, with no reported injury, damage, or violation of rights occurring yet. However, given the nature of the system as a weapon capable of autonomous target destruction, it represents a credible potential for harm in future use. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to AI Incidents involving harm in military contexts, but no harm has yet occurred in this test event.
Fired from Bayraktar AKINCI for the first time: direct hit with KEMANKEŞ 1! The striking power of the national armed drones | Defence Industry News

2025-06-28
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the AI-based missile) in a military context where the AI system's use directly leads to the destruction of targets, which constitutes harm to property and potentially to communities or persons in real-world deployment. Although this is a test and no actual harm to people or property beyond the test targets is reported, the use of AI in lethal autonomous weaponry is considered a significant harm category due to the potential for injury or death. Since the event reports a successful firing and destruction of targets, it qualifies as an AI Incident involving harm caused by the AI system's use.
Direct hit from the KEMANKEŞ 1 missile launched from Bayraktar AKINCI | WATCH VIDEO

2025-06-28
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI-based targeting and autonomous flight control) in a military weapon system that has directly caused destruction of targets during a live test. The use of AI in autonomous lethal weapons raises significant concerns about potential harm, including injury or death, and broader implications for human rights and international law. Although this is a test and no unintended harm is reported, the deployment and use of such AI-enabled autonomous weapons systems inherently carry risks of harm. Given the direct use of AI in a weapon system capable of causing injury or death, this event qualifies as an AI Incident under the definition of harm to persons or groups through AI system use.
KEMANKEŞ 1 cruise missile: features, range, and strike capacity | Original News

2025-06-29
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The KEMANKEŞ 1 missile incorporates an AI system for target recognition and guidance, which is explicitly mentioned. Although no specific harm has been reported yet, the AI system's use in a lethal weapon system capable of autonomous or semi-autonomous operation plausibly could lead to injury or death, qualifying it as an AI Hazard. Since the article does not report any actual harm or incident but highlights the capabilities and potential use of the AI system in a weapon, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
What a duo! The AKINCI-KEMANKEŞ partnership hit the bullseye

2025-06-28
Akşam
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-based autonomous missile with target recognition and precision strike capabilities. The use of this AI system in a military context directly leads to harm by destroying targets, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to persons or groups (in this case, potential combatants or targets). The article reports successful tests where the AI system performed as intended, demonstrating realized harm potential inherent in its use. Hence, it is not merely a hazard or complementary information but an AI Incident.
AI support for the AKINCI combat drone: direct hits on the targets!

2025-06-28
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in a military weapon (autonomous missile with AI-based targeting and autopilot) that has been tested successfully to destroy targets with high precision. This is a direct use of AI in a lethal autonomous weapon system, which inherently carries risks of harm to persons and communities if deployed in conflict. The article reports a completed test with successful target destruction, indicating realized capability rather than just potential. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in a weapon system capable of causing injury or harm to persons or groups, fulfilling the harm criteria (a).
Bayraktar AKINCI struck air targets with KEMANKEŞ

2025-06-28
Merhaba Haber
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-based targeting and autonomous flight system of the KEMANKEŞ 1 missile) used in a military context to destroy aerial targets. The AI system's development and use are central to the event. Although the test was successful, no actual harm (injury, death, or property damage) has been reported as having occurred yet. The AI system's role is pivotal in enabling autonomous lethal strikes, which inherently carry significant risk of harm. Therefore, the event represents a plausible future risk of harm from AI-enabled autonomous weapons, classifying it as an AI Hazard rather than an AI Incident. It is not Complementary Information because the article focuses on the test event itself, not on responses or governance measures. It is not Unrelated because the AI system is explicitly involved and the event concerns potential harm from its use.
KemanKeş 1 Undergoing Successful Tests

2025-06-28
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-based autonomous missile system with target recognition and autonomous flight capabilities. The article focuses on successful tests and the system's potential to change battlefield dynamics, implying future use in military operations. Although no harm has yet occurred, the nature of the AI system as an autonomous weapon with lethal capabilities means it could plausibly lead to harms such as injury or violations of human rights. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as no realized harm is reported but credible future harm is plausible.
KemanKeş 1 Flight Test Completed Successfully

2025-06-28
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based autonomous targeting and flight control in a missile system that can destroy air targets with high precision. Although the test was successful and no harm occurred during the test, the AI system's development and deployment in lethal autonomous weapons pose a credible risk of causing injury or death in future use. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm. There is no indication of realized harm or malfunction causing harm, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI-enabled weapon system's capabilities and test results with implications for future harm potential.
Air Targets Destroyed with KEMANKEŞ 1

2025-06-29
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-supported autonomous targeting and missile system integrated into an armed drone. The system has been used to successfully destroy air targets, demonstrating its lethal capability. This constitutes direct use of AI in a weapon system causing harm, fulfilling the criteria for an AI Incident due to injury or harm to persons or groups. The article reports actual use and testing, not just potential or future risk, so it is not merely a hazard. It is not complementary information or unrelated, as the AI system's use directly leads to harm potential inherent in autonomous weapons.
Bayraktar Akıncı racking up successes with Kemankeş 1

2025-06-29
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the AI-supported autonomous targeting and flight system integrated into the missile and UAV) in a military context where it has successfully completed a test involving the destruction of air targets. The AI system's development and use are directly linked to the capability to autonomously identify and destroy targets, which is a form of harm (harm to property and potentially to persons in conflict scenarios). Although this is a test and not a combat incident, the deployment of such AI-enabled autonomous weapons systems inherently carries a direct risk of harm and is considered an AI Hazard due to the plausible future harm from their use in warfare. Since the article reports successful testing and no actual harm or incident has yet occurred, it is best classified as an AI Hazard rather than an AI Incident.
KEMANKEŞ 1 Successfully Struck Air Targets

2025-06-29
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-supported targeting and autopilot system used in autonomous missiles. The use of this AI system has directly led to the destruction of air targets during testing, which is a form of harm to property and potentially to communities in a military conflict context. Although the article focuses on a successful test rather than an accident or malfunction, the deployment and use of AI-enabled autonomous weapons capable of lethal strikes constitute an AI Incident due to the realized harm potential inherent in their use. The AI system's role is pivotal in enabling autonomous lethal targeting and destruction, which meets the criteria for an AI Incident under harm to property and communities.
Bayraktar Akıncı Struck Air Targets with Kemankeş 1

2025-06-29
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the AI-supported targeting and autopilot system in the Bayraktar AKINCI UAV and KEMANKEŞ 1 missiles) in a military context where the system has been tested to autonomously destroy air targets. This clearly involves AI system use leading to harm potential (lethal force against targets). The article reports successful tests, implying the system is operational and capable of causing harm. Therefore, this qualifies as an AI Incident because the AI system's use directly leads to harm (destruction of targets) and the system is deployed or near deployment in real-world military applications.
Air Targets Destroyed with KEMANKEŞ 1

2025-06-29
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system embedded in a military drone and missile system that autonomously identifies and destroys air targets. This is a direct use of AI in a weapon system capable of causing harm to persons or groups in conflict, thus fitting the definition of an AI Incident due to the direct link between AI use and potential injury or harm. Although the article focuses on testing and capabilities rather than a specific harmful event, the deployment and use of AI-enabled autonomous weapons systems inherently involve realized or imminent harm potential in military contexts, qualifying it as an AI Incident rather than a mere hazard or complementary information.
Against the West and China, they pointed to Turkey: Ankara is the safest alternative

2025-07-01
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The KEMANKEŞ 1 missile uses an AI-supported targeting system enabling autonomous flight and precise destruction of aerial targets, which is a clear example of an AI system in use. The article describes a successful test where the missile hit targets with high precision, indicating the AI system's operational effectiveness. Autonomous lethal weapons systems inherently pose risks of injury or death, fulfilling the harm criteria. The event is not merely a potential risk but a realized use of AI in a weapon system capable of causing harm, thus constituting an AI Incident rather than a hazard or complementary information.
Bayraktar Akıncı Struck with Kemankeş 1

2025-06-29
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The Bayraktar AKINCI UAV uses an AI-supported autonomous targeting system to identify and destroy targets with high precision. While the article reports successful tests, it does not mention any actual harm or incidents caused by the system in operational use. However, the autonomous weapon system's development and deployment clearly pose a credible risk of harm due to its lethal capabilities and autonomous operation. According to the definitions, the development and use of AI-enabled autonomous weapons with lethal potential constitute an AI Hazard because they could plausibly lead to injury or harm to persons or groups. Since no harm has yet occurred, this is not an AI Incident. The article is not merely complementary information because it focuses on the weapon system's capabilities and testing, not on responses or governance. Therefore, the correct classification is AI Hazard.
Bayraktar Akıncı combat drone struck with Kemankeş 1

2025-06-29
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-supported targeting and autopilot system in the missile and drone) used in a military weapon system. While the system's intended use is lethal and could plausibly lead to harm, the article only reports successful tests without any harm or malfunction. Therefore, it does not meet the criteria for an AI Incident (no harm realized) but does represent a credible potential for harm due to the autonomous weapon capabilities. According to the definitions, the development and testing of AI-enabled autonomous weapons with lethal capabilities constitute an AI Hazard because they could plausibly lead to harm in the future. Hence, the classification is AI Hazard.
No escape from the AI-equipped KEMANKEŞ

2025-07-05
Milliyet
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the AI-supported optical guidance system) in a weaponized context where the AI's development and use directly lead to harm through precise targeting and destruction of military targets. This constitutes an AI Incident because the AI system's role is pivotal in causing harm (destruction of targets), fulfilling the criteria of harm to property and potentially injury or harm to persons. The article reports actual use and testing, not just potential or hypothetical risks, so it is not merely a hazard or complementary information.
Direct hit on the target with the KEMANKEŞ missile from the AKINCI combat drone

2025-07-04
En Son Haber
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous weaponry (AI-supported guidance and autopilot in missiles). Although no harm or incident is reported, the development and testing of such AI-enabled weapons systems inherently carry the potential to cause injury, harm to communities, or disruption in conflict situations. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the deployment of these AI-integrated weapons.
First in the air, now on land! Kemankeş 1 hit the bullseye again

2025-07-04
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The KEMANKEŞ 1 is explicitly described as an AI-based weapon system (an AI system) used in military tests to destroy targets. While the tests were successful and no harm or damage beyond the intended military testing is reported, the development and use of an AI-enabled missile system inherently carries a plausible risk of harm if deployed in conflict or misuse. Therefore, this event represents a credible potential for harm due to the AI system's capabilities and intended use as a weapon, qualifying it as an AI Hazard rather than an Incident since no actual harm beyond controlled testing is described.
Baykar's KEMANKEŞ 1 Missile Destroyed the Targets with Pinpoint Accuracy

2025-07-04
Haberler
Why's our monitor labelling this an incident or hazard?
The KEMANKEŞ 1 missile is explicitly described as an AI-based system with autonomous targeting and flight capabilities. Its use in successfully destroying ground targets demonstrates direct harm to property and potential harm to communities or persons in real-world applications. The article reports on actual tests where the missile destroyed targets, indicating realized harm rather than just potential. The AI system's development and use are central to the event, fulfilling the criteria for an AI Incident under the OECD framework. Although the article does not describe combat use, the missile's function inherently involves harm caused by AI-enabled autonomous weaponry, which is a recognized category of AI Incident.
Direct hit from KEMANKEŞ 1 on land as well

2025-07-04
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system integrated into autonomous weapons (mini smart cruise missiles) that have been successfully tested to strike targets with high precision. While no actual harm or incident is reported, the nature of the system—AI-enabled autonomous lethal weapons—poses a plausible risk of causing injury, death, and destruction in future use. According to the OECD framework, the development and deployment of AI-enabled autonomous weapons with lethal capabilities constitute an AI Hazard due to the credible potential for harm. There is no indication of realized harm or incident yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's capabilities and testing relevant to potential harm.
KEMANKEŞ 1 completed another test: it struck its ground target

2025-07-04
NTV
Why's our monitor labelling this an incident or hazard?
The KEMANKEŞ 1 is an AI system as it is described as an AI-based autonomous missile system capable of targeting and destroying air and ground targets. The event reports successful tests, indicating the system's use and operational capability. While no harm is reported as having occurred during these tests, the deployment and use of AI-enabled autonomous weapons systems inherently carry significant risks of harm to people and property. Therefore, the event plausibly leads to AI incidents in the future if such weapons are used in conflict or other scenarios. Given the nature of the system and its potential for harm, this event qualifies as an AI Hazard rather than an AI Incident, as no actual harm has been reported yet, only successful testing.
This is how the AI-defined mini cruise missile Kemankeş 1 hit the bullseye

2025-07-04
İnternethaber
Why's our monitor labelling this an incident or hazard?
The KEMANKEŞ 1 missile is explicitly described as AI-based, employing AI for autonomous navigation, target recognition, and precision strikes. Its successful test involved destroying ground targets, which is a direct harm to property and potentially to communities if used in conflict. The AI system's development and use have directly led to realized harm through the missile's destructive capability. This fits the definition of an AI Incident, as the AI system's use has directly led to harm (destruction of targets). The article does not merely discuss potential harm or future risks but reports on actual tests where the AI system was used to destroy targets, confirming realized harm.
After the air, a direct hit on land too: the AI-supported KEMANKEŞ 1 missile hit the bullseye

2025-07-04
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The KEMANKEŞ 1 missile is an AI system as it uses AI-based target recognition and guidance to achieve precise strikes. Its use in military operations and successful tests indicate the deployment of an AI system capable of causing physical harm to targets. Given that the missile is designed to destroy strategic targets and has been successfully tested to hit targets with precision, it directly relates to potential harm to persons or property. Although the article describes tests rather than actual combat use, the system's deployment and capabilities imply a direct link to harm through its intended use. Therefore, this event qualifies as an AI Incident due to the AI system's use leading to or enabling harm through military strike capabilities.
First the air, now the land! KEMANKEŞ 1 fired from AKINCI

2025-07-04
Akşam
Why's our monitor labelling this an incident or hazard?
The KEMANKEŞ 1 missile is explicitly described as an AI-based system with autonomous targeting and flight capabilities. Its successful test firings that destroyed targets demonstrate realized harm to property and potential harm to communities or persons in operational use. The AI system's development and use directly lead to harm through autonomous lethal force application. This fits the definition of an AI Incident, as the AI system's use has directly led to harm (destruction of targets). Although the article focuses on testing and export success, the core event is the use of an AI system causing harm, not just a product announcement or complementary information.
Striking from 200 kilometres with homegrown, national intelligence! Turkey will shift the balance with KEMANKEŞ

2025-07-04
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-based autonomous cruise missile with target recognition and autonomous flight capabilities. The missile has been successfully tested in destroying ground and air targets, demonstrating realized use of AI in lethal military applications. The use of such AI-enabled weapons directly leads to harm to persons and property in conflict scenarios, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or future hazards but reports actual deployment and testing of the AI system in a way that implies operational use and harm potential. Hence, it is not a hazard or complementary information but an AI Incident.
KEMANKEŞ 1 didn't miss! After the air, a direct hit from land too!

2025-07-04
TV100
Why's our monitor labelling this an incident or hazard?
The KEMANKEŞ 1 missile is explicitly described as AI-based and used for targeting and destroying physical targets. The event reports successful tests where targets were destroyed, demonstrating the AI system's lethal capability. This directly relates to harm to persons or property if used in conflict, fulfilling the criteria for an AI Incident. The AI system's use in a weapon system that can cause injury or death is a direct link to harm. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.