Bayraktar TB2T-AI Armed Drone Debuts with AI-Driven Autonomy


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Baykar has begun test flights of its new Bayraktar TB2T-AI armed drone, which integrates a turbo engine and advanced AI for autonomous navigation and target engagement. In testing, the UAV climbed to a record altitude of 30,318 feet in under 30 minutes and reached 160 knots (about 300 km/h), offering enhanced high-altitude endurance and combat potential.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Bayraktar TB2T-AI is an AI-enabled armed drone with autonomous combat capabilities, including target detection and autonomous navigation. While the article reports on test flights and capabilities, it does not describe any actual harm or incident caused by the AI system. However, the nature of the system as an autonomous weapon with AI means it could plausibly lead to harms such as injury, death, or violations of human rights in future use. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event and its potential impacts.[AI generated]
AI principles
Accountability; Safety; Robustness & digital security; Transparency & explainability; Respect of human rights; Democracy & human autonomy; Human wellbeing

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest; Psychological

Severity
AI hazard

Business function:
Research and development; Manufacturing

AI system task:
Recognition/object detection; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard


New turbo-engined Bayraktar TB2T-AI SİHA begins test flights

2025-02-22
En Son Haber
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2T-AI is an AI-enabled armed drone with autonomous combat capabilities, including target detection and autonomous navigation. While the article reports on test flights and capabilities, it does not describe any actual harm or incident caused by the AI system. However, the nature of the system as an autonomous weapon with AI means it could plausibly lead to harms such as injury, death, or violations of human rights in future use. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event and its potential impacts.

New turbo-engined Bayraktar TB2T-AI SİHA begins test flights

2025-02-22
bigpara.hurriyet.com.tr
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2T-AI is an AI-enabled armed drone system with autonomous capabilities for combat operations. While no harm has yet occurred as the article focuses on test flights, the AI system's development and intended use in military operations could plausibly lead to significant harms such as injury, loss of life, or violations of human rights. The autonomous decision-making and navigation features increase the risk profile. Hence, this event is best classified as an AI Hazard, reflecting the credible potential for future harm stemming from the AI system's deployment.

New turbo-engined Bayraktar TB2T-AI SİHA begins test flights

2025-02-22
Hürriyet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an armed drone with advanced AI capabilities) whose development and testing are described. Although no harm has yet occurred, the nature of the system and its military application imply a credible risk of future harm, such as injury or violations of rights, if deployed. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential for harm are central to the report.

Two sources of pride in the "sky homeland"

2025-02-23
Sabah
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI support in the new version of the Bayraktar TB2T-AI armed drone and the autonomous capabilities of the Bulut UAV. These are AI systems used in military and security contexts, which inherently carry risks of harm including injury, rights violations, or other significant harms. However, the article only discusses testing and deployment without any reported incidents or harms occurring yet. Thus, it does not meet the criteria for an AI Incident but fits the definition of an AI Hazard because the development and use of these AI-enabled armed and surveillance drones could plausibly lead to harm in the future.

Bayraktar TB2T-AI SİHA in the sky! Turbo engine and artificial intelligence combined

2025-02-22
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2T-AI is an AI-enabled armed drone system with autonomous capabilities in combat, which clearly involves an AI system. The article focuses on its development and testing, with no mention of any actual harm or incidents caused by it so far. However, armed autonomous drones have a high potential for causing injury, violation of human rights, and harm to communities if used in conflict. The mere development and testing of such AI-powered weapon systems constitute a plausible risk of future harm, fitting the definition of an AI Hazard. Since no harm has yet occurred, it is not an AI Incident. It is not Complementary Information because the article is not about responses or updates to a prior incident, nor is it unrelated as it clearly involves AI systems with potential for harm.

BAYRAKTAR TB2T-AI SİHA, strengthened with artificial intelligence and a turbo engine, takes to the sky

2025-02-22
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of advanced AI systems integrated into a military drone capable of autonomous flight and combat functions. Although no direct or indirect harm is reported, the nature of the AI system—an armed autonomous drone—implies a credible risk of causing injury, human rights violations, or other significant harms in future military engagements. The development and deployment of such AI-enabled weapons systems are recognized as AI Hazards because they could plausibly lead to AI Incidents involving physical harm or rights violations. Since no actual harm is described, the classification as AI Hazard is appropriate rather than AI Incident.

Test flights begin for the Bayraktar TB2T-AI SİHA with advanced artificial intelligence

2025-02-22
Dünya
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as providing advanced autonomous capabilities in a military drone. Although no harm has yet occurred, the nature of the system—an armed drone with AI for autonomous combat and navigation—implies a credible risk of future harm, including injury or violations of rights in conflict zones. The article focuses on the development and testing phase, with no indication of actual incidents or harm. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future.

It will change the balance of war! The first national SİHA is now more powerful

2025-02-22
Haberler
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2T-AI is an AI-equipped armed drone system used in active military operations, including combat zones such as Libya, Ukraine, and others. The AI system enables autonomous flight, target identification, and mission execution, which directly contributes to harm in warfare contexts (injury, death, disruption). The article highlights its operational use and combat roles, indicating realized harm rather than just potential. Hence, this qualifies as an AI Incident due to the direct involvement of AI in systems causing harm in armed conflict. Although the article also discusses export and commercial success, the primary focus is on the AI system's deployment in military operations causing harm, meeting the criteria for AI Incident.

New turbo-engined Bayraktar TB2T-AI SİHA begins test flights - Ankara Haberleri

2025-02-22
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as integrated into a military drone with autonomous capabilities. The use of AI in an armed UAV with autonomous target analysis and navigation directly relates to potential harms such as injury or harm to persons (a), disruption of critical infrastructure or military operations (b), and broader security risks. Although the article reports only on test flights and does not mention any realized harm, the development and deployment of such AI-enabled armed drones plausibly could lead to significant harm in the future. Therefore, this event qualifies as an AI Hazard due to the credible risk posed by the autonomous weapon system's capabilities.

New turbo-engined Bayraktar in the sky! TB2T-AI SİHA begins test flights

2025-02-22
NTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system integrated into an armed drone with autonomous combat capabilities, which clearly fits the definition of an AI system. The article reports ongoing test flights and enhanced capabilities but does not describe any realized harm or incident resulting from its use or malfunction. The potential for harm is credible given the nature of the system (armed autonomous drone), so this constitutes an AI Hazard rather than an AI Incident. There is no indication of complementary information or unrelated content.

Bayraktar reborn with advanced artificial intelligence and a turbo engine: record broken in test flight!

2025-02-22
NTV
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2T-AI is an AI system integrated into a military drone with autonomous capabilities. The article focuses on its test flights and enhanced AI features but does not mention any incidents or harms caused by the system. Since the system is an armed autonomous drone, its development and deployment plausibly could lead to harms such as injury or violations of rights in combat or conflict situations. The event thus fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated.

New turbo-engined Bayraktar TB2T-AI SİHA in the sky!

2025-02-22
CNN Türk
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2T-AI is an AI system as it incorporates advanced AI for autonomous flight, navigation, target detection, and decision-making in combat scenarios. While no actual harm or incident is reported, the system's intended use as an armed drone with autonomous lethal capabilities means it could plausibly lead to injury, harm to people, or harm to communities. The article focuses on the development and testing phase, highlighting the system's capabilities and potential battlefield impact, but does not describe any realized harm or incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the article.

Turbo engine and artificial intelligence: the Bayraktar TB2T-AI SİHA is now more powerful

2025-02-23
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event involves an AI system integrated into a military drone with autonomous capabilities, which could plausibly lead to harm given its armed nature and operational use in combat. However, since no harm or incident has occurred or is reported, and the article focuses on the development and testing phase, this constitutes a potential risk rather than realized harm. Therefore, it qualifies as an AI Hazard due to the plausible future harm from the use of AI in an armed drone system.

Renewed with artificial intelligence: Bayraktar TB2T-AI in the sky!

2025-02-23
tamindir.com
Why's our monitor labelling this an incident or hazard?
The event involves the development and deployment of an AI system integrated into a military drone with autonomous capabilities. While the article does not report any harm or incident resulting from its use, the presence of AI in an armed drone with enhanced autonomous combat functions poses a plausible risk of future harm, such as injury, violation of rights, or harm to communities. Therefore, this qualifies as an AI Hazard due to the credible potential for harm inherent in the AI system's intended military application.

Bayraktar TB2T-AI SİHA breaks record in testing

2025-02-22
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2T-AI is an AI-equipped armed drone with autonomous capabilities such as visual navigation, target analysis, and automatic safe return. Although the article focuses on its technical achievements and test records without reporting any actual harm, the nature of the system as an autonomous weapon implies a credible risk of causing injury, death, or other harms in combat. The development and deployment of such AI systems are recognized as AI Hazards due to their potential to lead to serious incidents. Since no actual harm or incident is described, this is not an AI Incident but an AI Hazard.

Strengthened with artificial intelligence and a turbo engine! TB2T, the new defender of the sky homeland

2025-02-22
Akşam
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2T-AI is an AI system as it uses advanced AI for autonomous navigation, target detection, and decision-making in a military UAV. The article focuses on its development, capabilities, and deployment but does not describe any incident where harm has occurred. However, the use of AI in armed drones with autonomous combat functions plausibly could lead to harms such as injury, violations of human rights, or other significant harms. Since no actual harm is reported but the potential for harm is credible and inherent in the system's use, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves an AI system with significant implications.

Bayraktar TB2T-AI SİHA sets altitude record in test flights

2025-02-22
Dünya
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2T-AI SİHA is explicitly described as an AI system with autonomous capabilities in combat scenarios. Although no harm or incident is reported during the test flights, the nature of the system as an armed drone with AI-driven autonomous functions implies a credible risk of harm in future use. The article does not describe any realized injury, rights violation, or other harm, so it is not an AI Incident. Instead, it fits the definition of an AI Hazard because the development and testing of such an AI-enabled weapon system could plausibly lead to significant harms in the future.

Revolutionary innovation in the Bayraktar TB2... It will be the best in its class in the world

2025-02-22
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system integrated into an armed drone capable of autonomous combat operations. Although no actual harm is reported, the nature of the system and its intended military use imply a credible risk of future harm, such as injury, loss of life, or violations of human rights. The article focuses on the capabilities and successful testing of the AI-enabled UAV, not on any incident of harm. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future.

Bayraktar TB2T-AI meets the sky: a new era on the battlefield!

2025-02-22
Türkiye
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2T-AI is an AI-enabled autonomous combat drone with advanced capabilities that can directly impact physical environments and human safety in warfare. The article focuses on its development and deployment, highlighting AI systems for autonomous operation and combat effectiveness. While no actual harm or incident is reported, the nature of the system and its intended use in military operations imply a credible risk of future harm, including injury, death, or disruption. According to the OECD framework, the development and introduction of AI-powered autonomous weapons with high potential for misuse or harm constitute an AI Hazard. Since no specific harm has yet occurred, this event is best classified as an AI Hazard.

New turbo-engined Bayraktar in the sky!

2025-02-22
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into a military drone capable of autonomous flight, target analysis, and combat operations. Although no harm or incident is reported, the nature of the system as an AI-enabled weapon platform implies a credible risk of future harm, such as injury, violation of rights, or disruption in conflict zones. According to the OECD framework, the mere development and testing of AI-powered autonomous weapons with high potential for misuse or harm qualifies as an AI Hazard. Since no actual harm has occurred or is reported, this is not an AI Incident. It is not Complementary Information because the article focuses on the new system's capabilities and testing rather than responses or governance. It is not Unrelated because the AI system and its potential impacts are central to the report.

Bayraktar TB2T-AI SİHA in the sky

2025-02-24
Erzurum Gazetesi
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2T-AI is explicitly described as an AI-enabled autonomous armed drone used in combat and other operations. The AI system's role in autonomous navigation, target detection, and decision-making is central. Although no specific harm or incident is reported, the nature of the system as an autonomous weapon platform implies a credible risk of causing injury, violations of rights, or other harms in the future. The article focuses on the system's capabilities and deployment rather than any realized harm or incident. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm.

New turbo-engined Bayraktar in the sky! TB2T-AI SİHA begins test flights

2025-02-22
TV100
Why's our monitor labelling this an incident or hazard?
The event involves the development and testing of an AI-enabled armed drone system, which is an AI system by definition. Although no incident or harm has occurred yet, the nature of the system (armed drone with AI capabilities) and its enhanced operational features plausibly could lead to harms such as injury, disruption, or violations of rights in the future. Therefore, this event qualifies as an AI Hazard due to the credible potential for harm inherent in the system's capabilities and intended use.