Ukraine Deploys AI-Enabled SAKER SCOUT Drones in Military Operations


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Ukrainian military has approved the use of SAKER SCOUT drones equipped with artificial intelligence. These drones autonomously identify and relay enemy equipment coordinates, including camouflaged targets, to command centers, supporting reconnaissance and attack missions. Their deployment in active conflict zones raises credible risks of harm due to autonomous targeting capabilities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the deployment and use of an AI system (the SAKER SCOUT drone) that autonomously performs reconnaissance and targeting functions. The AI system's use in military operations, including coordination with kamikaze drones, implies a direct role in potential harm scenarios such as armed conflict. Although no specific harm is reported as having occurred yet, the AI system's use in military targeting plausibly could lead to injury or harm to persons or groups, qualifying this as an AI Hazard. There is no indication that harm has already occurred or that an incident has taken place, so it is not classified as an AI Incident. The event is more than a general AI news item or product launch, as it concerns the operational deployment of AI in a military context with plausible future harm.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Transparency & explainability; Respect of human rights; Democracy & human autonomy; Privacy & data governance

Industries
Government, security, and defence; Robots, sensors, and IT hardware

Affected stakeholders
General public; Government

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest

Severity
AI hazard

Business function:
Other

AI system task:
Recognition/object detection; Event/anomaly detection


Articles about this incident or hazard


Ministry of Defence clears AI drone for operational use

2023-09-04
Украинская сеть новостей
Why's our monitor labelling this an incident or hazard?
The event involves the deployment and use of an AI system (the SAKER SCOUT drone) that autonomously performs reconnaissance and targeting functions. The AI system's use in military operations, including coordination with kamikaze drones, implies a direct role in potential harm scenarios such as armed conflict. Although no specific harm is reported as having occurred yet, the AI system's use in military targeting plausibly could lead to injury or harm to persons or groups, qualifying this as an AI Hazard. There is no indication that harm has already occurred or that an incident has taken place, so it is not classified as an AI Incident. The event is more than a general AI news item or product launch, as it concerns the operational deployment of AI in a military context with plausible future harm.

Ministry of Defence unveils Ukrainian AI-powered drone SAKER SCOUT

2023-09-04
РБК-Украина
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI algorithms for autonomous reconnaissance and targeting support in a military context. The use of this AI system in active military operations implies direct involvement in causing harm to enemy forces, which qualifies as injury or harm to persons and harm to property. The article reports on the system's deployment and production, indicating active use rather than potential future harm. Hence, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to harm in the context of armed conflict.

Armed Forces of Ukraine receive AI-powered reconnaissance drones

2023-09-04
Львівський портал
Why's our monitor labelling this an incident or hazard?
The Saker Scout drone system uses AI algorithms for autonomous reconnaissance and target identification, which directly supports military operations. Its deployment and active use in combat imply that the AI system's outputs influence decisions that can cause harm to enemy forces. This constitutes an AI Incident because the AI system's use is directly linked to harm in a conflict context (harm to persons/groups through military action).

Ministry of Defence clears AI drone for use by the Armed Forces of Ukraine

2023-09-04
unian
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI algorithms for autonomous reconnaissance and targeting in military drones. Although no actual harm is reported, the use of AI in armed drones capable of autonomous operations inherently carries a credible risk of causing injury, death, or other serious harms. The article highlights the system's operational approval and anticipated production scale, indicating a plausible future risk. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Ministry of Defence presents Ukrainian AI drone Saker Scout

2023-09-04
novyny.online.ua
Why's our monitor labelling this an incident or hazard?
The SAKER SCOUT drone is explicitly described as an AI system used by the military to autonomously detect and target enemy equipment, which directly contributes to harm in armed conflict. The article confirms the AI system's deployment and operational use, not just development or potential use, and the harm linked to military targeting is realized or imminent. Therefore, this event meets the criteria for an AI Incident as the AI system's use directly leads to harm (injury or harm to persons and communities) in a conflict setting.

AI-powered SAKER SCOUT drone cleared for operation in the Armed Forces of Ukraine

2023-09-04
novyny.online.ua
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the SAKER SCOUT drone's AI software) in an operational military setting. The AI system's autonomous reconnaissance and targeting capabilities directly support military actions against an adversary, which can lead to harm (injury or death) in the context of armed conflict. Although the article does not describe a specific incident of harm occurring, the deployment of AI-enabled combat drones inherently carries a plausible risk of harm to persons and communities. However, since the article describes the approval and operational use rather than a specific harmful event, it is best classified as an AI Hazard reflecting the plausible future harm from the AI system's use in combat.

Ministry of Defence clears AI drone for use by the Armed Forces of Ukraine

2023-09-04
УКРІНФОРМ
Why's our monitor labelling this an incident or hazard?
The event describes the deployment of an AI-enabled military drone system designed to autonomously identify enemy targets and support combat operations. The AI system's use in a military context implies a direct role in conflict, where its operation could lead to injury or harm to persons (harm category a). Although no specific harm is reported yet, the AI system's use in warfare plausibly leads to harm, making this an AI Hazard. There is no indication that harm has already occurred due to malfunction or misuse, so it does not qualify as an AI Incident. The event is not merely informational or unrelated, as it concerns the operational approval of an AI system with clear potential for harm.

Ministry of Defence authorizes use of AI-powered Saker Scout drones

2023-09-04
Українські Національні Новини (УНН)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system integrated into military drones used for autonomous reconnaissance and targeting. Although no harm or incident is reported, the use of AI in armed drones inherently carries plausible risks of injury, human rights violations, and escalation of conflict. The event concerns the approval and deployment of such AI systems, which could plausibly lead to AI Incidents involving harm. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Ministry of Defence clears AI-based drone for use in the Armed Forces of Ukraine

2023-09-04
espreso.tv
Why's our monitor labelling this an incident or hazard?
The described drone system clearly involves AI through autonomous recognition, targeting, and coordination functions. The use of AI in military drones capable of reconnaissance and kamikaze attacks inherently carries a credible risk of harm, including injury or death and disruption of critical infrastructure or military operations. Although the article does not report actual harm or incidents caused by the system, the deployment and operational use of such AI-enabled weaponized drones plausibly leads to AI Incidents in the future. Hence, this event is best classified as an AI Hazard, reflecting the plausible future harm from the AI system's use in military conflict.

Armed Forces of Ukraine adopt Ukrainian AI drone

2023-09-04
LB.ua
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI algorithms for autonomous detection and targeting in a military drone system. The system is actively used by the Ukrainian military in combat operations, which inherently involve harm to persons and communities (harm categories a and d). The AI system's use directly leads to harm in the context of armed conflict. Hence, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or future harm but confirms operational deployment and use causing harm.

Ministry of Defence unveils Ukrainian AI-powered SAKER SCOUT UAVs, already cleared for operational use

2023-09-04
ZN.UA
Why's our monitor labelling this an incident or hazard?
The drones use AI to autonomously detect and identify enemy military equipment, reducing human error and enabling more effective targeting. This AI-enabled capability directly contributes to military actions that cause harm to enemy personnel and equipment, fulfilling the criteria for harm to persons or groups. The event involves the use of an AI system in an operational military context with realized harm potential, thus it is an AI Incident rather than a hazard or complementary information.

Ministry of Defence clears new AI drone for use in the Armed Forces of Ukraine

2023-09-04
ZAXID.NET
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the drone with AI-based autonomous recognition and targeting). The article describes the approval and deployment of this system but does not mention any actual harm or incidents resulting from its use. However, given the military context and the autonomous capabilities of the drone, there is a credible potential for future harm (e.g., injury, escalation, or unintended consequences). Therefore, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Armed Forces of Ukraine to use AI-powered drones

2023-09-04
ms.detector.media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system integrated into military drones used for autonomous reconnaissance and targeting. While no harm is reported as having occurred yet, the deployment of AI-enabled armed drones in conflict zones plausibly leads to harm such as injury or death and harm to communities. The AI system's role in autonomous target recognition and coordination is central to the system's function, indicating a credible risk of harm. Since the article does not describe any actual harm or incident but focuses on the approval and deployment of the AI system, the event is best classified as an AI Hazard.

Ukraine's Ministry of Defence approves frontline use of AI drone

2023-09-04
novostidnua
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI algorithms for autonomous target recognition and information transmission in a military context. Although no direct harm or incident is reported, the use of such AI-enabled drones in active combat plausibly could lead to injury or harm to persons, qualifying as a credible future risk. The event does not describe an actual incident or harm but highlights the deployment and use of an AI system with significant potential for harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Armed Forces of Ukraine to begin using AI drones

2023-09-04
ipress.ua
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the SAKER SCOUT drone with AI algorithms) in active military operations. The AI's autonomous recognition and targeting capabilities directly support combat actions, which inherently carry risks of harm to persons and property. Although no specific harm is reported yet, the deployment of AI-enabled military drones capable of autonomous reconnaissance and coordination with kamikaze drones plausibly leads to harm in armed conflict. Therefore, this constitutes an AI Hazard due to the credible risk of harm from the AI system's use in warfare.

Armed Forces of Ukraine begin using Ukrainian AI drone SAKER SCOUT (photos)

2023-09-04
Цензор.НЕТ
Why's our monitor labelling this an incident or hazard?
The event describes the deployment and operational use of an AI-powered military drone system that autonomously identifies enemy targets and relays coordinates for attack. This AI system is actively used in warfare, which inherently involves harm to persons and property. The AI's role in target recognition and decision support directly contributes to these harms, fulfilling the criteria for an AI Incident. The description confirms the AI system's use (not just development or potential use) and its direct link to harm in a conflict context.