Ukraine Tests and Deploys AI-Enabled Combat Robot 'Lyut' on Battlefield


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Ukraine has developed and begun field-testing the AI-enabled combat robot "Lyut", which is remotely operated, armed, and equipped with advanced sensors. The robot is already being used experimentally on the front lines; its weaponized, partly autonomous capabilities pose a plausible risk of harm, though no incidents have been reported yet.[AI generated]

Why's our monitor labelling this an incident or hazard?

The combat robot described is remotely operated and performs complex tasks such as assault and defense, implying the use of AI systems for navigation, targeting, or situational awareness. The development and testing of such a weaponized AI system pose a plausible risk of harm to persons and communities if deployed in conflict, qualifying it as an AI Hazard. There is no indication that harm has yet occurred, so it is not an AI Incident. The article focuses on the testing and development phase, not on realized harm or societal/governance responses, so it is not Complementary Information. Therefore, this event is best classified as an AI Hazard due to the plausible future harm from the use of AI-enabled combat robots.[AI generated]
AI principles
Accountability; Safety; Respect of human rights; Robustness & digital security; Transparency & explainability; Democracy & human autonomy; Human wellbeing

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Digital security; Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Public interest; Psychological

Severity
AI hazard

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


It has armor and a tank machine gun. Testing of the combat robot "Lyut" has begun in Ukraine

2023-09-30
РБК-Украина

Ukraine is testing the combat robot Lyut

2023-09-30
Украинская сеть новостей
Why's our monitor labelling this an incident or hazard?
The combat robot described is an AI system or at least a remotely operated robotic system with advanced capabilities for battlefield tasks. Its development and use in military operations pose a plausible risk of harm, including injury or death to persons, disruption of military operations, and broader conflict-related harms. Although the article does not report a specific harm event yet, the deployment and testing of such armed robots inherently carry credible risks of causing harm. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the use of AI-enabled combat robots.

Wheeled machine-gun robots will reinforce the offensive - Auto24

2023-10-02
auto.24tv.ua
Why's our monitor labelling this an incident or hazard?
The robot described is an AI-enabled weapon system (an armed robot with remote control and autonomous features) whose development and deployment in military contexts inherently carry risks of harm to persons and communities. Although no incident of harm is reported, the existence and planned use of such a system plausibly could lead to injury or death in combat, qualifying it as an AI Hazard. There is no indication of realized harm or malfunction causing harm, so it is not an AI Incident. The article is not merely complementary information since it focuses on the robot's capabilities and potential battlefield role, not on responses or governance.

The head of the Ministry of Digital Transformation showed what the Ukrainian combat robot "Lyut" looks like.

2023-09-30
УКРІНФОРМ
Why's our monitor labelling this an incident or hazard?
The combat robot "Lyut" is described as remotely controlled and equipped with advanced features like 360° camera and armor, implying AI or autonomous system involvement. The article reports successful field tests but does not mention any actual harm caused. However, the nature of the system as a weaponized robot means it could plausibly lead to harm in future use. Therefore, this event fits the definition of an AI Hazard, as it involves the development and use of an AI system that could plausibly lead to an AI Incident (harm). There is no indication of realized harm yet, so it is not an AI Incident. It is not merely complementary information or unrelated, as the focus is on the robot's capabilities and testing, not on policy or general AI news.

Pre-serial production is already underway, and units are in use at the front: Fedorov on the combat robot "Lyut"

2023-10-03
Інформаційне агентство Українські Національні Новини (УНН)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a combat robot likely equipped with autonomous or semi-autonomous capabilities) that is currently being used in a military context on the front lines. However, the article does not report any actual harm caused by the robot, nor does it describe any incident or malfunction leading to injury, property damage, or rights violations. Instead, it focuses on the development, testing, and potential future deployment of the system. Given the nature of combat robots and their potential for harm, the event plausibly could lead to harm in the future, but no harm has yet occurred or been reported. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

The Ukrainian combat robot "Lyut" has successfully passed field trials

2023-09-30
LB.ua
Why's our monitor labelling this an incident or hazard?
The robot described is an AI system because it is remotely controlled and likely uses AI for navigation, targeting, and operational autonomy. Its use in combat roles directly relates to potential injury or harm to persons and communities, as well as possible violations of human rights. Since the article reports successful field tests but does not mention any actual harm or incidents caused by the robot, the event does not meet the criteria for an AI Incident. However, the development and preparation for scaling production of such a combat robot clearly pose a plausible risk of future harm, fitting the definition of an AI Hazard. The article does not focus on responses, governance, or updates to previous incidents, so it is not Complementary Information. It is not unrelated because it involves an AI system with potential for harm.

Testing of the combat robot "Lyut" has begun in Ukraine: how it will help the Armed Forces of Ukraine. Photos

2023-09-30
WAR OBOZREVATEL
Why's our monitor labelling this an incident or hazard?
The combat robot 'Lyut' is an AI system as it is remotely operated and likely uses AI for navigation, targeting, and operational decisions. Its development and testing in a military context imply a plausible risk of harm (injury or death) in combat situations. Since no actual harm or incident has been reported yet, but the system's use could plausibly lead to harm, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the testing and potential benefits, not on any realized harm or incident.

Deputy Prime Minister Fedorov published photos of the first Ukrainian combat robot "Lyut". PHOTOS

2023-09-30
Цензор.НЕТ
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI-enabled combat robot with remote control and battlefield capabilities, which qualifies as an AI system. While no harm has been reported yet, the robot's intended use in combat plausibly could lead to injury or death, fitting the definition of an AI Hazard. There is no indication of realized harm or malfunction causing harm, so it is not an AI Incident. The focus is on the robot's development and testing, not on governance or responses, so it is not Complementary Information. Hence, the event is classified as an AI Hazard.