Deployment of AI-Powered Humanoid Soldier Robots in Ukraine


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The American company Foundation delivered two AI-enabled humanoid soldier robots, Phantom MK-1, to Ukraine for frontline testing in combat and reconnaissance roles. While not yet used as autonomous combat units, their deployment in active warfare raises significant risks of harm and ethical concerns regarding AI use in military operations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions a humanoid robot developed for defense, capable of wielding weapons and assisting in breaching operations, which implies AI system involvement in autonomous or semi-autonomous military functions. Although the robot is currently in testing and no harm has been reported, the nature of its intended use in armed conflict plausibly leads to serious harms such as injury or violations of rights. The development and testing of such AI-enabled military robots constitute a credible risk of future AI incidents, qualifying this as an AI Hazard rather than an Incident or Complementary Information.[AI generated]
AI principles
Safety, Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (injury), Physical (death)

Severity
AI hazard

AI system task
Recognition/object detection


Articles about this incident or hazard


Humanoid Soldiers Tested In Ukraine; Founder Eyes Contract To Patrol US Border

2026-03-12
ZeroHedge
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems intended for military use, including combat and border patrol, which inherently involve risks of injury, death, and human rights violations. The article states these robots are being tested in Ukraine and prepared for deployment, indicating active use or imminent use in conflict zones. This meets the criteria for an AI Incident because the AI system's use directly or indirectly leads to harm (injury, violation of rights) in real-world scenarios. The involvement is in the use phase, with clear links to potential or actual harm. The article does not merely discuss potential future risks but reports on ongoing testing and preparation for deployment, which is sufficient to classify as an AI Incident rather than a hazard or complementary information.

Rise of the AI Soldiers

2026-03-10
TIME
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a humanoid robot developed for defense, capable of wielding weapons and assisting in breaching operations, which implies AI system involvement in autonomous or semi-autonomous military functions. Although the robot is currently in testing and no harm has been reported, the nature of its intended use in armed conflict plausibly leads to serious harms such as injury or violations of rights. The development and testing of such AI-enabled military robots constitute a credible risk of future AI incidents, qualifying this as an AI Hazard rather than an Incident or Complementary Information.

Robot Soldiers Hit the Battlefield in Ukraine

2026-03-13
Gadget Review
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of humanoid robots and autonomous drones used in combat, which are AI systems by definition due to their autonomous decision-making and natural language command processing. The use of these systems in active war zones directly leads to harm (injury or death) and raises legal and ethical issues about accountability for war crimes, fulfilling the criteria for an AI Incident. The article describes actual deployment and use, not just potential risks, so it is not merely an AI Hazard. It also does not focus on responses or updates to prior incidents, so it is not Complementary Information. Hence, the classification as AI Incident is appropriate.

Ukraine receives humanoid robots for testing in combat conditions

2026-03-13
Ukrinform-EN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions humanoid robots equipped for reconnaissance and potential combat roles, which are AI systems. The use of such robots in war zones carries a credible risk of causing injury, violations of human rights, and other harms. Since the robots are currently in testing and no actual harm is reported yet, but their deployment in combat is planned and plausible, this fits the definition of an AI Hazard. The event does not describe realized harm or incidents but highlights a credible future risk associated with AI-enabled autonomous weapons systems.

AI Soldiers are seemingly on the horizon

2026-03-12
End Time Headlines
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 is an AI system as it is a humanoid robot designed for autonomous or semi-autonomous operation in defense, including wielding weapons. The event concerns its development and testing, with no current incident of harm reported. However, the nature of the system and its intended use in armed conflict plausibly could lead to serious harms such as injury or death, violations of human rights, and escalation of conflict. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future.

Humanoid Soldiers Tested In Ukraine; Founder Eyes Contract To Patrol US Border

2026-03-12
Signs Of The Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled humanoid robots being developed and tested for military and border patrol use, which involves autonomous or semi-autonomous decision-making in complex, high-risk environments. The deployment in active war zones and potential use in border security imply a credible risk of harm to people and communities. Although no specific harm has yet been reported, the nature of the AI system's intended use and the context of ongoing conflicts make it plausible that these systems could lead to AI incidents. Since harm is not yet realized but plausible, the event is best classified as an AI Hazard rather than an AI Incident.

Humanoid Soldiers Put To The Test In Ukraine

2026-03-13
The People's Voice
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions humanoid robots designed for military use, implying AI systems with autonomous or semi-autonomous capabilities. The robots are being prepared for combat and patrol functions, which inherently carry risks of injury or harm. Although no actual harm or incident is reported yet, the potential for these AI systems to cause harm in warfare or border security contexts is credible and significant. Hence, this event fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Humanoid Soldiers Put To The Test In Ukraine

2026-03-14
SGT Report
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems intended for military and patrol use, which inherently carry risks of harm if deployed in combat or security operations. The article does not report any actual harm or incidents caused by these robots but highlights their preparation for deployment in potentially dangerous scenarios. Given the nature of their intended use and the plausible risks involved, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. There is no indication that harm has already occurred, nor is the article focused on responses or updates to prior incidents.

Travis Kalanick returns to robotics with Atoms, targeting mining, food, and transport

2026-03-13
DiarioBitcoin
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, as it discusses robotics and autonomous vehicles for industrial and mining applications, which inherently require AI for autonomous operation. However, no actual harm, malfunction, or misuse is reported. The focus is on the launch of a company and its strategic plans, which could plausibly lead to AI-related hazards in the future, especially given the sectors involved. Since no harm has yet occurred, and the article does not report any incident, the classification as an AI Hazard is appropriate.

Humanoid soldiers tested in Ukraine; founder seeks contract to patrol the US border

2026-03-13
SOTT.net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions humanoid robots developed by Foundation Robotics that are being tested in Ukraine and prepared for military use, including combat scenarios and border patrol. These robots are AI systems designed to operate in complex, high-risk environments alongside human soldiers. Their deployment in active war zones and potential use in border security can directly lead to injury, harm to people, and violations of human rights, fulfilling the criteria for an AI Incident. The article also references ongoing conflicts where such technology is being introduced, indicating realized or imminent harm rather than just potential risk. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Ukraine tests Phantom MK-1 humanoid robots on the war front

2026-03-13
NOTICIAS - LA JORNADA
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems used in military operations, involving autonomous or semi-autonomous functions such as reconnaissance and logistics in combat zones. Although no direct harm or incident is reported yet, the deployment of such AI-enabled humanoid robots in warfare presents credible risks of injury, escalation, or other harms. The article focuses on testing and potential future use rather than an actual incident causing harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Robots in the Ukraine war: the first humanoid deployment

2026-03-15
notiulti.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (humanoid robots with autonomous capabilities) actively deployed in a war zone, performing reconnaissance and armed combat roles. This directly relates to the use of AI systems leading to harm (injury or death in warfare), fulfilling the criteria for an AI Incident. The article reports actual deployment and use, not just potential or hypothetical risks, so it is not merely an AI Hazard. It is not complementary information since the main focus is on the deployment and use of AI robots in combat, which is a direct cause of harm. Therefore, the classification is AI Incident.

Ukraine begins testing humanoid robots in a combat zone: the American project that could change the front

2026-03-15
LaSexta
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (humanoid robots with advanced sensors and autonomous capabilities) being deployed in a high-risk combat environment. Although the article does not report any actual harm caused by these robots yet, their intended use in mine detection and potential armed combat roles presents a credible risk of injury or harm. The development and use of such AI-enabled military robots in active conflict zones plausibly could lead to AI Incidents in the future. Since no harm has yet occurred, the classification as an AI Hazard is appropriate.

Ukraine is already using weapon-capable humanoid robots at the front: 1.8 m tall, 80 kg, and running at 6 km/h

2026-03-16
El Español
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (humanoid robots with AI capable of using weapons) deployed in a war zone. While no specific harm has been reported as having occurred, the deployment of armed AI humanoid robots in combat plausibly could lead to injury or death, disruption, and other harms. The article highlights the potential and ongoing use of these AI systems in high-risk military environments, which aligns with the definition of an AI Hazard. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the deployment and potential risks of AI systems in warfare.

A company is already sending humanoid robots to Ukraine, and experts are outraged: "It is morally repugnant"

2026-03-16
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions humanoid robots capable of carrying and potentially using weapons in combat, implying AI systems with autonomous or semi-autonomous functions. The deployment in a war zone and the concerns expressed by the UN Secretary-General about machines that can kill without human control highlight the credible risk of harm to human life. Since no actual harm event is described but the potential for lethal harm is clear and imminent, this fits the definition of an AI Hazard rather than an AI Incident. The ethical and regulatory concerns further support the classification as a hazard due to plausible future harm.

Nieves Díaz's column | Artificial intelligence on the battlefield - lavozdelsur.es

2026-03-17
lavozdelsur.es
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems integrated into military robotics and autonomous weapons (e.g., kamikaze drones, autonomous combat robots) that are currently used or being developed for warfare. It discusses the plausible risks and ethical concerns of these AI systems causing harm, including civilian casualties and uncontrollable autonomous lethal actions. Since the article does not describe a specific realized harm event but focuses on the credible potential for harm and the urgent need for regulation, this fits the definition of an AI Hazard rather than an AI Incident. The presence of AI systems is clear, the use is in military conflict, and the plausible future harm is significant and credible, meeting the criteria for an AI Hazard.

Ukraine is testing a robot to be sent to the front. Phantom is "a natural extension of existing autonomous systems"

2026-03-14
Digi24
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (humanoid robots with AI capabilities for military operations). The article discusses the development and testing phases and the potential use of these AI systems in combat, which could plausibly lead to harms such as injury or death, escalation of conflict, and misuse (e.g., hacking and hostile takeover). Since no actual harm is reported yet but the risks are credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The article also discusses the broader implications and risks, but the primary focus is on the plausible future harms from these AI-enabled military robots.

Humanoid robots have arrived in Ukraine to be tested in combat

2026-03-13
Libertatea
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems designed for military combat roles, including using weapons and performing reconnaissance. Their deployment in Ukraine's war zone means the AI systems are directly involved in situations where injury or harm to people is likely or occurring. The article discusses the robots' capabilities and intended use in combat, which inherently involves harm to persons, fulfilling the criteria for an AI Incident. The concerns about moral and legal responsibility further underscore the significance of the harm caused or enabled by these AI systems. Therefore, this event is classified as an AI Incident.

VIDEO // Ukraine will test Phantom MK-1 humanoid soldier robots

2026-03-13
Moldpress
Why's our monitor labelling this an incident or hazard?
The event involves AI systems embedded in humanoid military robots intended for use in war zones. Although the article does not report any realized harm or incidents caused by these robots, it clearly outlines the potential for significant harm due to their autonomous capabilities, operational risks, and security vulnerabilities. The mere testing and deployment of such AI-enabled weaponized robots in conflict zones constitutes a plausible risk of AI-related harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. There is no indication of actual harm yet, so it is not an Incident, and the focus is on potential future harm rather than a response or update, so it is not Complementary Information.

An American company has sent robots to Ukraine for testing

2026-03-13
B1TV.ro
Why's our monitor labelling this an incident or hazard?
The robots described are AI systems because they perform autonomous tasks such as reconnaissance and explosive placement in complex, dangerous environments, implying real-time decision-making and adaptiveness. Although the article does not report actual harm caused by these robots, their deployment in active conflict zones plausibly leads to injury or death and ethical violations, fulfilling the criteria for an AI Hazard. The mention of ethical concerns and the potential for easier initiation of war further supports the classification as a hazard rather than an incident, as harm is potential but not yet realized.

Ukraine is testing a humanoid robot to be sent to the front. Phantom is "a natural extension of existing autonomous systems" - Aktual24

2026-03-14
Aktual24
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems designed for autonomous or semi-autonomous military operations. The article discusses their development, testing, and potential deployment in conflict zones, as well as the risks of hacking and operational failures that could lead to harm. Since no actual harm or incident has been reported, but credible risks are clearly outlined, the event fits the definition of an AI Hazard. It is not Complementary Information because the focus is on the potential for harm rather than updates on past incidents or governance responses. It is not unrelated because the AI system and its risks are central to the article.

Humanoid soldier robots reach the front in Ukraine. The Americans are testing whether they can fight in war

2026-03-15
PLAYTECH.ro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (humanoid robots with autonomous capabilities) being developed and tested for military combat roles. While no specific harm has yet occurred, the deployment and use of such AI-enabled weaponized robots in conflict zones plausibly could lead to significant harms including injury or death to people, violations of human rights, and disruption of critical infrastructure. Therefore, this situation constitutes an AI Hazard because it plausibly could lead to an AI Incident involving serious harm in the future. The article does not report an actual incident of harm yet, but the credible risk is clear and significant.

Company Testing Humanoid Robot Soldiers on Frontlines of Ukraine

2026-03-14
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (humanoid robots with autonomous capabilities) actively deployed in a warzone, performing reconnaissance and potentially combat tasks. The AI system's use directly leads to potential and actual harm to human life and raises significant ethical and legal concerns. The article reports actual deployment and use, not just potential or hypothetical risks, thus qualifying as an AI Incident rather than a hazard or complementary information. The harm category includes injury or harm to persons in conflict and violations of human rights due to autonomous weapon use. Therefore, the classification is AI Incident.

Will humanoid robots fight future wars? A startup has already sent them to Ukraine

2026-03-15
Digit
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions humanoid robots equipped with AI capabilities being sent to an active war zone for reconnaissance and logistics, with future potential to operate weapons. While no direct harm or incident is reported yet, the deployment of AI systems in combat roles inherently carries credible risks of injury or death, qualifying as plausible future harm. The AI system's involvement is in its use in military operations, which could lead to significant harms. Since no actual harm has been reported, this is best classified as an AI Hazard rather than an AI Incident. The event is not merely general AI news or complementary information, as it focuses on the deployment and potential battlefield use of AI-enabled humanoid robots with significant risk implications.

Company Testing Humanoid Robot Soldiers on Frontlines of Ukraine

2026-03-15
End Time Headlines
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions humanoid robots deployed to frontline combat roles, equipped with weapons and capable of autonomous or semi-autonomous operation. This clearly involves AI systems used in a context where harm to persons and communities is highly plausible. While no specific harm has yet been reported, the deployment of armed AI robots in war zones inherently carries a credible risk of causing injury, death, and violations of human rights. The event thus fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving serious harm. It is not an AI Incident because no actual harm has been reported yet, nor is it Complementary Information or Unrelated, as the focus is on the deployment and potential risks of AI systems in warfare.

Humanoid war robots 'game-changing' or dangerous on Ukraine frontlines

2026-03-15
Pune Mirror
Why's our monitor labelling this an incident or hazard?
The humanoid war robots described are AI systems involved in military operations, with the potential to use weapons and operate in combat environments. Although no specific harm has been reported yet, their deployment in an active war zone plausibly leads to injury or death and other serious harms. The article highlights the experimental and unstable nature of these systems, indicating risks inherent in their use. Since the harm is potential and not yet realized, this event fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to prior incidents, so it is not Complementary Information. It is clearly related to AI systems, so it is not Unrelated.

Rise of the AI soldiers

2026-03-17
The Independent Uganda
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (humanoid robots like Phantom and autonomous drones) being used in active combat, including autonomous firing and target elimination, which directly causes harm to human soldiers and mercenaries. This meets the definition of an AI Incident as the AI systems' use has directly led to injury and death (harm to persons). The article also discusses the broader implications and risks but the primary focus is on the ongoing use and harm caused by these AI systems in warfare, not just potential future harm or complementary information. Therefore, the event is classified as an AI Incident.

Humanoid Soldier Robots Are Now Being Tested On The Battlefield In Ukraine

2026-03-17
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled humanoid robots being tested in a real battlefield environment, performing autonomous navigation and reconnaissance tasks. The robots' use in military operations inherently carries risks of injury or harm to people and potential violations of human rights. Although no direct harm is reported yet, the deployment in an active conflict zone and the robots' capabilities make it plausible that harm could occur. Hence, this is an AI Hazard rather than an AI Incident or Complementary Information.

Ukraine has received "combat" humanoid robots from the US, media report

2026-03-13
unian
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems used in military operations, which directly relate to potential harm in warfare (injury or harm to persons, harm to communities). The article states these robots have been sent to Ukraine and are being tested in combat conditions, implying active use of AI systems in a conflict with inherent risks. The mention of AI imperfections and risks to allies and enemies further supports the presence of direct or indirect harm potential. Since the robots are already deployed and used, this is not merely a potential hazard but an incident involving AI systems contributing to harm in a real-world conflict.

Ukraine has received Phantom MK-1 humanoid combat robots for testing

2026-03-13
ФОКУС
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems designed for combat roles, including reconnaissance and armed engagement. Their deployment in Ukraine's active war zone means they are being used in ways that can directly cause injury or death, fulfilling the harm criteria (a) injury or harm to persons. The article explicitly states their use in combat and the potential to replace human soldiers, indicating direct involvement of AI in causing harm. This is not merely a potential risk but an ongoing use in warfare, which constitutes an AI Incident rather than a hazard or complementary information. The article also references the use of AI for training autonomous drones, reinforcing the military AI context. Hence, the event is best classified as an AI Incident.

The robot war is near: humanoid soldiers will be tested at the front in Ukraine

2026-03-13
ZN.UA
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit in the humanoid robots used as soldiers. Their use in active conflict zones and the described risks (malfunction, hacking, unpredictable AI behavior) create a credible risk of harm to people and military operations. However, the article does not report any actual harm or incidents caused by these robots yet, only potential risks and ongoing testing. Therefore, the event is best classified as an AI Hazard, reflecting plausible future harm from the development and use of these AI-enabled humanoid soldiers.

Ukraine is testing Phantom MK-1 humanoid robots

2026-03-13
ZAXID.NET
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems used in military operations, including reconnaissance and combat support. Their deployment in active war zones and use in frontline defense indicates direct involvement of AI systems in situations with high risk of injury or harm to persons and communities. The article mentions risks such as hacking and loss of confidential information, which are relevant to the AI system's malfunction or misuse. Since the robots are actively used and have operational impact on the battlefield, this constitutes an AI Incident due to direct or indirect harm linked to AI system use in warfare.

Ukraine has received humanoid robots for testing

2026-03-13
УКРІНФОРМ
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems with autonomous capabilities used in military contexts. Their deployment and testing in combat zones directly relate to the use of AI systems in ways that could lead to harm, including injury or death in warfare, and raise concerns about military escalation and ethical issues. Although no specific harm has yet been reported from their use, the article clearly indicates ongoing testing and potential future use in combat scenarios, which plausibly could lead to AI incidents involving injury or harm to people. Therefore, this event qualifies as an AI Hazard due to the credible risk of harm from the use of these AI-enabled humanoid soldier robots in warfare.

Phantom MK-1 humanoid robots have been handed over to Ukraine for testing

2026-03-13
InternetUA
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems explicitly described as humanoid soldiers with autonomous capabilities, deployed in an active war zone where they support reconnaissance and potentially combat operations. The article details direct involvement of these AI systems in a context where harm to human life is occurring, fulfilling the criteria for an AI Incident. The discussion of ethical and legal challenges, risks of malfunction, and autonomous lethal action further supports this classification. Although some risks are potential, the robots are already in use in a lethal environment, making the harm direct or indirect and realized. Therefore, this event is best classified as an AI Incident rather than a hazard or complementary information.

Ukraine has received Phantom MK-1 humanoid robots for frontline testing: what they will be used for

2026-03-13
Прямий
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems capable of autonomous or semi-autonomous military operations, including reconnaissance and potentially lethal force. The article does not report any realized harm or incident caused by these robots yet but highlights significant risks and ethical concerns about their use in warfare. The presence of these robots on the front lines and their intended use in combat create a credible risk of injury, violation of rights, or other harms. Since harm is plausible but not yet realized, the event fits the definition of an AI Hazard rather than an AI Incident. The article also discusses broader implications and risks, but the main focus is on the potential for harm rather than a current incident or complementary information about responses or governance.

In February, the Foundation company delivered two humanoid soldier robots to Ukraine

2026-03-13
Високий Замок
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as humanoid soldier robots with AI capabilities used in active military operations in Ukraine. Their deployment and use in combat and reconnaissance directly relate to harm to persons and communities, fulfilling the criteria for an AI Incident. The article does not merely discuss potential future risks but reports actual deployment and use, indicating realized or ongoing harm. Hence, it is not a hazard or complementary information but an AI Incident.

An American company has delivered two humanoid soldier robots to Ukraine - Time

2026-03-13
Межа
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems designed for military use, involving autonomous or semi-autonomous operation with lethal capabilities. Although the article does not describe any realized harm or incident caused by these robots, their deployment in an active war zone and their potential to engage in combat without direct human control (beyond operator confirmation) create a credible risk of harm to people and communities. The ethical concerns about lowering barriers to conflict and responsibility further support the classification as a hazard. Since no actual harm has yet occurred, this event is best classified as an AI Hazard rather than an AI Incident.

The AFU are no longer just cyborgs: Phantom humanoid robots have begun fighting for us - the details are stunning

2026-03-13
Ukrainianwall.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (humanoid robots with AI capabilities) being used in military operations. While no direct harm is reported yet, the article discusses plausible risks including AI errors and security vulnerabilities that could lead to harm. The deployment of AI-enabled combat robots in warfare inherently carries credible risks of injury, human rights violations, or other significant harms. Since harm is plausible but not yet realized, this fits the definition of an AI Hazard rather than an AI Incident. The article is not merely general AI news or complementary information, as it focuses on the potential risks and deployment of AI systems in a high-stakes context.

Phantoms at the front: combat robots are being tested in Ukraine

2026-03-14
InternetUA
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as humanoid combat robots with autonomous capabilities tested and used in an active war zone (Ukraine). The AI systems' use in military operations directly leads to harm to people and communities, fulfilling the criteria for an AI Incident. The article also discusses risks of malfunction or misuse (e.g., enemy hacking), which further supports the classification. The presence of AI in these robots is clear, and their deployment in combat situations inherently involves injury or harm to persons, meeting the definition of an AI Incident rather than a hazard or complementary information.

The AFU will test how American Phantom MK-1 robot soldiers fight

2026-03-15
espreso.tv
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous humanoid military robots designed for combat use. Although no direct harm or incident has occurred yet, the article discusses plausible future harms such as misuse, hacking, malfunction, and ethical concerns related to autonomous weapons. These risks align with the definition of an AI Hazard, as the development and potential deployment of these robots could plausibly lead to injury, violations of rights, or harm to communities. Since no realized harm is reported, it is not an AI Incident. The article is not merely complementary information because it focuses on the potential risks and implications of these AI systems rather than updates or responses to past incidents. Therefore, the correct classification is AI Hazard.

Ukraine is testing humanoid robots at the front: what they look like (PHOTOS)

2026-03-15
Комментарии Украина
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled humanoid robots being tested in a war zone, with potential future weaponization. While no harm has yet occurred, the use of such AI systems in military operations plausibly could lead to injury, violation of rights, or other harms. The event is about ongoing testing and development, not a realized incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the event and their potential risks are discussed.

US flags China's humanoid robot surge. Why is Washington worried?

2026-03-18
Business Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of humanoid robots that combine AI, sensors, and actuators operating in physical environments. The concerns raised relate to the potential for these systems to impact industrial supply chains and national security, including military uses, which could plausibly lead to harms such as disruption of critical infrastructure or dual-use military risks. However, no direct or indirect harm has yet occurred according to the article. The main focus is on warning and policy proposals to address these emerging risks. This fits the definition of an AI Hazard, as it describes circumstances where AI system development and use could plausibly lead to an AI Incident in the future.

US sounds alarm over China's humanoid robots amid security concerns

2026-03-17
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The article centers on the potential national security risks posed by Chinese humanoid robots and the call for government action to counter these risks. There is no description of realized harm or incidents caused by AI systems. The event is about the plausible future threat and strategic responses, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it involves AI systems and their implications.

U.S. Firm Deploys Humanoid Robots in Ukraine for Field Testing - NaturalNews.com

2026-03-18
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions humanoid robots with AI capabilities deployed in an active warzone for military testing, including reconnaissance and logistics, with potential combat roles. This involves AI systems used in a context where harm to persons and violations of international law are plausible. While no specific incident of harm is reported, the deployment in a conflict zone and the potential for armed use create a credible risk of harm. The event does not describe a realized harm but a plausible future harm from the AI system's use, fitting the definition of an AI Hazard rather than an AI Incident. The article also discusses broader implications and concerns, but the primary focus is on the deployment and testing with potential for harm, not on a realized incident or a governance response, so it is not Complementary Information.

RealSense unveils autonomous humanoid navigation at GTC 2026

2026-03-16
The Robot Report
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (autonomous humanoid navigation using AI perception and mapping technologies). However, the article does not describe any realized harm or incident caused by the AI system, nor does it indicate any plausible immediate risk of harm. Instead, it highlights technological progress and safety improvements in humanoid robot navigation. Therefore, the event is best classified as Complementary Information, as it provides context and updates on AI system development and safety without reporting an incident or hazard.

Humanoid Military Robots Deployed to Ukraine for Battlefield Testing

2026-03-17
eWEEK
Why's our monitor labelling this an incident or hazard?
The humanoid robots are AI systems designed for military use, explicitly described as operating in dangerous environments and potentially carrying weapons in the future. Their deployment in an active war zone for reconnaissance and other military tasks involves the use of AI in contexts where harm to humans and disruption of infrastructure is a credible risk. The article does not report any realized harm or incidents caused by these robots yet, but the plausible future harm from their use in combat justifies classification as an AI Hazard. The mention of cybersecurity risks and AI imperfections further supports the potential for harm. Since no actual harm has been reported, it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the deployment and implications of AI military robots in a conflict zone.

US warns of 'robot race' with China as humanoid tech rivalry intensifies

2026-03-18
AzerNews
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (humanoid robots) and discusses their rapid development and potential national security risks, which could plausibly lead to harms such as disruption or security threats. However, there is no indication that any harm has yet occurred or that an incident has taken place. The main content is about warnings, policy discussions, and strategic considerations, which fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their implications.

TI Accelerates The Next Generation Of Physical AI With NVIDIA - Manufacturing AUTOMATION

2026-03-16
Manufacturing AUTOMATION
Why's our monitor labelling this an incident or hazard?
The article discusses the development and integration of AI systems for humanoid robots, emphasizing safety and perception improvements. However, it does not report any realized harm, malfunction, or misuse of these AI systems. There is no indication of an incident or hazard occurring or imminent. The focus is on enabling safer deployment and advancing technology, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments without describing a specific AI Incident or AI Hazard.

Ukraine is already testing humanoid robots at the front: they can wield weapons and replace soldiers

2026-03-17
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (humanoid robots with AI for navigation and movement) used in a military context where harm to humans is a direct concern. The robots can carry and potentially use weapons, and their deployment on the frontline means their use is directly linked to risks of injury or death. Although the robots are currently controlled by human operators for lethal decisions, their autonomous movement and operational role in combat zones mean they are part of an AI system whose use has led or could lead to harm. Therefore, this qualifies as an AI Incident under the definition of harm to persons resulting from the use of AI systems in a critical and high-risk environment.

Ukraine deploys the first humanoid war robots in history

2026-03-16
El Confidencial
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems with autonomous navigation and decision-making capabilities under human supervision, deployed in active war zones performing reconnaissance and potentially combat roles. Their use directly impacts human safety and warfare outcomes, fulfilling the criteria for an AI Incident due to direct involvement in harm to persons. The article also highlights the moral and legal implications of AI decisions in lethal contexts, reinforcing the classification as an AI Incident rather than a mere hazard or complementary information.

Ukraine tests the first humanoid combat robots at the front

2026-03-17
La Razón
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems designed for autonomous navigation, movement decision-making, and operation in dangerous environments, including military combat. Their deployment on the front lines in Ukraine means the AI systems are actively used in a context where harm to human life is occurring or highly likely. The article explicitly states their use in reconnaissance and potentially in weapons operation, which directly relates to injury or harm to persons (harm category a). The AI system's involvement is in its use, and the harm is either occurring or imminent given the combat environment. This meets the criteria for an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential future risks or responses but reports actual deployment and use in a conflict zone, confirming the potential for realized harm.

A company sends humanoid soldier robots to Ukraine, but they fall over on their own: the truth behind the Phantom MK-1

2026-03-17
La Razón
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems intended for military use, involving autonomous or semi-autonomous operation in combat zones. The article documents their malfunctioning (frequent falls, failure due to electrostatic discharge) during demonstrations and testing, which could plausibly lead to operational failures or harm if deployed in real combat. Although no actual harm or incident has been reported yet, the described technical fragility and the intended use in warfare create a credible risk of future harm or disruption. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its malfunction are central to the report.

Ukraine tests humanoid robots at the front for the first time: the soldiers of the future?

2026-03-17
Libertad Digital
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (humanoid robots with autonomous movement and reconnaissance functions) being used in a military conflict. Although currently under human control and limited in scope, their deployment in a warzone introduces a credible risk of future harm, such as injury or violation of human rights, if these systems malfunction or are used autonomously. Since no actual harm or incident has occurred yet, but plausible future harm exists, this qualifies as an AI Hazard rather than an AI Incident.

Ukraine tests 'Phantom MK-1' humanoid robots with weapons capability on the war front

2026-03-17
Diario de Sevilla
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems as they use advanced AI for autonomous movement, obstacle avoidance, and tactical reconnaissance. They are deployed in an active war zone with armament capabilities, which directly implicates them in harm to persons and communities (harm category a and d). The article explicitly states their use in combat and reconnaissance missions, indicating realized harm or at least direct involvement in harm. Although lethal decisions require human authorization, the AI system's autonomous navigation and operational role contribute directly to military actions causing harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Ukraine tests AI humanoid robots in the war against Russia: what are the Phantom MK-1 and what are they for?

2026-03-18
EL IMPARCIAL | Noticias de México y el mundo
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 is an AI system designed for military use, involving autonomous or semi-autonomous operation in combat environments. Its deployment in Ukraine for testing under real war conditions, combined with reported malfunctions and the potential for autonomous use of lethal force in the future, creates a credible risk of harm to persons or communities. Since no actual harm or injury caused by the robot is reported yet, but the plausible future harm is significant, the event fits the definition of an AI Hazard rather than an AI Incident. The article also discusses broader governance and ethical concerns, reinforcing the hazard classification.

Humanoids at the front: Ukraine deploys combat robots and marks a milestone in modern warfare

2026-03-17
ADN Radio 91.7 Chile
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots qualify as AI systems because they perform autonomous or semi-autonomous tasks such as reconnaissance and explosive disposal, involving complex decision-making and sensing. Their deployment in active combat zones means their use has directly led to or could lead to injury or harm to persons (soldiers, civilians) and harm to communities due to warfare. The article explicitly states their operational use in frontline combat, indicating realized use rather than hypothetical future risk. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI systems in causing or enabling harm in a conflict setting.

Phantom MK-1: these are the first humanoid robots deployed in the war in Ukraine

2026-03-17
Portafolio.co
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI-enabled humanoid systems used in real combat conditions, controlled by humans but capable of complex tasks in hostile environments. While no direct or indirect harm has been reported, their deployment in warfare introduces credible risks of injury, escalation, or misuse in the future. The article emphasizes their experimental status and current non-autonomous operation, indicating no incident has occurred yet. Thus, the event fits the definition of an AI Hazard, as the use of such AI systems could plausibly lead to harm in the future.

Ukraine is already testing humanoid robots on the war front, and it is not just another experiment. It is the first time human-shaped machines have entered real combat, and it changes how we understand modern warfare

2026-03-18
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems with autonomous capabilities used in real combat, which directly involves them in causing or mitigating harm in warfare. The article explicitly states their deployment in an active war zone, their use of weapons, and their role in replacing human soldiers in dangerous tasks. This meets the criteria for an AI Incident as the AI system's use has directly led to harm or risk of harm to persons and communities. The ethical concerns and operational risks further support the classification as an incident rather than a mere hazard or complementary information.

The US changes the rules: it sends Ukraine two humanoid war robots worth 130,000 euros to test them at the front

2026-03-16
Vandal
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems involved in military operations, specifically reconnaissance and support in combat zones. Their deployment in Ukraine is a real event involving AI use, but the article does not mention any actual injury, death, or other harm caused by these robots yet. The potential for harm is credible and significant given their intended military application and future plans for autonomous weapon use under human control. Since no harm has materialized but plausible future harm exists, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems in a context with potential for harm.

Ukraine deploys armed humanoid robots: this is the Phantom MK-1 - PasionMóvil

2026-03-19
PasionMovil
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems capable of autonomous navigation and operation in combat environments, equipped with lethal weapons. Their deployment in Ukraine's frontline, even in testing mode, directly involves AI in potentially lethal military actions, which can cause injury or death, fulfilling the harm criteria for an AI Incident. The human-in-the-loop design does not negate the AI system's role in the harm chain, as the AI controls movement and navigation autonomously and supports lethal operations. The article describes actual deployment and use, not just potential future harm, so this is not merely a hazard. The event is not complementary information or unrelated, as it reports a significant AI system use with direct implications for harm.

"Robots don't bleed": for the first time, American humanoid robots will be deployed on the Ukrainian front

2026-03-17
BFMTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of humanoid robots and UGVs deployed in active combat roles, including armed drones and robots performing reconnaissance and potentially lethal tasks. These systems are directly involved in military operations that cause harm to human life and communities, fulfilling the criteria for an AI Incident. The deployment is not hypothetical or potential but already occurring, with documented missions lasting weeks on the frontline. The ethical concerns about autonomous lethal capabilities further emphasize the gravity of the harm. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

War in Ukraine: the first humanoid robots specifically designed for war; Ukraine has deployed the Phantom MK-1 soldier robots at the front for the first time

2026-03-14
lindependant.fr
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems explicitly described as humanoid autonomous soldiers deployed in an active war zone, performing reconnaissance and combat roles. Their use in warfare inherently involves direct risk of injury or death to people, fulfilling the harm criteria. The article describes their deployment and operational use, not just potential or future risks, indicating realized harm or imminent harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

"It inspires visceral terror": the Phantom MK-1 combat robot sent to Ukraine for front-line testing

2026-03-13
La Voix du Nord
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 is an AI system (a humanoid combat robot with autonomous capabilities) deployed in an active war zone, carrying weapons and intended for combat missions. Its use directly involves the risk of injury or death to soldiers and civilians, fulfilling the harm criteria (a) injury or harm to persons or groups. The article reports actual deployment and testing on the front lines, not just theoretical or future potential, indicating realized harm or imminent risk. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

It's happening: humanoid soldier robots are arriving in Ukraine - Numerama

2026-03-16
Numerama.com
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems capable of manipulating firearms and operating in combat environments, which inherently involves risks of injury or death. Their deployment on the front lines in Ukraine means the AI systems are actively used in a context where harm is occurring or highly likely. The article explicitly describes their use in armed conflict, which meets the criteria for harm to persons and disruption of critical infrastructure (military operations). Hence, this is an AI Incident rather than a hazard or complementary information.

War in Ukraine: the first armed humanoid robots appear at the front

2026-03-17
Planet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-enabled humanoid robots armed with assault rifles on the front lines in Ukraine, which directly involves AI systems in a context that can cause injury or harm to people (harm (a)) and has implications for military operations (harm (b)). The robots' deployment in active combat, even with human oversight on firing, means the AI systems are contributing factors in a conflict environment with potential for harm. The operational failures and ethical concerns further underscore the risks. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information, as harm is ongoing or imminent due to the AI system's use in warfare.

War in Ukraine: combat robots tested on the battlefield - ZDNET

2026-03-17
ZDNet
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (humanoid combat robots with autonomous or semi-autonomous capabilities) being used in an active war zone, which inherently carries risks of physical harm to people and disruption of critical infrastructure. Although the article does not report a specific incident of harm yet, the deployment of such AI systems in combat plausibly leads to AI incidents due to the high-risk environment and the potential for malfunction or misuse. Therefore, this qualifies as an AI Hazard because it plausibly could lead to harm, but no specific harm event is described as having occurred yet.

The unthinkable has happened: humanoid robots are fighting in Ukraine!

2026-03-16
Génération-NT
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (humanoid robots with AI for navigation, perception, and potentially weapon manipulation) actively deployed in a war zone, performing tasks that directly influence physical environments and human safety. The article highlights real use in combat, with risks of malfunction and ethical concerns about lethal autonomous capabilities. The AI system's use is directly linked to potential or actual harm to human life and the conduct of warfare, fitting the definition of an AI Incident. Although lethal decisions are currently human-controlled, the AI's role in movement, perception, and potential weapon handling, combined with the operational context, means the AI system's development and use have directly or indirectly led to significant harm or risk thereof. This surpasses a mere hazard or complementary information classification.

War in Ukraine: humanoid soldier robots have arrived at the front

2026-03-17
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (humanoid soldier robots with autonomous capabilities) in an active war zone, which directly relates to harm to people (soldiers and civilians) and raises significant ethical and legal concerns. The robots are actively deployed and tested in real combat conditions, indicating realized use rather than hypothetical risk. The article discusses the potential for harm, escalation, and loss of human control, all linked to the AI system's development and use. Hence, it meets the criteria for an AI Incident rather than a mere hazard or complementary information.

From Pikachu to pizza delivery: how Niantic recycles your movements in "Pokémon Go" and 30 billion photos to help robots know exactly where to deliver to you

2026-03-16
BFMTV
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and use of an AI system (world model for robot navigation) based on data from Pokémon Go players. There is no mention or implication of injury, rights violations, property damage, or other harms caused by this AI system. The narrative is about technological progress and potential benefits in robot delivery precision. Hence, it does not meet the criteria for AI Incident or AI Hazard. It is best classified as Complementary Information because it provides important context and insight into AI applications and ecosystem developments without reporting any harm or credible risk of harm.

Autonomous robots soon on the battlefield?

2026-03-14
Franceinfo
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by AI systems. Instead, it discusses the plausible future development and deployment of AI-enabled autonomous military robots, which could potentially lead to significant harms if deployed without sufficient safeguards. The mention of AI-generated fake videos is contextual and does not describe an incident of harm. Therefore, the event describes a credible potential risk (hazard) related to AI systems in military applications, but no actual harm or incident has occurred yet.

In China, "robot schools" train humanoids that may one day take your job

2026-03-16
Sciencepost
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems—specifically humanoid robots with AI-based motor control trained through physical data collection. Although no actual harm has occurred yet, the article implies a plausible future risk of job displacement and societal disruption due to these robots potentially replacing human labor. This constitutes a credible potential harm linked to the AI systems' deployment. Therefore, the event qualifies as an AI Hazard because it plausibly could lead to harm (economic and social) through the widespread use of AI-powered humanoid robots, but no direct harm is currently reported.

Ukraine: humanoid robots are sent near the front for testing

2026-03-16
KultureGeek
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems used in a military conflict zone, currently in testing phases without reported harm. However, the article explicitly discusses the potential risks related to autonomy, errors, and hacking, which could plausibly lead to incidents involving injury, violations of rights, or other harms. Since no actual harm has been reported yet, but credible future risks exist, this event qualifies as an AI Hazard rather than an Incident. It is more than complementary information because it focuses on the deployment and associated risks, not just responses or ecosystem context.

War in Ukraine: it is called the Phantom MK-1, and this humanoid robot could soon be fighting at the front

2026-03-18
Ladepeche.fr
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 qualifies as an AI system: it is a humanoid robot capable of autonomous or semi-autonomous operations, including reconnaissance and potentially offensive combat actions. The article states that these robots have been deployed for reconnaissance and are intended for combat roles, which could directly lead to injury or death (harm to persons) and other war-related harms. However, the article does not report any actual harm caused by these robots yet, only their potential and planned use in combat. Thus, the event does not meet the threshold for an AI Incident but clearly represents an AI Hazard, because the development and deployment of these AI-enabled combat robots could plausibly lead to significant harm in the future. The ethical concerns raised further support the classification as a hazard with serious implications.

China or the United States: who will win the frantic race for humanoid robots in industry?

2026-03-17
LesEchos.fr
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and production race of humanoid robots, which are AI systems, but does not describe any actual harm or incident caused by these systems. The event is about the potential future impact of these robots as they become more widespread. Therefore, it fits the definition of an AI Hazard, as the development and deployment of humanoid robots could plausibly lead to AI incidents in the future, but no harm has yet occurred or been reported.

The Terminator era begins: Phantom MK-1 humanoid robots at the front in Ukraine

2026-03-19
kurir.rs
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 humanoid robots are AI systems used in military operations, performing tasks that involve significant risk to human life. Although the article does not mention any actual harm or malfunction caused by these robots, their deployment in active combat zones inherently carries a credible risk of causing injury or death, or other harms associated with warfare. The human-in-the-loop control mitigates but does not eliminate the risk. Since no harm has yet occurred or been reported, but plausible future harm is credible, this event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the article focuses on the deployment and capabilities of the AI system rather than responses or updates to prior incidents. It is not Unrelated because the event clearly involves AI systems with potential for harm.

Humanoid Soldier Robots Arrive in Ukraine as Battlefield Testing of the Technology Begins

2026-03-17
Klix.ba
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems designed for military combat roles, including reconnaissance and handling weapons. Their deployment on the front lines in an active war zone means their use could plausibly lead to injury or death, disruption, and other harms associated with warfare. While no specific harm has been reported yet, the nature of their use and the context of armed conflict make it a credible AI hazard. The article focuses on the testing and deployment phase, indicating potential future harm rather than a realized incident. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

A New Phase of Warfare: Humanoid Robots Arrive on the Ukrainian Battlefield

2026-03-19
Vecernji.hr
Why's our monitor labelling this an incident or hazard?
The humanoid robots are AI systems designed for military use, and their deployment in an active conflict zone implies a plausible risk of causing harm. Although the article does not report any actual harm or incident caused by these robots yet, the potential for injury, death, or other serious consequences is credible and foreseeable. Hence, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

A New Era of Warfare: Humanoid Robots on the Front Lines of the Battlefield

2026-03-17
Dnevnik.ba
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions humanoid robots equipped with AI systems being deployed and tested on the front lines of an active war zone. The robots perform reconnaissance and other high-risk tasks, which involve AI decision-making and autonomous navigation. While no actual harm or malfunction is reported, the use of AI in lethal or dangerous military applications inherently carries a credible risk of injury, death, or other harms. The article also highlights plans for large-scale production and deployment, increasing the potential impact. Since no realized harm is described, but plausible future harm is evident, the event is best classified as an AI Hazard.

Ukraine Becomes a Testing Ground for New Military Technologies: American Startup Sends 80 kg Robots

2026-03-19
Ubrzanje Telegraf
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (humanoid robots with AI-based navigation and control) being deployed in a real war zone for combat testing. Although no direct harm or incident is reported yet, the use of such AI-enabled military robots in active combat plausibly could lead to injury, death, or other significant harms. The article highlights the potential for these robots to perform dangerous military roles, which inherently carry risks. Since harm is plausible but not yet realized or documented, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the deployment and testing of AI military robots with potential for harm, not on responses or broader ecosystem context.

Gun-Wielding Robots Appear on the Russia-Ukraine Battlefield: Is the "Terminator" Really Here? | The Beijing News Column

2026-03-26
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes humanoid armed robots with autonomous capabilities (AI systems) actively deployed in a real warzone (Ukraine). These robots perform tasks that directly influence combat outcomes and soldier safety, thus directly contributing to harm (injury or harm to persons in warfare). The AI systems' use in lethal military applications and their impact on warfare dynamics meet the criteria for an AI Incident. The article also discusses the ethical and legal implications, reinforcing the significance of the harm caused. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Gun-Wielding Robots Spotted on the Russia-Ukraine Battlefield: 180 cm Tall, 80 kg Weight, 20 kg Payload

2026-03-25
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The robots are explicitly described as AI systems with autonomous decision-making capabilities in a military context, including lethal functions. While the article does not report actual harm occurring yet, the deployment of armed AI robots in warfare inherently carries a credible risk of causing injury, death, and violations of human rights. The planned large-scale production and use further increase the plausibility of future harm. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident, as harm is plausible but not yet realized.

The United States Officially Enters the Era of Killer Robots

2026-03-23
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems as they perform complex tasks such as navigation, manipulation, and potentially autonomous or semi-autonomous operation in combat environments. Although currently controlled by humans, their deployment in active war zones with lethal capabilities means they are directly involved in causing harm or risk of harm to humans, fulfilling the criteria for an AI Incident. The article explicitly states these robots are deployed and used in combat scenarios, which involves direct or indirect harm to people. The discussion of future fully autonomous AI weapons further supports the classification but does not change the fact that the current deployment already constitutes an incident. Therefore, this is not merely a hazard or complementary information but an AI Incident.

Gun-Wielding Robots Appear on the Russia-Ukraine Battlefield; "Future Soldiers" Raise Concerns

2026-03-25
China.com Military Channel
Why's our monitor labelling this an incident or hazard?
The AI-powered armed robots actively used in a war zone are AI systems whose use directly leads to potential harm, including injury or death, which fits the definition of an AI Incident. Although the article does not report specific harm caused yet, the deployment and operational use of autonomous armed robots in combat constitutes direct involvement of AI systems in causing or enabling harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

US Media: Gun-Wielding Robots on the Russia-Ukraine Battlefield Are the New Protagonists of Future Warfare

2026-03-26
China.com Military Channel
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems with autonomous decision-making and lethal capabilities deployed in an active warzone, directly involving AI in causing or enabling harm to people. The article details their use in combat roles, the potential for malfunction or hacking, and the ethical concerns about lethal autonomous weapons. This meets the criteria for an AI Incident because the AI system's use has directly led to harm or the risk thereof in a real-world conflict. The presence of autonomous lethal robots in warfare is a clear case of AI causing or enabling harm to persons and communities, not merely a potential hazard or complementary information.

Gun-Wielding Robots Spotted on the Russia-Ukraine Battlefield: A New Trend in Future Warfare

2026-03-26
China.com Technology
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are equipped with advanced AI systems that autonomously assess and act in a combat environment, including armed engagement capabilities. This clearly involves AI systems in a context where harm to persons and disruption of critical infrastructure is a direct consequence. The deployment of such AI-armed robots in an active war zone meets the criteria for an AI Incident because the AI system's use directly leads to potential or actual harm. The article's mention of current deployment and planned mass production further supports the classification as an incident rather than a mere hazard or complementary information.

Can Humanoid Combat Robots Really Change the Course of the Russia-Ukraine War? Combat-Ready Design Draws Attention

2026-03-26
China.com Technology
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robot is an AI system explicitly described as deployed in active combat, performing reconnaissance and capable of weapon operation under human control. Its AI autonomously assesses the battlefield and plans movement, directly influencing military decisions and potentially causing harm. The deployment in an active war zone with weaponized AI systems meets the criteria for an AI Incident due to direct involvement in harm (injury, death, and ethical violations). The article also highlights concerns about the ethical and legal implications of such autonomous military AI, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

US Media: Humanoid Robots Appear on the Russia-Ukraine Battlefield; Gun-Wielding Robots Spark Heated Debate

2026-03-27
China.com Technology
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Phantom MK-1 humanoid robot) that is actively used in a war zone, carrying weapons and making autonomous decisions based on AI. The use of such robots in combat directly relates to harm to persons and communities (harm categories a and d). The article states that these robots are deployed and actively participating in combat roles, which means harm is occurring or highly likely. Therefore, this is an AI Incident rather than a hazard or complementary information. The presence of AI in autonomous decision-making and weapon use is explicit, and the harm is direct and materialized in the context of armed conflict.

Has the "Terminator" Arrived on the Russia-Ukraine Battlefield?

2026-03-26
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: humanoid armed robots with autonomous decision-making and operational capabilities. Their deployment in active combat roles directly impacts human safety and warfare conduct, fulfilling the criteria for harm to persons and communities. The article reports actual use on the battlefield, not just potential or theoretical risks, thus constituting an AI Incident. The harms include direct physical risk to humans, potential escalation of conflict, and violations of international law and ethics related to autonomous weapons. Therefore, this is an AI Incident rather than a hazard or complementary information.

Gun-Wielding Robots Spotted on the Russia-Ukraine Battlefield: 180 cm Tall, 80 kg Weight, 20 kg Payload, with AI to Assess the Battlefield, Scout, and Shoot

2026-03-25
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system integrated into humanoid robots with autonomous battlefield assessment and shooting capabilities. While no direct harm is reported yet, the deployment of armed AI robots in a conflict zone presents a credible risk of injury or death, fulfilling the criteria for an AI Hazard. The event does not describe an actual incident of harm caused by the AI system but highlights a plausible future risk. Hence, it is classified as an AI Hazard rather than an AI Incident.

The Rise of AI Soldiers? Gun-Wielding Humanoid Robots Spotted on the Russia-Ukraine Battlefield

2026-03-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Phantom MK-1 humanoid robot with AI for battlefield decision-making) that is actively deployed in a war zone, carrying weapons and performing combat-related tasks. The AI's autonomous decision-making in a lethal context directly relates to harm to persons and potential violations of human rights and international law. The article states that these robots are already in use on the battlefield, indicating realized harm or at least direct involvement in harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

[International Developments] The Rise of AI Soldiers? Gun-Wielding Humanoid Robots Spotted on the Russia-Ukraine Battlefield

2026-03-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered humanoid robots armed with weapons being tested and deployed in a war zone, with autonomous decision-making capabilities. This clearly involves AI systems in a context where harm to persons (soldiers and civilians) is highly plausible and likely. The use of AI in lethal autonomous weapons systems is a recognized source of significant harm and ethical concern. Since the robots are already deployed and operational in a conflict, the harm is not just potential but ongoing or imminent, qualifying this as an AI Incident under the framework.

AI-Powered Combat Robots Enter the Battlefield

2026-03-27
Al Arabiya
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly, namely AI-powered combat robots and unmanned vehicles used in military contexts. Although it does not report any realized harm or incident, it highlights the plausible risks and challenges that could lead to significant harm, such as misuse, cyberattacks, or loss of human control in lethal decisions. This fits the definition of an AI Hazard, as the development and deployment of these AI systems could plausibly lead to AI Incidents involving harm to people, communities, or violation of rights. There is no indication of an actual incident or complementary information about responses or governance measures, so it is not an AI Incident or Complementary Information. It is not unrelated because the article centers on AI systems and their implications.

War and Artificial Intelligence: The Technologies of the Future Are Already on the Ground

2026-03-27
Masrawy.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems embedded in military robots and autonomous vehicles used in warfare, which qualifies as AI system involvement. Although no direct harm or incident is described, the text highlights significant plausible future harms, including risks of cyberattacks, misuse, and operational failures in complex environments. The development and testing of AI-enabled military robots with lethal or strategic capabilities inherently carry credible risks of harm to people and communities, fitting the definition of an AI Hazard. There is no indication of an actual incident or complementary information about responses or governance, so AI Hazard is the appropriate classification.

Smart Military Robots Enter the Battlefield: Have the Rules of War Changed? | Al Araby TV

2026-03-28
Al Araby TV
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into military robots that are currently being tested and used in combat environments. These systems perform complex autonomous functions such as environment sensing, navigation, and tactical analysis, fitting the definition of AI systems. Although no direct harm or incident is reported, the article discusses credible risks and challenges that could plausibly lead to harms such as injury, disruption, or escalation of conflict. The presence of AI in these military robots and the potential for future harm aligns with the definition of an AI Hazard. There is no indication of a realized harm or incident, so it cannot be classified as an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the plausible risks of AI military robots.

AI-Powered Combat Robots Enter the Battlefield

2026-03-28
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as combat robots and unmanned vehicles using AI for complex tasks in military operations. Although no direct harm has yet occurred as these systems are still in testing and human operators retain lethal decision authority, the article clearly outlines plausible future harms from their deployment, including risks of misuse and cyberattacks. Therefore, this qualifies as an AI Hazard because the AI systems' development and intended use could plausibly lead to incidents involving injury, violation of rights, or harm to communities in warfare contexts.

AI-Powered Robots Enter the Battlefield

2026-03-29
Al-Nabaa News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems integrated into military robots and unmanned vehicles used in conflict zones, with AI analyzing data and suggesting decisions, though humans retain final authority. The mention of cybersecurity risks and misuse indicates plausible pathways to harm. No actual harm or incident is reported, but the development and testing of such AI-enabled systems with lethal or operational capabilities in warfare plausibly could lead to injury, violation of rights, or other harms. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Phantom MK-1: The Humanoid Robots Ukraine Will Test on the Battlefield

2026-03-18
NewsIT
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 humanoid robots involve AI systems for autonomous navigation and task execution under human supervision. Their deployment in active combat zones for reconnaissance and potentially weapon handling introduces a credible risk of harm (injury, death, or other battlefield consequences). Since the article does not report any actual harm yet but focuses on testing and potential future use, it fits the definition of an AI Hazard rather than an Incident. The AI system's development and use could plausibly lead to significant harm in the future, meeting the criteria for an AI Hazard.

Ukraine Deploys the First War Robots in History

2026-03-18
newsbomb.gr
Why's our monitor labelling this an incident or hazard?
The Phantom MK-1 robots are AI systems capable of autonomous movement and navigation, equipped with lethal weapons, and used in active combat. Their deployment in war directly leads to harm to people and communities, fulfilling the criteria for an AI Incident. The human-in-the-loop design does not negate the AI system's role in the harm, as the robots perform critical functions autonomously and are integral to military operations causing injury or death. The article explicitly states their use in warfare and the associated risks, confirming realized harm rather than potential harm.

Android Soldiers on the Front Line in Ukraine

2026-03-18
Naftemporiki
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (humanoid robots with AI for navigation and task execution) deployed in a war zone for combat-related tasks. The article does not mention any realized harm or malfunction but highlights the potential for these robots to perform dangerous roles that currently involve human soldiers. Given the context of active warfare and the nature of the AI system's intended use, there is a credible risk that their deployment could lead to injury, death, or other harms. Since no actual harm has been reported yet, the event is best classified as an AI Hazard rather than an AI Incident. It is not Complementary Information because the article focuses on the deployment and potential implications rather than updates or responses to a prior incident. It is not Unrelated because the AI system and its military use are central to the event.

Ukraine Is Battlefield-Testing the First Humanoid Robot Soldiers

2026-03-18
CNN.gr
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems with autonomous navigation and operational capabilities in a military context. Their deployment in an active war zone for reconnaissance and potentially armed operations involves the use of AI in a high-risk environment where harm to humans and communities is a credible risk. Since the article does not mention any realized harm or malfunction but focuses on the testing and potential future use, it fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or complementary information, as it highlights the plausible future harm from AI-enabled military robots in combat.

"The War of the Future Is Here": Humanoid Robots Are Being Tested on Ukraine's Battlefields

2026-03-18
bankingnews.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as humanoid military robots equipped with AI-based perception and capable of carrying weapons. Their deployment in active combat zones for testing implies a credible risk of causing injury or harm to people, disruption, or violations of rights, even if no harm has yet been reported. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the deployment and potential risks of these AI-enabled military robots.

Phantom MK-1: Ukraine to Test Humanoid Robots on the Battlefield [vid, pic]

2026-03-18
OnAlert
Why's our monitor labelling this an incident or hazard?
The humanoid robots described are AI systems due to their autonomous navigation and task execution capabilities. Their deployment in active conflict zones for reconnaissance and potential weapon handling implies a credible risk of harm (injury, death, or human rights violations) if they malfunction or are misused. Since the article focuses on testing and potential future use without reporting actual harm, it fits the definition of an AI Hazard, as the AI systems could plausibly lead to an AI Incident in the future. There is no indication of realized harm yet, so it is not an AI Incident. It is not merely complementary information because the main focus is on the potential battlefield use and associated risks, not on responses or updates to past incidents.

Ukraine Is Battlefield-Testing the First Humanoid Robot Soldiers

2026-03-18
Politis
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as humanoid military robots with autonomous capabilities being tested in an active war zone. The article does not mention any realized harm or malfunction but highlights the potential for these AI systems to be used in dangerous military operations, which could plausibly lead to injury or harm to people. The human-in-the-loop design mitigates some risk but does not eliminate the plausible future harm. Hence, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Phantom MK-1: The Humanoid Robots Ukraine Is "Enlisting": The Revolution of Unmanned Systems on the Battlefield - GOVNews.gr

2026-03-18
GOVNews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions humanoid robots designed for reconnaissance and potentially armed combat roles, indicating AI system involvement. The use of such AI-enabled robots in warfare inherently carries risks of injury, death, and other harms. Since the robots are currently being tested and not yet reported to have caused harm, this situation fits the definition of an AI Hazard, as the development and use of these systems could plausibly lead to an AI Incident involving significant harm in the future.