China Unveils Armed AI Robotic 'Wolves' for Battlefield Use


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

China has publicly showcased a new generation of AI-powered robotic quadrupeds, dubbed 'robotic wolves,' designed to replace human soldiers in dangerous combat scenarios. These autonomous robots can navigate difficult terrain, coordinate in groups, and execute precise lethal actions, raising significant concerns about AI-driven harm in modern warfare.[AI generated]

Why's our monitor labelling this an incident or hazard?

The described military robots are AI systems with autonomous capabilities for lethal force application and battlefield coordination. Their development and potential deployment could plausibly lead to injury or harm to persons (harm category a) through autonomous attacks. The article does not report an actual incident of harm but highlights the capabilities and intended use, implying a credible risk of future harm. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated as it clearly involves AI systems with military applications and potential for harm.[AI generated]
AI principles
Accountability
Safety
Respect of human rights
Democracy & human autonomy

Industries
Government, security, and defence
Robots, sensors, and IT hardware

Affected stakeholders
General public

Harm types
Physical (death)
Physical (injury)
Public interest
Human or fundamental rights

Severity
AI hazard

AI system tasks
Recognition/object detection
Goal-driven organisation
Reasoning with knowledge structures/planning


Articles about this incident or hazard

China unleashes its robotic 'wolf pack'. The soulless beasts are changing the rules of modern warfare. The robots fire with precision and climb stairs

2025-08-06
adevarul.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems in the form of autonomous military robots ('robotic wolves') that perform complex tasks such as coordinated attacks, target recognition, and precise shooting. Their use in combat directly involves the AI systems in causing harm to human life and altering warfare dynamics, which constitutes injury or harm to persons (harm category a). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to potential or actual harm in a military context.

China has unveiled its newest military 'wolf' robots, which can attack in a 'pack'

2025-08-06
Digi24
Why's our monitor labelling this an incident or hazard?
The described military robots are AI systems with autonomous capabilities for lethal force application and battlefield coordination. Their development and potential deployment could plausibly lead to injury or harm to persons (harm category a) through autonomous attacks. The article does not report an actual incident of harm but highlights the capabilities and intended use, implying a credible risk of future harm. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated as it clearly involves AI systems with military applications and potential for harm.

The investment that could change how wars are fought: China will have an army equipped with armed robot 'wolves' that can replace human soldiers

2025-08-06
Stiri pe surse
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: robotic quadrupeds with enhanced recognition and combat capabilities, operating autonomously or in coordinated groups. Their use in armed conflict directly relates to harm (injury or death to persons) and disruption of military operations. The deployment of such AI-enabled weapons systems constitutes a direct AI Incident because the AI's use in combat has already materialized or is imminent, leading to potential or actual harm to human soldiers and adversaries. This surpasses a mere hazard or complementary information, as the article indicates active deployment and operational use, not just potential or future risk.

What China's new robots that can replace human soldiers look like. They sneak up on enemies unnoticed - Photos in the article

2025-08-06
DCnews
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-enabled military robots with autonomous capabilities for lethal use, which can directly lead to harm to persons and disruption of critical infrastructure (military operations). While no actual harm event is described, the deployment of such systems inherently carries a credible risk of causing harm in the future. Therefore, this qualifies as an AI Hazard under the framework, as the AI systems' development and use could plausibly lead to an AI Incident involving injury or death in combat. It is not Complementary Information because the article is not about responses or updates to past incidents, nor is it unrelated as it clearly involves AI systems with potential for harm.

China revolutionises warfare: an army equipped with robot 'wolves' that can replace human soldiers

2025-08-06
Stiri Bistrita - Ziare Bistrita - Gazeta de Bistrita -Stiri online
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-enabled autonomous military robots capable of lethal actions and coordinated combat operations, which clearly involve AI systems. While no actual harm or incident is reported, the deployment of such systems inherently carries a credible risk of causing injury, death, and broader harm in warfare. The development and public showcasing of these autonomous weapon systems represent a plausible future risk of AI-related harm, fitting the definition of an AI Hazard. There is no indication of a realized harm event yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the potential for harm from these AI systems.

VIDEO China presents its new military robot wolves: they are hard to detect, fire precisely at targets, and move across rough terrain

2025-08-06
TechRider.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-enabled military robots with autonomous or semi-autonomous capabilities, including target recognition, precise shooting, terrain navigation, and coordinated group behavior. These systems are designed for lethal military applications, which inherently carry a credible risk of causing injury or death (harm to persons) and disruption of critical infrastructure (military operations). Since the article does not report any actual incidents of harm caused by these robots but highlights their capabilities and potential battlefield use, the event fits the definition of an AI Hazard rather than an AI Incident. The development and deployment of such autonomous weapon systems are widely recognized as potential sources of significant future harm, justifying classification as an AI Hazard.

Dogs of war: China touts killer robot 'wolves'

2025-08-06
SpaceWar
Why's our monitor labelling this an incident or hazard?
The described robotic 'wolves' are AI systems with autonomous capabilities for navigation, reconnaissance, and attack, directly linked to military combat operations. Their deployment in battlefield conditions implies direct involvement in causing harm to persons (soldiers or enemies) and communities, meeting the definition of an AI Incident. The article indicates these systems are already in use or demonstrated in military drills, not merely potential future threats, thus qualifying as an incident rather than a hazard. The harm is direct and significant, involving lethal force and battlefield operations.

Dogs of war: China touts killer robot 'wolves' | WATCH

2025-08-06
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled military robots ('killer robot wolves') capable of autonomous navigation, reconnaissance, and precision firing. While no actual harm or incident is reported, the nature of these AI systems—armed autonomous robots designed for combat—poses a plausible risk of causing injury or death and other harms in future conflicts. The event is about the development and showcasing of these AI systems, which could lead to AI Incidents if used in warfare. Since no harm has yet occurred or been reported, this fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the article is not about responses or updates to prior incidents, nor is it unrelated as it clearly involves AI systems with potential for harm.

Dogs of war: China touts killer robot 'wolves'

2025-08-06
Arab News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous or semi-autonomous military robots with lethal capabilities. Their use in combat scenarios inherently carries a significant risk of injury or death to people, disruption of security, and possible violations of human rights. Since the article does not report any actual harm or incident but focuses on the development and demonstration of these AI-enabled weapons, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident. The article's emphasis on future battlefield automation and intimidation further supports the classification as a hazard rather than an incident or complementary information.

Dogs of war: China touts killer robot 'wolves'

2025-08-07
Forbes India
Why's our monitor labelling this an incident or hazard?
The described 'killer robot wolves' are AI systems as they perform autonomous or semi-autonomous navigation, target identification, and firing tasks in complex environments. Their use in military operations with lethal capabilities directly relates to potential harm to human life and raises significant risks of injury or death. Although no specific incident of harm is reported, the deployment and use of such AI-enabled armed robots plausibly could lead to AI incidents involving injury or death, making this an AI Hazard. The article focuses on showcasing the technology and its capabilities rather than reporting an actual harm event, so it does not qualify as an AI Incident. It is more than general AI news or product launch, given the military and lethal context, so it is not Unrelated or merely Complementary Information.

Dogs of war: China touts killer robot 'wolves'

2025-08-06
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-enabled military robots capable of autonomous or semi-autonomous lethal actions, including reconnaissance and precision shooting. While no actual harm or incident is reported, the development and potential deployment of such systems in combat plausibly could lead to injury or death and disruption of military operations. The AI system's role is pivotal in enabling autonomous lethal force and coordinated attacks. Since the article does not report realized harm but highlights credible future risks, the event is best classified as an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the new AI-enabled weapon system's capabilities and implications, not on responses or governance. It is not unrelated because the AI system's presence and potential for harm are central to the report.

China Adds 'Robot Wolves' to Military Exercises

2025-08-07
Breitbart
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-enabled autonomous or semi-autonomous military drones capable of coordinated swarm combat and precision strikes, which are AI systems by definition. While no actual harm or incident is reported, the deployment of armed autonomous drones in military exercises plausibly could lead to injury or harm in future combat scenarios, constituting a credible risk of AI-related harm. The event is not a realized incident but a clear potential hazard. It is not merely complementary information or unrelated news, as the AI system's development and use in a military context with lethal capabilities pose a plausible future risk of harm.

'Will alter warfare': China shows off its 'wolf robots' during military drill | Watch

2025-08-07
India TV News
Why's our monitor labelling this an incident or hazard?
The 'wolf robots' are AI systems with autonomous operational capabilities in military contexts, which inherently pose risks of harm. Since the article only reports their unveiling and demonstration without any actual harm occurring, this constitutes a plausible future risk rather than a realized incident. Therefore, this event qualifies as an AI Hazard because the development and deployment of these AI-enabled military robots could plausibly lead to AI Incidents involving harm in warfare scenarios.

China Shows Off Armed Attack Robots

2025-08-08
Futurism
Why's our monitor labelling this an incident or hazard?
The robot wolves are AI-enabled systems used in military combat, equipped with weapons and capable of navigating terrain and executing precision strikes. Their deployment and use in combat drills indicate active use of AI systems in potentially lethal applications. While the article does not report any actual harm occurring yet, the development and operationalization of armed AI robots with combat capabilities plausibly pose significant risks of injury, death, and disruption, qualifying this as an AI Hazard. The event does not describe a realized incident but a credible future risk from the use of AI in autonomous or semi-autonomous weapons systems.

"Steel Wolves Hunt With Soldiers": China's Military Unveils Armed Robot Pack for Precision Strikes, Recon, and High-Terrain Combat

2025-08-10
Visegrad Post
Why's our monitor labelling this an incident or hazard?
The robot wolves are AI systems designed for autonomous or semi-autonomous military tasks including armed assault and reconnaissance. Their deployment in military exercises alongside human soldiers indicates active use of AI systems in potentially lethal roles. This directly implicates the AI systems in possible harm to human life and military operations, fulfilling the criteria for an AI Incident. The article does not merely discuss potential future risks but describes actual deployment and operational use, which can lead to injury or harm. Hence, it is not merely a hazard or complementary information but an incident involving AI systems causing or enabling harm.

What are China's robot wolves? Can PLA deploy AI-enabled robot soldiers on LAC along Indian borders?

2025-08-07
Daily News and Analysis (DNA) India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as robot soldiers equipped with weapons and capable of autonomous or semi-autonomous military operations. The article does not report any actual deployment or harm caused by these AI-enabled robots but discusses their development, demonstration, and potential future deployment along the LAC. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (military conflict casualties, escalation, or disruption) if deployed. There is no indication of realized harm or incident yet, so it is not an AI Incident. The article is not merely complementary information since it focuses on the potential military use and risks of these AI systems rather than updates or responses to past events. Therefore, the classification is AI Hazard.

'Precise strikes at up to 100 m': what are these military wolf robots deployed by China?

2025-08-06
Le Parisien
Why's our monitor labelling this an incident or hazard?
The described 'wolf' robots are AI systems with autonomous capabilities for reconnaissance, coordination, and precise strikes. Their deployment in military operations directly involves the use of AI systems that can cause injury or harm to people and property. The article indicates these systems are operational and used in combat scenarios, which meets the criteria for an AI Incident due to direct involvement of AI in causing harm.

Military 'wolf' robots in China: what are they?

2025-08-06
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-enabled military robots capable of autonomous or coordinated lethal actions, which directly relate to AI systems' development and use. While no actual harm or incident is reported, the nature of these systems and their intended use in combat clearly pose a credible risk of causing injury or death. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to persons. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated, as the focus is on the AI system's capabilities and potential risks.

Armed 'wolf' robots to replace soldiers in combat

2025-08-06
L'essentiel
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (armed quadruped robots) designed for lethal military use, which inherently carry a high risk of causing injury or death. The article does not report a specific harm event but highlights the deployment and capabilities of these AI-enabled robots, which plausibly could lead to harm in combat. Therefore, this qualifies as an AI Hazard due to the credible potential for harm from the use of autonomous armed robots in warfare.

Video: the Chinese army unveils a commando unit of killer wolf robots

2025-08-08
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (robots with tactical intelligence and autonomous combat capabilities) being developed and deployed by the Chinese military. While no specific harm or incident is reported as having occurred yet, the use of armed AI robots in combat plausibly could lead to injury, death, or other harms. The event concerns the development and use of AI systems with lethal capabilities, which is a recognized AI Hazard under the framework. Since no actual harm has been reported yet, it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the deployment of AI-enabled lethal robots with clear potential for harm.

Robot wolves 'mowed down 12 targets' during a Chinese military exercise, shocking analysts with their precision and blistering speed

2025-08-10
Visegrad Post
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous armed robots used in military operations. While no direct harm is reported in the exercise, the deployment of such AI-enabled weapon systems plausibly leads to injury or death in future conflicts, fulfilling the criteria for an AI Hazard. The article highlights the robots' capabilities and potential battlefield impact, emphasizing the credible risk of harm and ethical issues. Since no actual harm has yet occurred, it is not an AI Incident. The focus is on the potential consequences of these AI systems in warfare, fitting the definition of an AI Hazard.

Robot wolves debut at the military parade, opening a new chapter in unmanned land-combat coordination

2025-09-04
中关村在线
Why's our monitor labelling this an incident or hazard?
The "machine wolf" is an AI system involving autonomous and remote-controlled robotic units equipped with weapons and reconnaissance tools, used in military operations. Although the article does not report any actual harm or incidents, the development and deployment of such armed autonomous systems inherently carry significant risks of injury, death, and violations of human rights. The article highlights their potential to replace human soldiers and perform combat tasks, which plausibly could lead to AI incidents involving harm. Hence, this event is best classified as an AI Hazard, reflecting the credible potential for future harm stemming from the AI system's use in armed conflict.

Robot wolves appear at the September 3 parade as human-machine collaborative combat enters a new stage

2025-09-05
中关村在线
Why's our monitor labelling this an incident or hazard?
The "machine wolf" is an AI system as it is a quadruped robot equipped with weapons and reconnaissance capabilities, operating in a military context with human-machine collaboration. The article does not report any actual harm or incident caused by the system but emphasizes its operational capabilities and deployment. Given the nature of armed autonomous or semi-autonomous robots, their use could plausibly lead to injury, violation of human rights, or other significant harms. Hence, this event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Robot wolves appear at the September 3 parade, demonstrating integrated reconnaissance-and-strike capability

2025-09-05
中关村在线
Why's our monitor labelling this an incident or hazard?
The "Machine Wolf" is an AI-enabled unmanned ground combat system equipped with weapons and reconnaissance capabilities, representing an AI system used in military operations. While the article does not report any actual harm or incidents caused by the system, its deployment as an armed autonomous or semi-autonomous system inherently carries plausible risks of harm, including injury or death, disruption, or violations of rights if misused or malfunctioning. Therefore, this event constitutes an AI Hazard due to the credible potential for future harm stemming from the use of AI in armed unmanned systems.

Robot dogs and robot wolves: new forces in unmanned combat

2025-09-04
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The "machine wolf" is an AI system capable of autonomous action and remote control in military operations, directly involved in combat tasks such as reconnaissance and firepower. Its use in actual military exercises and potential combat situations implies direct involvement of AI systems in activities that could cause injury or harm to persons (soldiers or enemies). Therefore, this constitutes an AI Incident as the AI system's use has directly led to harm or the potential for harm in a military context.

Robot wolves appear at the September 3 parade: payloads up to 20 kg and 30-second battery swaps

2025-09-03
驱动之家
Why's our monitor labelling this an incident or hazard?
The 'Machine Wolf' is an AI system due to its autonomous capabilities and complex operational functions. However, the article only presents its debut and capabilities without any indication of harm or misuse. Since no direct or indirect harm has occurred, but the system's military application implies a credible risk of future harm, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves an AI system with potential for harm.

Science fiction! How do the parade's 'robot wolves' differ from robot dogs? Officials give the answer, with a demonstration of dynamic combat effects

2025-09-05
驱动之家
Why's our monitor labelling this an incident or hazard?
The 'machine wolf' is an AI-enabled autonomous or semi-autonomous robotic system equipped with weapons and reconnaissance devices, used in military operations. Its deployment in coordinated combat scenarios with soldiers indicates active use of AI systems in potentially harmful contexts (military combat). While the article does not report a specific incident of harm, the use of AI-enabled armed robots in warfare inherently carries a plausible risk of harm to persons and communities. However, since no actual harm or incident is described, but the article highlights the operational use and capabilities of these AI systems, this constitutes an AI Hazard due to the plausible future harm from their deployment in combat.

'Wolf' or 'dog'? Netizens raise questions after its appearance at the September 3 parade

2025-09-04
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the "machine wolf" quadruped robot with autonomous capabilities) actively used in military operations, including reconnaissance and armed engagement. The article describes its deployment and operational use, which directly relates to potential harm to persons and communities in conflict (harm to health and communities). The AI system's use in combat scenarios is a direct factor in these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or future concerns but reports on an AI system already in use with direct implications for harm.

Impressive! 'Robot wolves' appear at the September 3 parade; here is their background

2025-09-04
南方网
Why's our monitor labelling this an incident or hazard?
The "Machine Wolf" is an AI-enabled autonomous combat robot system used in military operations, capable of independent action and coordination with humans. Its deployment in a military parade and described combat roles imply the use of AI systems in potentially lethal applications. While the article does not report any actual harm or incidents caused by the system, the development and use of autonomous weapon systems with AI capabilities pose plausible risks of harm, including injury or death, violations of human rights, and disruption of security. Therefore, this event represents an AI Hazard due to the credible potential for future harm stemming from the use of such AI-enabled autonomous weapons.

Once again, they are not 'robot dogs'! The combat power of the 'robot wolves' has surged

2025-09-05
中华网军事频道
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous armed quadruped robots) used in military contexts, which inherently carry risks of causing injury or harm. Since no actual harm or incident is reported, but the deployment of such AI-enabled weaponized systems plausibly could lead to harm, this constitutes an AI Hazard rather than an AI Incident. The article highlights the potential for these systems to impact combat operations and thus the potential for harm, fitting the definition of an AI Hazard.

How do the parade's 'robot wolves' differ from robot dogs? A new breakthrough in unmanned combat

2025-09-04
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The "machine wolf" is an AI-enabled autonomous or semi-autonomous unmanned ground combat system equipped with weapons and reconnaissance capabilities. Its deployment in coordinated combat operations and autonomous actions directly involve AI systems. Given that these systems are used in military operations with lethal capabilities, their use inherently involves risks of injury or harm to persons (harm category a). The article describes actual use and operational deployment, not just potential or future risks, thus constituting an AI Incident due to the direct involvement of AI systems in armed conflict scenarios with real harm potential.

Once again, not 'robot dogs': the highlights of the September 3 parade revealed

2025-09-05
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The 'machine wolf' is an AI system as it involves autonomous and remote-controlled robotic combat vehicles with advanced reconnaissance and strike capabilities. The article highlights their use in military operations, implying potential for harm through their combat functions. However, the article does not report any actual harm or incidents caused by these AI systems; it focuses on their capabilities and deployment in a military parade. Therefore, this event represents a plausible future risk of harm due to the use of AI in autonomous weapons, qualifying it as an AI Hazard rather than an AI Incident or Complementary Information.

The 'robot wolf pack' makes its first appearance at the September 3 parade. In a word: impressive!

2025-09-04
广西新闻网
Why's our monitor labelling this an incident or hazard?
The "machine wolf" is an AI system as it involves autonomous and remote-controlled unmanned vehicles with capabilities such as reconnaissance, combat, and tactical coordination with humans. The event describes their deployment and operational use, but there is no indication of any harm or malfunction resulting from their use. The article focuses on showcasing the technology and its capabilities, without reporting any injury, rights violations, or other harms. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on AI-enabled military technology developments.

The 'robot wolf pack' makes its first appearance at the September 3 parade. In a word: impressive!

2025-09-04
香港文匯網
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous and remotely controlled unmanned combat vehicles equipped with weapons and reconnaissance equipment. Although the article does not report any actual harm or malfunction, the nature of armed autonomous systems inherently carries plausible risks of injury, violation of human rights, or harm to communities in conflict scenarios. Therefore, this event qualifies as an AI Hazard due to the credible potential for future harm stemming from the deployment and use of these AI-enabled military robots.

Quadruped robot dogs become 'robot wolves' at the September 3 parade, decoding a new way of fighting

2025-09-03
Baidu.com
Why's our monitor labelling this an incident or hazard?
The quadruped robot dog ('machine wolf') is an AI system designed for autonomous or semi-autonomous combat tasks, including reconnaissance and attack with mounted rifles. Its use in coordinated military operations with soldiers indicates active deployment and use of AI in a context that can cause injury or harm to persons and communities. The article describes actual operational use and demonstration, not just development or potential use, so the harm is direct or imminent. Given the AI system's role in enabling lethal force and combat operations, this event meets the criteria for an AI Incident under harm to persons and communities.

Once again, they are not 'robot dogs'!

2025-09-05
环球网
Why's our monitor labelling this an incident or hazard?
The 'machine wolf' is an AI system as it involves autonomous operation, reconnaissance, and combat capabilities enhanced by AI. The article discusses its use in military exercises and its potential to change warfare tactics. While no specific harm or incident is reported, the deployment of armed autonomous robots with AI capabilities in military contexts poses plausible risks of harm, including injury or violation of human rights, if used in conflict. Therefore, this event represents an AI Hazard due to the credible potential for harm from the use of AI-enabled autonomous weapons systems.

Robot wolves appear at the September 3 parade: payloads up to 20 kg and 30-second battery swaps

2025-09-03
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The machine wolf is an AI-enabled robotic system with autonomous capabilities and remote control, designed for military applications including potentially dangerous tasks like mine clearance and assault. While no harm has been reported yet, the deployment of such autonomous military robots plausibly could lead to harm including injury or death, disruption, or violations of rights. Therefore, this event represents an AI Hazard due to the credible risk of future harm from its use in military operations.

The 'robot wolves' shine! Based on the quadruped robot dog, they appeared in the land unmanned-combat formation, with a clear division of labour within the combat group

2025-09-04
m.163.com
Why's our monitor labelling this an incident or hazard?
The "Machine Wolf" is an AI-enabled autonomous robotic system used in military operations, involving AI in its development and use. While the article does not report any actual harm or incidents caused by the system, it clearly indicates the system's autonomous capabilities and potential for combat use. Given the military application and the autonomous nature of the system, there is a plausible risk that its deployment or malfunction could lead to harm, including injury or death, disruption, or other significant harms. Therefore, this event qualifies as an AI Hazard due to the credible potential for future harm stemming from the AI system's use in armed conflict.

Atop Mount Filar: Drones and Robot Dogs Team Up to Thrash Russian Wolves; a Satisfying Fight | Story

2025-09-04
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of drones and climbing robotic dogs, both AI systems, deployed to counter a harmful event (wolf attacks on livestock). The AI systems are used to protect property and communities from harm, which constitutes involvement of AI systems through their use. However, the AI systems themselves do not cause harm; they prevent or mitigate it. There is no indication that the systems malfunctioned or caused any injury or rights violation. The event describes a positive application of AI technology in a real-world context and offers useful context on AI deployment and its societal impact. It therefore does not meet the criteria for an AI Incident or AI Hazard and fits the definition of Complementary Information.

New-Domain, New-Quality Combat Forces Were a Highlight of This Parade: Unmanned Combat Vehicles for Reconnaissance-Strike Assault, Mine Clearance and Explosive Ordnance Disposal, and Squad Support Can Be Remotely Operated, Act Autonomously, and Form Flexible Groups, Achieving a New Breakthrough in Manned-Unmanned Coordinated Land Combat

2025-09-05
opinion.haiwainet.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous, remotely operated unmanned ground vehicles used in military contexts. Their deployment and tactical integration could plausibly lead to harms such as injury or death in combat, disruption, or escalation of conflict, which fits the definition of an AI Hazard. Since no specific harm has occurred or is reported, and the article focuses on capabilities and strategic significance rather than any realized harm, it does not qualify as an AI Incident. Because it discusses the operational use and potential battlefield impact of AI systems, it is more than general AI news and is neither unrelated nor merely complementary information. Therefore, the classification is AI Hazard.

Not a "Robot Dog," Not a "Washing Machine": What Is the New Weapon at the Parade?

2025-09-05
华商网
Why's our monitor labelling this an incident or hazard?
The 'machine wolf' is explicitly described as an autonomous or remotely controlled robotic combat system with enhanced reconnaissance and strike capabilities, clearly involving AI systems. Its deployment in coordinated combat operations with soldiers indicates active use of AI systems in a military context, which can directly lead to harm in warfare scenarios (harm to persons and communities). The LY-1 laser weapon, while not explicitly stated as AI-driven, is advanced military technology likely integrated with AI for targeting and control, contributing to the overall AI-enabled military capability. Given the direct use of AI-enabled autonomous weapons systems in combat, this event qualifies as an AI Incident due to the direct involvement of AI systems in activities that inherently cause harm in warfare.

What's the Difference Between the "Machine Wolf" on the Parade Ground and a Robot Dog? Officials Release a Demonstration of Its Dynamic Combat Effects - cnBeta.COM

2025-09-05
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-enabled autonomous or semi-autonomous robotic systems ('machine wolves') equipped with weapons and reconnaissance capabilities used in military operations alongside human soldiers. The involvement of AI in their operation is reasonably inferred from their described autonomous functions and combat roles. The deployment of such systems in active combat scenarios directly leads to potential or actual harm to persons and military conflict dynamics, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or future hazards but shows current use in operations, indicating realized harm or risk thereof.

Once Again, It's Not a "Robot Dog"! Did You Notice the "Little Cutie" on the Parade Ground?

2025-09-05
金羊网
Why's our monitor labelling this an incident or hazard?
The "machine wolf" is an AI-enabled unmanned combat vehicle with autonomous and remote control capabilities, used in military operations including reconnaissance and attack. Its deployment and use in coordinated combat scenarios imply the use of AI systems for autonomous decision-making and action. While the article does not report any harm or incidents caused by these systems, the development and deployment of armed autonomous robots with AI capabilities pose plausible risks of harm, including injury or death, disruption, or violations of rights if misused or malfunctioning. Therefore, this event qualifies as an AI Hazard due to the credible potential for harm inherent in the use of AI-enabled autonomous weapons systems.

信达 Military Industry E-Weekly, Issue 195: Heavy Weapons in Formation Display National Might; Science and Technology Strengthen the Military, Forging...

2025-09-07
东方财富网
Why's our monitor labelling this an incident or hazard?
The article primarily provides a descriptive overview of military technological advancements involving AI-driven unmanned systems and nuclear fusion progress, along with market and investment analysis. There is no mention of any direct or indirect harm caused by AI systems, nor any plausible imminent risk of harm. The content focuses on reporting developments and economic implications rather than incidents or hazards. Therefore, it fits best as Complementary Information, offering context and updates on AI-related military technology and industry trends without reporting an AI Incident or AI Hazard.

"Machine Wolves" Debut on the Parade Ground, but "Unmanned Warfare" Goes Far Beyond This: Experts Explain

2025-09-07
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of autonomous and intelligent unmanned military equipment. However, there is no indication that these systems have caused any injury, disruption, rights violations, or other harms. The article focuses on their debut and capabilities rather than any realized or imminent harm. Therefore, the event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and expert insights about AI-enabled military technologies and their potential future impact without describing any specific harm or risk event.

China Also Showed Off "Robotic Wolves" at Its Military Parade. What Are They?

2025-09-03
IndexHR
Why's our monitor labelling this an incident or hazard?
The described "robotic wolves" are AI-enabled military systems capable of autonomous reconnaissance and attack functions, which directly relate to potential harm through their use in military operations. Their deployment and development represent a credible risk of harm to people and communities due to their offensive capabilities. Although no specific incident of harm is reported, the nature of these AI systems and their intended use plausibly lead to significant harm, qualifying this event as an AI Hazard rather than an Incident since no actual harm has been reported yet.

VIDEO: "Robotic Wolves" at the Chinese Parade: What Are They, Exactly, and What Can They Do?

2025-09-03
Vecernji.hr
Why's our monitor labelling this an incident or hazard?
The quadrupedal robots described are AI systems used in military contexts for reconnaissance and precision strikes, which inherently carry risks of causing injury, death, or destruction. The article indicates these systems are operational and part of China's military arsenal, implying their use or potential use in conflict scenarios. This meets the criteria for an AI Incident because the AI system's use in military operations directly relates to potential or actual harm to persons and infrastructure. The missile system mentioned, while significant, is not explicitly described as AI-enabled, so the focus remains on the robotic systems. Hence, the event is classified as an AI Incident due to the realized or imminent harm associated with the AI-enabled military robots.

China Presented the Army of the Future at Its Parade. "Robotic Wolves" Could Change Modern Warfare More Than Drones

2025-09-04
Slobodna Dalmacija
Why's our monitor labelling this an incident or hazard?
The described quadruped robots are AI systems due to their autonomous capabilities in reconnaissance and precision attacks. Their use in military operations directly relates to potential harm such as injury or death, disruption of critical infrastructure, and violations of human rights. Since the article reports their unveiling and capabilities but does not mention any actual harm occurring yet, this event represents a plausible future risk of harm from AI systems in warfare, qualifying it as an AI Hazard rather than an AI Incident.

"Robotic Wolves" Marched in the Chinese Parade; Here Is Everything They Can Do

2025-09-03
tportal.hr
Why's our monitor labelling this an incident or hazard?
The robotic wolves are AI systems used in military applications with autonomous targeting capabilities. Their use in military parades and exercises indicates active deployment. While the article does not report any actual harm occurring, the nature of these AI-enabled weapons systems and their potential use in conflict could plausibly lead to injury, death, or disruption of critical infrastructure. Therefore, this event qualifies as an AI Hazard due to the credible risk of future harm from these AI systems.

China Presented a New Weapon at Its Military Parade: What Are "Robot Wolves"?

2025-09-03
Telegraf.rs
Why's our monitor labelling this an incident or hazard?
The 'robot wolves' are AI systems used in military contexts capable of autonomous reconnaissance and precision targeting, which directly relates to potential harm through their use in armed conflict. The presentation of such AI-enabled weapons systems constitutes a plausible risk of harm to people and communities due to their lethal capabilities. Although no specific harm is reported as having occurred during the parade, the development and deployment of these AI-powered military robots inherently pose a credible risk of causing injury or harm in future use. Therefore, this event qualifies as an AI Hazard under the framework, as it plausibly could lead to AI Incidents involving injury or harm.