Humanoid Robot Injures Child During Dance Performance in Shaanxi

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A humanoid robot performing a dance in a Shaanxi shopping mall struck a child in the face with its mechanical arm, causing injury. The robot failed to detect the child and continued its routine, highlighting inadequate safety measures and AI malfunction. Experts urge mandatory collision avoidance systems for public robots.[AI generated]

Why's our monitor labelling this an incident or hazard?

The humanoid robot is an AI system capable of autonomous or semi-autonomous movement and interaction. The robot's unexpected slap caused direct physical harm to a child, which is injury to a person. The event is a clear example of an AI system's malfunction or failure in a public setting leading to harm. Therefore, it qualifies as an AI Incident under the definition of injury or harm to a person caused directly or indirectly by the AI system's use or malfunction.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Robots, sensors, and IT hardware
Consumer services

Affected stakeholders
Children

Harm types
Physical (injury)

Severity
AI incident

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard

Mainland Chinese Robot Slaps Onlooking Boy During Performance, Goes Viral | Humanoid Robot | Dancing | The Epoch Times

2026-03-23
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The humanoid robot is an AI system capable of autonomous or semi-autonomous movement and interaction. The robot's unexpected slap caused direct physical harm to a child, which is injury to a person. The event is a clear example of an AI system's malfunction or failure in a public setting leading to harm. Therefore, it qualifies as an AI Incident under the definition of injury or harm to a person caused directly or indirectly by the AI system's use or malfunction.
Mainland Chinese Robot Slaps Onlooking Boy During Performance, Goes Viral | Humanoid Robot | Dancing | The Epoch Times (traditional Chinese edition)

2026-03-23
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves a humanoid robot (an AI system) performing in a public setting. The robot's sudden arm movement caused direct physical harm to a child, which is a clear injury to a person. The incident is a direct consequence of the robot's use and malfunction or unexpected behavior during the performance. Therefore, it meets the definition of an AI Incident as the AI system's use directly led to harm to a person.
Chinese Robot Malfunctions Again, Swinging Its Arm and Slapping a Young Boy Mid-Performance (Photo) - News, Shaanxi - 看中國新聞網

2026-03-24
看中国
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—a humanoid robot with autonomous or semi-autonomous capabilities performing complex movements. The robot's sudden striking of a child caused direct physical harm, which is a clear injury to a person. The incident stems from the robot's use and possible malfunction or insufficient safety design, directly leading to harm. Therefore, it meets the definition of an AI Incident. The article also discusses systemic issues with safety measures and other similar incidents, but the primary classification remains an AI Incident due to realized harm.
Mainland Chinese Child Watching Robot Dance Up Close "Slapped in the Face by Its Arm" and Left in Tears; Experts Urge Mandatory Collision Protection | TVBS News

2026-03-23
TVBS
Why's our monitor labelling this an incident or hazard?
The event involves a humanoid robot performing autonomous or semi-autonomous actions (dancing with mechanical arms) in a public space, which implies the presence of an AI system controlling or coordinating its movements. The robot's failure to detect and avoid the child, resulting in a physical strike causing injury, is a direct harm to a person. The lack of safety controls and the robot continuing its motions after the incident further indicate malfunction or inadequate safety design. Therefore, this qualifies as an AI Incident under the definition of injury or harm to a person caused directly or indirectly by the use or malfunction of an AI system.
Clip: Robot Loses Control During Performance, Swings Arm and Slaps Boy in the Face

2026-03-27
Đời sống pháp luật
Why's our monitor labelling this an incident or hazard?
The robot is described as having advanced sensing systems and performing autonomous movements, indicating the presence of an AI system. The incident involved the robot's arm hitting a child, causing physical harm, which is a direct injury to a person. This harm resulted from the robot's malfunction or failure to safely operate, meeting the definition of an AI Incident. Prior similar incidents with the same robot model reinforce the assessment of risk and harm caused by the AI system's use.
Chinese Humanoid Robots Surge Ahead: The AI Race Touches a 'Dangerous Boundary'

2026-03-27
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
The article primarily addresses the potential risks and governance challenges associated with the development and deployment of humanoid AI robots in China. It highlights concerns about data usage, legal responsibility, and safety but does not describe any realized harm or incidents resulting from AI system malfunction or misuse. The discussion of standards and regulatory frameworks indicates proactive measures to manage plausible future risks. Therefore, the event fits the definition of an AI Hazard, as it plausibly could lead to harm if not properly managed, but no actual AI Incident is reported.
US Officials Propose Banning Chinese Robots from Public Spaces over Security Concerns

2026-03-26
VietnamPlus
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses robots produced by Chinese companies, which are humanoid and likely AI-enabled, given their described capabilities and applications. The legislative proposal aims to prevent potential harms related to data security and espionage, which are plausible harms linked to the use of these AI systems. Since the harm is not yet realized but is a credible risk prompting preventive legislation, the event fits the definition of an AI Hazard. There is no indication of an actual incident or realized harm, nor is the article primarily about responses to past incidents or general AI news, so it is not an AI Incident or Complementary Information. It is not unrelated because the focus is on AI-enabled robots and their security implications.
Armed Humanoid Robots Appear on the Russia-Ukraine Battlefield

2026-03-27
VnReview
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Phantom MK-1 humanoid robot) equipped with AI for autonomous decision-making and armed with lethal weapons, deployed in an active war zone. While no direct harm or incident is reported, the nature of the system and its deployment in conflict plausibly pose significant risks of injury, violation of rights, and escalation of violence. The ethical concerns and warnings cited reinforce the credible potential for harm. Since harm has not yet materialized or been documented, this is best classified as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks and deployment of the AI system, not on responses or updates to past incidents. It is not unrelated because the event clearly involves AI systems with military applications and associated risks.
China Unveils "Robot Wolf Pack" Capable of Joining Combat

2026-03-27
VnReview
Why's our monitor labelling this an incident or hazard?
The event involves the development and unveiling of AI-enabled autonomous military robots equipped with weapons, which clearly constitute AI systems. While no actual harm or incident is reported, the intended use of these systems in warfare and their advanced autonomous capabilities create a credible risk of causing injury, death, or other harms in the future. The article emphasizes the competitive development of such systems globally, underscoring the plausible risk of their deployment leading to AI Incidents. Since harm is not yet realized but is plausible, the classification as an AI Hazard is appropriate.
Robot Loses Control During Performance, Slaps Young Boy

2026-03-26
Ngoisao
Why's our monitor labelling this an incident or hazard?
The robot involved is an AI-enabled system with sensors and autonomous movement capabilities. The incident resulted in direct physical harm to a child due to the robot's uncontrolled movement during its performance. This fits the definition of an AI Incident because the AI system's malfunction or failure to prevent harm led directly to injury, fulfilling criterion (a), harm to a person.
US First Lady Sparks Controversy over the Future of Humanoid Robots...

2026-03-26
VnReview
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (a humanoid robot with advanced AI features) and discusses its use and potential future applications in education. There is no indication that the AI system has caused any injury, rights violations, disruption, or other harms at this time. The concerns and debates are about possible future impacts, making this a plausible future risk rather than an incident. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to AI-related harms in the future, but no harm has yet materialized.
Delivery Robot Rams and Shatters Glass at Bus Stop

2026-03-25
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The delivery robots are AI systems performing autonomous navigation and delivery. Their collisions caused direct harm to property (broken glass at bus stops). Although no injuries occurred, the property damage and potential safety risks constitute harm under the AI Incident definition. The companies' acknowledgment and investigation confirm the AI systems' involvement in causing harm. Therefore, this event is classified as an AI Incident.
A Real-Life "Terminator" Has Appeared! Robot...

2026-03-28
VnReview
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered humanoid robots and unmanned vehicles used in active combat roles causing casualties and psychological harm, fulfilling the criteria for AI systems directly or indirectly leading to injury or harm to people (harm category a). The deployment of these AI systems in warfare and their autonomous lethal capabilities represent realized harm, not just potential risk. The discussion of future large-scale production and autonomous decision-making further supports the severity and scale of the incident. Hence, this is an AI Incident rather than a hazard or complementary information, as harm is already occurring and AI systems are central to the event.
Robot "Slaps" Bystander While Dancing Wildly

2026-03-27
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
The robot uses AI systems for environment recognition and complex motion control, which led to an unintended physical contact with a child during a public performance. The harm is realized (the child was hit), and the AI system's malfunction or failure to avoid the collision is a direct cause. Therefore, this event qualifies as an AI Incident under the framework, as it involves injury or harm to a person caused by the use of an AI system.
Uber Bets on Delivery Robots and Drones

2026-03-29
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of autonomous delivery robots and drones. However, it does not describe any actual harm or incidents resulting from their use. The challenges mentioned are about user acceptance and regulatory barriers, which are typical developmental hurdles rather than incidents or hazards. Since no harm has occurred and the article mainly reports on ongoing trials and strategic plans, it fits best as Complementary Information, providing context and updates on AI system deployment and ecosystem evolution without describing an AI Incident or AI Hazard.
Watch: This Chinese robot can cook, clean and even makes your bed

2026-03-27
Firstpost
Why's our monitor labelling this an incident or hazard?
The robot clearly involves AI systems as it autonomously performs complex household tasks. While the article highlights the potential for automation to impact jobs and society, it does not describe any realized harm or incidents caused by the robot. The concerns raised are about plausible future harms related to automation and job replacement, which fits the definition of an AI Hazard. There is no indication of an AI Incident or Complementary Information, and the event is more than general AI news, as it focuses on a specific AI-enabled system with potential societal impact.
Inside rogue robot crisis - from battering kids to restaurant food fights

2026-03-27
Daily Star
Why's our monitor labelling this an incident or hazard?
The events described involve AI systems in humanoid robots whose malfunction or uncontrolled behavior directly caused physical injury (e.g., the engineer's open wound, the child being slapped) and psychological harm (e.g., the elderly woman frightened and hospitalized). These constitute realized harms to persons, fitting the definition of AI Incidents. The article also references expert opinions on the risks and calls for regulatory and safety improvements, but the primary focus is on actual incidents of harm caused by AI systems in robots, not just potential future risks or general commentary. Therefore, the classification is AI Incident.
Chinese household robot video goes viral, drawing mixed reactions

2026-03-27
The Online Citizen
Why's our monitor labelling this an incident or hazard?
The robot is an AI system given its autonomous operation and AI perception capabilities. The article mentions concerns about privacy, data security, and hacking risks, which are credible potential harms that could arise from the robot's use. However, no actual harm or incident is reported. The event thus fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet occurred. It is not Complementary Information because it does not update or respond to a prior incident, nor is it unrelated as it clearly involves an AI system and potential risks.
Chinese Humanoid Robot Slaps Child in Viral Demo Mishap, Sparking Safety Concerns

2026-03-30
eWEEK
Why's our monitor labelling this an incident or hazard?
The robot is an AI system due to its autonomous, complex physical movements controlled by AI. The incident directly caused harm to a person (the child struck by the robot), fulfilling the criteria for an AI Incident. The robot's failure to stop its routine when handlers intervened indicates a malfunction or inadequate safety design. The harm is realized (the child was struck), not just potential. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.
Dancing Robot Slaps Child In Face At Public Show In China, Video Goes Viral

2026-03-28
NDTV
Why's our monitor labelling this an incident or hazard?
The robot is an AI system performing autonomous or semi-autonomous movements. The incident directly caused physical harm to a child due to the robot's actions during its programmed routine. The event involves the use and malfunction of the AI system leading to injury, which fits the definition of an AI Incident under harm to a person. The article also references prior safety incidents with the same firm's robots, reinforcing the pattern of harm linked to AI system use.
'Excited for the Future or Lowkey Terrified': Humanoid Robot Chases Children at Brooklyn Bridge Park in New York

2026-03-28
The Nerd Stash
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system (Unitree G1 humanoid robot) and its use in a public park. However, no harm or violation is reported. The robot's behavior is described as controlled or possibly remote-controlled, with no malfunction or misuse leading to injury or rights violations. The public reaction is mixed but does not indicate any incident or hazard. The event is a descriptive account of AI presence and societal response, fitting the definition of Complementary Information rather than Incident or Hazard.
SHOCKING! Dancing Robot Hits Child During Public Show in China, Video Goes Viral (WATCH)

2026-03-28
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—a humanoid robot with multiple degrees of freedom performing autonomous or semi-autonomous movements. The robot struck a child, causing physical harm or risk of harm, which fits the definition of injury or harm to a person. The incident is a direct consequence of the robot's operation in a public space without adequate safeguards, fulfilling the criteria for an AI Incident. The presence of previous similar incidents further supports the classification as an incident rather than a hazard or complementary information.