Unitree G1 Humanoid Robot Accidentally Kicks Engineer During AI Motion Test


The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

During a motion-synchronization test in China, Unitree's G1 humanoid robot, controlled by AI to mimic human actions, accidentally kicked an engineer, leaving him in pain. The incident, widely shared online and commented on by Elon Musk, highlights the risk of AI-driven robots malfunctioning and causing unintended harm to humans.[AI generated]

Why's our monitor labelling this an incident or hazard?

The humanoid robot is an AI system, as it uses algorithms to synchronize with and mimic human motions in real time. The accidental kick that caused the engineer pain is direct physical harm resulting from the AI system's malfunction or error in motion control. Therefore, this qualifies as an AI Incident: the AI system's malfunction directly led to injury to a person.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability

Industries
Robots, sensors, and IT hardware

Affected stakeholders
Workers

Harm types
Physical (injury)

Severity
AI incident

Business function
Research and development

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


Robot mistakenly kicks engineer in the groin... humanoid's instant synchronization leaves him crouching in pain; Musk "laughs to tears" | 聯合新聞網 (UDN)

2025-12-28
UDN
Why's our monitor labelling this an incident or hazard?
The humanoid robot is an AI system, as it uses algorithms to synchronize with and mimic human motions in real time. The accidental kick that caused the engineer pain is direct physical harm resulting from the AI system's malfunction or error in motion control. Therefore, this qualifies as an AI Incident: the AI system's malfunction directly led to injury to a person.

Unitree G1 robot suddenly kicks an engineer during testing; Musk comments with a "laughing-crying" emoji

2025-12-28
驱动之家
Why's our monitor labelling this an incident or hazard?
The humanoid robot is controlled by AI systems capable of synchronizing and learning human motions. The robot's unexpected kick caused physical harm to the engineer, which is a direct injury resulting from the AI system's malfunction during use. The event clearly involves an AI system, and the harm is realized, not just potential. Therefore, it qualifies as an AI Incident under the framework.

Unitree G1 robot suddenly kicks an engineer during testing; Musk comments

2025-12-28
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The robot is an AI system capable of real-time learning and autonomous movement. The unexpected kick during testing is a malfunction or unintended behavior. Although no injury or harm is reported, the event shows a plausible risk of harm from the AI system's malfunction. Hence, it qualifies as an AI Hazard rather than an AI Incident. The humorous comment by Elon Musk does not affect the classification.

Musk comments on Unitree robot kicking an engineer; synchronization technology sparks heated discussion

2025-12-30
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The Unitree G1 robot uses AI-based full-body teleoperation technology to mimic human movements, which is explicitly described. The incident in which the robot kicked an engineer, causing pain, is direct harm to a person resulting from the AI system's use. Although the robot also shows synchronized emotional expression, the key point is the physical injury caused by the AI-controlled robot. This meets the criteria for an AI Incident, as the AI system's use directly led to injury to a person.

Unitree G1 robot suddenly kicks an engineer during testing; Musk comments with a "laughing-crying" emoji - cnBeta.COM mobile edition

2025-12-28
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The G1 robot is an AI system performing autonomous or semi-autonomous actions. It unexpectedly kicked the engineer during testing, a direct physical impact on a person. This fits the definition of an AI Incident: the AI system's use directly led to harm, or at least a physical impact, to a person. Although the harm may be minor and unintended, the direct physical interaction qualifies the event as an AI Incident.

Accident during China's Unitree robot test: engineer mistakenly kicked; Musk comments with a laughing-crying emoji | 手机网易网 (NetEase mobile)

2025-12-28
m.163.com
Why's our monitor labelling this an incident or hazard?
The Unitree G1 robot is an AI system capable of real-time motion imitation and learning from video. The robot's errant kick caused direct physical harm to the engineer, fulfilling the criterion of injury or harm to a person. The event involves the use and malfunction of an AI system leading to realized harm. Elon Musk's comments and the social media reactions are complementary context and do not change the classification. Therefore, this event qualifies as an AI Incident.

Hilarious! Unitree robot kicks an engineer during testing; Musk's laughing-crying comment sparks heated discussion | 手机网易网 (NetEase mobile)

2025-12-29
m.163.com
Why's our monitor labelling this an incident or hazard?
The humanoid robot is controlled by AI to perform synchronized combat actions, indicating AI system involvement. The robot unexpectedly kicked the engineer, causing physical contact that can be reasonably inferred as harm or injury. This is a direct consequence of the AI system's use during testing. The event is not merely a product announcement or general news but describes a specific incident where the AI system's behavior led to harm. Hence, it meets the criteria for an AI Incident.

Mainland Chinese robot kicks engineer in a "vital spot"! Engineer in so much pain he falls to his knees

2025-12-28
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The robot's behavior is controlled by AI algorithms that imitate human movements, and the incident in which it kicked the engineer, causing pain, is direct harm resulting from the AI system's malfunction or misoperation. This fits the definition of an AI Incident, as it involves injury to a person caused by the use of an AI system. The article also discusses a different robot's AI capabilities and achievements (a badminton robot), but no harm is described there; the main incident remains the injury caused by the boxing-training robot.

"They still can't even fold laundry!" Humanoid robot summit pours cold water on the hype; several CEOs admit the machines are still "expensive toys"

2025-12-29
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in humanoid robots and discusses their development and use challenges. However, it does not report any realized harm or incident caused by these AI systems, nor does it describe a specific event where harm was narrowly avoided or a credible future harm risk is detailed. Instead, it provides expert opinions and industry insights that temper expectations and highlight current technological and economic barriers. This aligns with the definition of Complementary Information, as it enhances understanding of the AI ecosystem and ongoing assessments without reporting a new incident or hazard.

Chinese AI robot "rebels"! Kicks engineer squarely in the groin

2025-12-30
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The robot is explicitly described as AI-powered, using advanced learning algorithms to perform tasks. The incident involves the robot unexpectedly kicking an engineer, causing physical injury, which is a direct harm to a person. This harm resulted from the AI system's behavior during its use or testing, thus meeting the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm caused by the AI system's malfunction or unexpected behavior.

Video | Musk "laughs to tears" at mainland Chinese robot! Clip of it accidentally striking its handler's groin goes viral

2025-12-29
中時新聞網
Why's our monitor labelling this an incident or hazard?
The robot's AI system malfunctioned in distance measurement, causing it to accidentally kick a person, resulting in immediate physical pain (harm to a person). Although the harm was minor and no serious injury occurred, the AI system's malfunction directly led to this harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction during its use.

Humanoid robots under stress testing as China deploys them in force in border cities

2025-12-29
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
While the humanoid robots are AI systems deployed in sensitive border and industrial environments, the article does not report any realized harm or incident resulting from their use. The discussion of safety concerns and technical challenges points to potential risks but describes no event in which harm occurred or was narrowly avoided. Since the article mainly reports on deployment and strategic development without identifying credible, imminent risks, it is best classified as Complementary Information providing context on AI system deployment and its challenges rather than as an AI Hazard or Incident.

Chinese engineer kicked in the groin by a robot; video goes viral and even draws a one-symbol reply from Musk - 民視新聞網 (FTV News)

2025-12-28
民視新聞網
Why's our monitor labelling this an incident or hazard?
The humanoid robot uses AI to mimic human movements, and its action directly caused physical harm to the engineer. The incident is a clear example of an AI system malfunction or lack of safety measures leading to injury, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to a person. The involvement of the AI system is explicit and the harm is realized, not just potential. Therefore, this event qualifies as an AI Incident.

Mainland Chinese robot kicks engineer in a "vital spot"! Engineer in so much pain he falls to his knees | TVBS新聞網

2025-12-28
TVBS
Why's our monitor labelling this an incident or hazard?
The robot involved is an AI system as it learns and mimics human punching motions using control algorithms and real-time sensing. The incident resulted in direct physical injury to a person, fulfilling the criteria for harm to health. The injury was caused by the robot's unintended action during its operation, which qualifies as a malfunction or misuse of the AI system. Therefore, this event is an AI Incident due to direct harm caused by the AI system's use.

Unitree robot mistakenly kicks a male engineer in a "sensitive area"; Musk "laughs to tears" | 聯合新聞網 (UDN)

2025-12-30
UDN
Why's our monitor labelling this an incident or hazard?
The humanoid robot is an AI system performing complex motion synchronization tasks. The incident involved the robot's malfunction or error in movement control, which directly caused physical injury to the engineer. This fits the definition of an AI Incident as the AI system's malfunction directly led to harm to a person. The article also mentions the event as an accident during testing, confirming the harm occurred. The rest of the article about the retail store opening and cooperation with JD.com is unrelated to harm and does not affect the classification.

Chinese robot kicks its trainer; Musk posts a hilarious meme | 大紀元 (The Epoch Times)

2025-12-31
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered humanoid robots that have malfunctioned or lost control, causing physical harm to people (injury to trainers and audience members). The robots' behavior is controlled by AI systems that infer and mimic human movements, and their failure to act safely or their erratic actions have directly led to harm. The security vulnerabilities further exacerbate the risk of harm by enabling malicious control. These factors meet the criteria for AI Incidents because the AI systems' malfunction and use have directly caused injury and safety hazards, fulfilling the harm criteria (a) injury or harm to persons. The article reports actual incidents, not just potential risks, so it is not an AI Hazard or Complementary Information.

Chinese AI robot "rebels" and kicks its trainer, sparking heated discussion | 新唐人电视台 (NTDTV)

2025-12-30
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The robot is an AI system as it is performing motion simulation and mimicking human actions, indicating autonomous or semi-autonomous AI control. The incident occurred during use/testing of the AI system, where the robot's action directly caused physical injury to the trainer. This meets the definition of an AI Incident because the AI system's use directly led to harm to a person. The event is not merely a hazard or complementary information, as actual harm occurred. The humorous reactions and references to Asimov's laws do not negate the fact that harm was caused by the AI system's behavior.

China develops a humanoid robot! Engineer "kicked hard", crouches in pain | 三立新聞網 SETN.COM

2025-12-30
三立新聞
Why's our monitor labelling this an incident or hazard?
The robot is an AI system capable of mimicking human movements in real time, indicating AI involvement in its control and operation. The incident involved the use of the AI system during testing, and the robot's action directly caused physical harm to the engineer, fulfilling the criteria for injury to a person. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction or unintended action during use.

Embodied intelligence: domestically produced robots empower a better life

2025-12-31
big5.news.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the state and progress of AI-powered humanoid robots, their applications, and industry dynamics without mentioning any realized harm or direct risk of harm. There is no indication of injury, rights violations, disruption, or other harms caused or plausibly caused by these AI systems. The content aligns with providing context, updates, and insights into AI system development and deployment, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Top 10 tech trends for 2026: robots are really walking into the living room! Will AI snatch your secure, well-paid job?

2025-12-31
數位時代
Why's our monitor labelling this an incident or hazard?
The article mainly forecasts technological trends and potential future impacts of AI without reporting any concrete event where AI has directly or indirectly caused harm. While it mentions plausible risks such as AI-enabled scams, deepfakes, and cybersecurity threats, these are presented as emerging challenges rather than actual incidents. The discussion about AI replacing jobs or privacy concerns is speculative and forward-looking. Therefore, the content fits best as Complementary Information, providing context and insight into the evolving AI ecosystem and its societal implications, rather than reporting an AI Incident or AI Hazard.

Elon Musk laughs as rival Chinese humanoid robot kicks its teleoperator - VnExpress International

2025-12-30
VnExpress International
Why's our monitor labelling this an incident or hazard?
The Unitree G1 humanoid robot is an AI system that uses motion capture data to replicate human movements. The incident involved the robot unintentionally kicking the operator, causing physical injury. This harm directly resulted from the AI system's malfunction (technical delay causing mistimed movement). Therefore, this qualifies as an AI Incident due to injury caused by the AI system's malfunction during its use.