US Lawmakers Warn of Backdoor Threat in Chinese Unitree Robots

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A US congressional committee has urged an investigation into, and sanctions against, Chinese robotics firm Unitree over hidden "CloudSail" backdoors in its quadruped robots used by US prisons, police, and the military, warning that the robots could be remotely controlled to exfiltrate data and facilitate espionage, posing significant national security risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (robots developed by Unitree Robotics) that is used by critical U.S. institutions and contains a hidden remote-access channel transmitting data to China. This unauthorized data transmission constitutes a security breach and potentially harms U.S. government operations and infrastructure. The AI system's use has directly led to a violation of security and trust, which fits criterion (d) of the definition of an AI Incident: harm to property, communities, or the environment (here, national security and operational integrity). The presence of the AI system and its misuse is explicit, and the harm is realized, not merely potential.[AI generated]
AI principles
Robustness & digital security; Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability; Safety; Democracy & human autonomy

Industries
Government, security, and defence; Robots, sensors, and IT hardware; Digital security

Affected stakeholders
Government

Harm types
Public interest; Human or fundamental rights; Reputational; Economic/Property

Severity
AI incident

Business function
ICT management and information security; Monitoring and quality control

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard

Unitree Robotics Robots Secretly Transmit Data Back to China

2025-05-07
Liberty Times Net
US Military and Police Use Chinese Unitree Robots; Lawmakers Cite PLA Ties and Urge Investigation

2025-05-07
udn Money (聯合理財網)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of autonomous or semi-autonomous robots with advanced capabilities, including remote data transmission and potential for remote control by unauthorized actors. The use of these AI-enabled robots by U.S. military and law enforcement, combined with the presence of backdoors and ties to the Chinese military, presents direct risks of harm to national security and potentially to individuals through surveillance or misuse. The article reports ongoing use and associated risks, indicating realized security harms or at least direct threats. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems leading to security and privacy harms, as well as potential violations of rights and national security risks.
US Lawmakers Urge Investigation of Chinese Robotics Company Unitree

2025-05-06
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-enabled robots (quadruped robots with cameras and remote control capabilities) used in critical US infrastructure (prisons, police, military). The presence of a hidden backdoor allowing remote access by foreign actors directly threatens national security and the operation of critical infrastructure, fulfilling the criteria for harm under (b) Disruption of critical infrastructure. The involvement of AI in these robots is clear given their autonomous or semi-autonomous capabilities. The event is not merely a potential risk but an ongoing situation with realized security threats, making it an AI Incident rather than a hazard or complementary information. The call for investigation and blacklisting further supports the seriousness of the incident.
US Military and Police Purchase Unitree Robots; Lawmakers Warn of National Security Risks and Urge Sanctions

2025-05-07
Ming Pao News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (robots with military and surveillance capabilities) whose development and use raise plausible national security risks. The congressional committee's concerns and calls for sanctions reflect a credible potential for harm, especially given the military applications and cross-border use. No direct harm or incident is reported, so this is best classified as an AI Hazard, as the event describes plausible future risks stemming from the AI systems' deployment and dual-use nature. It is not Complementary Information because the main focus is on the risk and regulatory response, not on updates or responses to a past incident. It is not an AI Incident because no actual harm or violation has been reported yet.
US Military and Police Use Chinese Unitree Robots; Lawmakers Cite PLA Ties and Urge Investigation

2025-05-07
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of advanced autonomous robots with military and surveillance capabilities. The use of these robots by U.S. military and law enforcement, combined with the presence of backdoor software transmitting data to China, poses direct risks of harm to national security and potentially to individuals through surveillance and control. The event reports realized use and associated risks, indicating direct or indirect harm related to AI system use. Therefore, it qualifies as an AI Incident due to the realized harms and security breaches linked to the AI systems.
US Military and Police Use Unitree Robots; Lawmakers Cite PLA Links and Urge Investigation

2025-05-07
on.cc (東網)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (robots with advanced capabilities and remote access software) whose use and development raise significant concerns about data security, espionage, and military applications. The congressional committee's call for investigation and potential listing on restricted entity lists indicates recognition of plausible future harm. No actual harm or incident is reported as having occurred yet, only the potential for harm through unauthorized data transmission and espionage. Therefore, this qualifies as an AI Hazard, as the development and use of these AI-enabled robots could plausibly lead to harms such as violations of privacy, national security risks, and misuse by foreign military entities.
US Military and Police Use Chinese Unitree Robots; Lawmakers Cite PLA Ties and Urge Investigation

2025-05-07
Radio Taiwan International (Rti)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Unitree's autonomous robots with surveillance and military capabilities) whose use by U.S. agencies has raised concerns about data security, unauthorized remote access (backdoors), and potential hacking leading to espionage or military harm. Although no direct harm is reported yet, the described risks could plausibly lead to significant harms, including national security breaches and military disruptions. The congressional call for investigation and listing reflects recognition of these plausible future harms. Therefore, this qualifies as an AI Hazard due to the credible risk of harm from the AI systems' development and use, but not an AI Incident, since no realized harm is reported.
US Military and Police Use Unitree Robots That Send Data to China

2025-05-07
The Epoch Times (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article details a credible national security threat stemming from the use of AI-enabled robots with hidden backdoors that could be remotely controlled by a foreign adversary. Although no direct harm has occurred, the potential for espionage, unauthorized data access, and disruption of critical infrastructure (military, police, prisons) is significant. This fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to an AI Incident involving harm to critical infrastructure and national security. The article does not report an actual incident of harm but warns of plausible future harm and calls for preventive measures.