Unitree Robot Bluetooth Flaw Exposes Thousands to Remote Takeover

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Security researchers revealed a critical Bluetooth vulnerability in Unitree's AI-powered robots, allowing attackers to remotely control and infect large numbers of devices. The flaw enables self-propagating attacks, risking data theft and unauthorized robot control, with potential impacts on public safety, especially where these robots are deployed in public services.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly identifies Unitree's robots as AI systems (quadruped and humanoid robots) with autonomous functions. The Bluetooth vulnerability allows attackers to gain control over these AI systems, enabling self-propagating malware infections that can compromise many robots. This leads to direct harm through unauthorized control, data theft, and potential misuse of the robots, which can affect property, privacy, and public safety. The harm is realized as the vulnerability is confirmed and the risk is active, with some affected users already deploying these robots. The event involves the use and malfunction (security failure) of AI systems leading to harm, fitting the definition of an AI Incident.[AI generated]
AI principles
Robustness & digital security; Safety; Accountability; Privacy & data governance

Industries
Robots, sensors, and IT hardware; Government, security, and defence

Affected stakeholders
Consumers; General public

Harm types
Human or fundamental rights; Economic/Property; Public interest

Severity
AI incident

AI system task
Other

Articles about this incident or hazard

Unitree Robotics hit by Bluetooth flaw; large numbers of robots could be "infected" and controlled

2025-09-26
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies Unitree's robots as AI systems (quadruped and humanoid robots) with autonomous functions. The Bluetooth vulnerability allows attackers to gain control over these AI systems, enabling self-propagating malware infections that can compromise many robots. This leads to direct harm through unauthorized control, data theft, and potential misuse of the robots, which can affect property, privacy, and public safety. The harm is realized as the vulnerability is confirmed and the risk is active, with some affected users already deploying these robots. The event involves the use and malfunction (security failure) of AI systems leading to harm, fitting the definition of an AI Incident.

Previously linked to the "Taipei Chinese robot dog" controversy: Unitree Robotics hit by Bluetooth flaw; large numbers of robots could be "infected" and controlled - Politics - Liberty Times Net

2025-09-26
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems—Unitree's robots with autonomous capabilities and telemetry data—and details a security vulnerability that allows attackers to control these robots remotely. This vulnerability has already been demonstrated to allow self-spreading attacks, which could lead to large-scale control of robotic devices. The potential harms include unauthorized control of robots used in public spaces and law enforcement, risking physical harm, disruption, and privacy violations. Since the vulnerability is active and the risk is imminent and significant, this qualifies as an AI Incident rather than a mere hazard or complementary information.

Unitree Robotics hit by Bluetooth flaw; large numbers of robots could be "infected" and controlled | Technology | Central News Agency (CNA)

2025-09-26
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI-enabled robots with autonomous capabilities as affected by a Bluetooth vulnerability that allows attackers to control them remotely and spread the infection to other robots. The vulnerability has been confirmed by security researchers and affects multiple robot models. The potential harms include unauthorized control, data theft, and the creation of a botnet, which can disrupt operations and pose risks to public safety. The involvement of AI systems is clear, as these robots rely on AI for operation and decision-making. The harm is direct and materialized in the form of security breaches and control loss, meeting the criteria for an AI Incident.

Unitree Robotics hit by Bluetooth flaw; large numbers of robots could be "infected" and controlled - Rti

2025-09-26
Rti (Radio Taiwan International)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems—Unitree's autonomous robots—and details a security flaw that allows attackers to control these AI systems remotely. The vulnerability is self-propagating and can create a botnet of compromised robots, which is a direct harm scenario. The harm includes potential loss of control over physical robots, data theft, and risks to public safety, especially since police forces are testing these robots. The AI system's malfunction (security vulnerability) directly leads to these harms. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Unitree Robotics hit by Bluetooth flaw: large numbers of robots could be "infected" and controlled

2025-09-26
TechNews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Unitree's autonomous robots with AI capabilities) and describes a security flaw that allows attackers to control these robots remotely and propagate the attack to other units. This constitutes a malfunction or misuse of the AI system leading to direct harm risks, including unauthorized control and data theft. The presence of deployed robots in public service contexts heightens the potential for actual harm. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's vulnerability and the potential for significant harm to property, data, and possibly public safety.

Unitree robots found to have security flaw; could be controlled en masse | Bluetooth | Botnet | The Epoch Times

2025-09-27
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems embedded in Unitree's robots, which are compromised through a software vulnerability allowing remote control and data exfiltration. The resulting botnet and unauthorized surveillance represent direct harms to property, communities, and potentially individuals' privacy and security rights. The use and malfunction of the AI-enabled robots have directly led to these harms or risks thereof. Therefore, this qualifies as an AI Incident under the OECD framework.

Unitree robots found to have security flaw; could be controlled en masse | Bluetooth | Botnet | The Epoch Times

2025-09-27
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The robots are AI systems with autonomous or semi-autonomous capabilities. The security vulnerability allows attackers to take control of these AI systems, leading to direct harm through unauthorized surveillance (privacy violations) and potential malicious use of the robots (harm to property or communities). The event reports realized harm and confirmed exploitation potential, not just a theoretical risk. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Multiple Unitree Robotics robot models exposed to the UniPwn security vulnerability

2025-09-30
Zhongguancun Online (ZOL)
Why's our monitor labelling this an incident or hazard?
The robots involved are AI systems as they are autonomous robotic devices. The security vulnerability allows attackers to remotely control these AI systems, which could lead to harm including unauthorized actions by the robots, privacy violations, or physical damage. The worm-like propagation increases the risk of widespread impact. Since the article does not report actual harm occurring yet but highlights a serious security flaw with plausible future harm, this event fits the definition of an AI Hazard rather than an AI Incident. The company's response and ongoing fixes are noted but do not change the classification since the main focus is on the vulnerability and its potential consequences.

Unitree Robotics robots hit by Wi-Fi connectivity vulnerability; company issues emergency fix and strengthens security mechanisms

2025-09-30
Zhongguancun Online (ZOL)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (robots with autonomous capabilities and network connectivity). The security vulnerability is a malfunction or flaw in the AI system's use that could plausibly lead to harm (e.g., unauthorized control). Since no actual harm is reported, but the risk is credible and the company is responding, this fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the potential risk and the company's mitigation efforts, not on realized harm.

Combined turnover on the Shanghai, Shenzhen, and Beijing exchanges tops 2 trillion yuan - 36Kr

2025-09-30
36Kr
Why's our monitor labelling this an incident or hazard?
The robots mentioned are AI-enabled systems (robotics with network connectivity and configuration interfaces likely involving AI components). The vulnerability could plausibly lead to an AI Incident if exploited, as attackers gaining control could cause harm. However, the article reports the vulnerability discovery and ongoing remediation without any realized harm yet. Therefore, this is an AI Hazard, as the vulnerability could plausibly lead to harm but no harm has been reported so far.

Unitree robots found to have vulnerability allowing robot-to-robot infection; company responds swiftly

2025-09-30
Sina Finance
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Unitree humanoid and quadruped robots with autonomous capabilities). The vulnerability allows attackers to take control of these AI systems, which directly leads to harms such as unauthorized remote control, data theft, and the formation of a botnet that can disrupt operations and potentially cause physical or virtual harm. The wormable nature of the vulnerability means the harm can propagate autonomously among AI systems. The company's delayed response and the public availability of exploit tools further exacerbate the risk and actual harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction and exploitation have directly led to significant harm.

Unitree Robotics responds to robot security vulnerability: most fixes complete, update to be pushed - m.163.com (NetEase Mobile)

2025-09-30
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled robots with autonomous capabilities and network connectivity vulnerabilities that have been exploited or could be exploited to cause harm. The security flaws have led to unauthorized access and control, which directly threaten user privacy and safety, and could cause physical harm or property damage if robots are compromised. The company's response to fix the vulnerabilities does not negate the fact that harm has occurred or is ongoing. The detailed description of realized harms and the AI system's role in enabling these harms meets the criteria for an AI Incident rather than a hazard or complementary information.

China's Unitree robots hit by Bluetooth flaw, could be controlled en masse | The Epoch Times - Taiwan

2025-10-01
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-enabled robots with a Bluetooth vulnerability that allows attackers to control them remotely, forming a botnet. This constitutes a direct harm scenario involving unauthorized control and data exfiltration, which can be considered harm to property and privacy rights (human rights). The AI system's malfunction (security flaw) directly leads to these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Sends data to China once every 5 minutes! Experts reveal "this" humanoid robot has two major security flaws - Liberty Times Finance

2025-10-03
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI systems (Unitree's humanoid and quadruped robots) with autonomous capabilities and sensor data collection. The robots' automatic data transmission without consent constitutes a violation of privacy rights, a form of harm under the framework. The BLE vulnerability allows attackers to gain root access and control the robots, which can lead to further harm such as unauthorized surveillance, data theft, or physical misuse of robots. The malware's worm-like propagation exacerbates the risk, indicating a systemic security failure. Since these harms are occurring or have occurred, and the AI system's malfunction and use are central to the incident, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Humanoid robot hit by another security vulnerability, sends data to China every 5 minutes | Unitree Robotics | The Epoch Times

2025-10-03
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI-enabled robots with autonomous capabilities and sensor data collection transmitting data without consent, constituting a violation of privacy and security (harm to persons and communities). The vulnerability allows remote control and worm-like spread, increasing risk of misuse and harm. The harm is realized (data exfiltration) and the potential for physical harm is noted by experts. The AI system's malfunction and security flaws directly lead to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.