Critical Vulnerabilities in Hospital Robots Expose Patient Safety and Privacy Risks



Researchers discovered five critical zero-day vulnerabilities in Aethon TUG autonomous hospital robots, allowing potential attackers to disrupt medication delivery, interfere with hospital operations, and access sensitive patient data. The flaws, which posed significant risks to health and privacy, were patched after coordinated disclosure and remediation efforts.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (autonomous hospital robots) whose malfunction or misuse (via security vulnerabilities) could directly lead to harm such as privacy violations, disruption of hospital operations, and potential physical risks. While no incident of harm is reported, the credible risk of remote hijacking and misuse constitutes a plausible future harm. Therefore, this qualifies as an AI Hazard. The article also mentions remediation efforts, but the main focus is on the vulnerabilities and their potential risks rather than the remediation, so it is not Complementary Information.[AI generated]
AI principles
Safety; Robustness & digital security; Privacy & data governance; Respect of human rights; Accountability

Industries
Healthcare, drugs, and biotechnology; Robots, sensors, and IT hardware; Digital security; IT infrastructure and hosting; Logistics, wholesale, and retail

Affected stakeholders
Consumers

Harm types
Physical (injury); Human or fundamental rights; Public interest; Reputational; Economic/Property

Severity
AI hazard

Business function:
Logistics

AI system task:
Recognition/object detection; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard


Researchers find autonomous hospital robots at risk of remote hijacking - Security - cnBeta.COM

2022-04-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous hospital robots) whose malfunction or misuse (via security vulnerabilities) could directly lead to harm such as privacy violations, disruption of hospital operations, and potential physical risks. While no incident of harm is reported, the credible risk of remote hijacking and misuse constitutes a plausible future harm. Therefore, this qualifies as an AI Hazard. The article also mentions remediation efforts, but the main focus is on the vulnerabilities and their potential risks rather than the remediation, so it is not Complementary Information.

Medical robot system Tug has serious vulnerabilities, allowing remote control by hackers

2022-04-15
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The Tug system is an AI-enabled medical robot used autonomously in hospitals. The reported security vulnerability (JekyllBot:5) could allow remote control by hackers, which could disrupt hospital operations and potentially harm patients or staff. Although no incident has occurred yet, the credible risk of such harm makes this an AI Hazard. The event does not describe actual harm or an incident but highlights a plausible threat from the AI system's malfunction or misuse. The cooperation between the security company and manufacturer to patch the vulnerability is a mitigating action but does not change the classification.

Hospital robots contain vulnerabilities that allow remote data theft and hijacking

2022-04-13
iThome Online
Why's our monitor labelling this an incident or hazard?
The hospital robot system qualifies as an AI system due to its autonomous navigation and operational capabilities within hospital settings. The vulnerabilities discovered allow attackers to remotely hijack or manipulate the AI system, which could directly lead to harm to patients (e.g., interfering with medication delivery), disruption of critical hospital infrastructure (e.g., elevators, doors), and privacy violations (e.g., unauthorized video capture, data theft). Since the article does not report any realized harm but highlights the serious potential consequences and the need for urgent patching, this event fits the definition of an AI Hazard rather than an AI Incident. The timely patching mitigates the risk but does not negate the plausible future harm that these vulnerabilities could have caused if exploited.

Hospital robots could 'wreak havoc' after being exposed to hackers

2022-04-13
The Independent
Why's our monitor labelling this an incident or hazard?
The Tug robots are AI systems used autonomously in hospitals to transport items. The discovered zero-day vulnerabilities in their software could allow hackers to control the robots remotely, access real-time camera feeds and user data, and cause physical harm by crashing into staff, visitors, or equipment. This represents a plausible risk of injury or harm to people (harm category a) and disruption to hospital operations (harm category b). Since the harm is potential but credible and serious, this event qualifies as an AI Hazard rather than an AI Incident, as no actual exploitation or harm has been reported yet.

Hospital robots face attack by hackers after security flaws found, experts warn

2022-04-13
The Sun
Why's our monitor labelling this an incident or hazard?
The robots are AI systems due to their autonomous navigation and task execution in hospital settings. The security flaws represent a malfunction or vulnerability in their development and use. While no direct harm has occurred yet, the report details credible risks of unauthorized control and malware installation that could disrupt hospital operations and patient safety, constituting plausible future harm. The event does not describe an actual incident of harm but warns of a significant threat, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Autonomous robots used in hundreds of hospitals at risk of remote hijacks

2022-04-12
TechCrunch
Why's our monitor labelling this an incident or hazard?
The autonomous hospital robots qualify as AI systems due to their self-controlled, autonomous operation in transporting critical goods and navigating hospital environments. The vulnerabilities in the base servers controlling these robots represent malfunctions or security flaws in the AI system's use. The potential harms include unauthorized physical access to restricted areas, privacy violations through camera feeds, and disruption of hospital operations, all of which can directly or indirectly harm patients and hospital staff. Since some robots are internet-exposed and vulnerable to remote hijacking, the risk is immediate and significant. The event reports realized vulnerabilities and potential exploitation, constituting an AI Incident rather than a mere hazard or complementary information.

Some autonomous robots easily hijacked, report says

2022-04-12
Hospital Review
Why's our monitor labelling this an incident or hazard?
The robots are autonomous AI systems used in hospitals, and the vulnerabilities in their software could be exploited to cause harm. Since the report does not mention any realized harm but highlights the potential for unauthorized control and privacy violations, this situation constitutes an AI Hazard rather than an AI Incident. The fixes have been issued, but the extent of patching is unknown, so the risk remains plausible.

Critical bug allows medical robot to be remotely controlled

2022-04-12
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous medical robots) whose vulnerabilities could be exploited to cause direct harm to patients and hospital operations, including disruption of critical infrastructure and potential injury or harm to health. The vulnerabilities allow remote control of the robots, access to sensitive data, and interference with hospital functions. Although no exploitation has occurred yet, the direct link between the AI system's malfunction (security flaws) and the potential for serious harm meets the criteria for an AI Incident. The event is not merely a hazard because the vulnerabilities exist and could be exploited, and the potential harms are clearly articulated and significant. It is not complementary information because the main focus is on the vulnerabilities and their potential for harm, not on responses or broader ecosystem context. It is not unrelated because the event clearly involves AI systems and their security.

5 critical zero-days found in Aethon TUG smart robots used in global hospitals

2022-04-13
SC Media
Why's our monitor labelling this an incident or hazard?
The Aethon TUG robots are AI systems due to their autonomous operation and sensor-based navigation in hospital environments. The vulnerabilities directly relate to the security and control of these AI systems. While no actual harm has been reported, the potential for attackers to disrupt critical hospital infrastructure, access sensitive medical data, and hijack administrative sessions constitutes a plausible risk of harm. Therefore, this event qualifies as an AI Hazard because it describes credible vulnerabilities in AI systems that could plausibly lead to an AI Incident involving harm to health, disruption of critical infrastructure, and violation of privacy rights if exploited.

Cynerio Discovers Vulnerabilities to Remotely Control Hospital Robots

2022-04-15
HIT Consultant Media
Why's our monitor labelling this an incident or hazard?
The Aethon TUG robots are AI systems as they autonomously navigate and perform healthcare tasks using sensors, cameras, and decision-making algorithms. The vulnerabilities allow attackers to remotely control these AI systems, potentially causing harm to patients by disrupting medication delivery, obstructing hospital operations, and violating privacy through surveillance. This constitutes direct harm or risk of harm to health and hospital infrastructure, fulfilling the criteria for an AI Incident. The mitigation efforts are complementary information but do not negate the incident classification since the vulnerabilities existed and posed real risks.

Autonomous robots used in hundreds of hospitals at risk of remote hijacks (Zack Whittaker/TechCrunch)

2022-04-12
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The event involves autonomous hospital robots, which are AI systems performing complex tasks autonomously. The article focuses on the risk of remote hijacking, a security vulnerability that could plausibly lead to harm such as disruption of hospital operations or harm to patients. Since no actual harm or incident is reported, this qualifies as an AI Hazard rather than an AI Incident. The presence of AI is reasonably inferred from the description of autonomous robots operating in hospital settings.

Hospital hallway robots get patches for potentially serious bugs

2022-04-12
CyberScoop
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (semi-autonomous hospital robots) whose vulnerabilities could directly disrupt patient care and compromise sensitive information, both of which constitute harm to health and hospital operations. The vulnerabilities were discovered and patched before exploitation, but the potential for serious harm was real and credible. The AI system's malfunction (security flaws) is central to the incident. Hence, this is an AI Incident rather than a hazard or complementary information, as the vulnerabilities represent a direct risk of harm linked to the AI system's use in a critical infrastructure environment (hospitals).

Report: Zero-Day Flaws Pose Attack Risks to Hospital Robots

2022-04-14
GovInfoSecurity.com
Why's our monitor labelling this an incident or hazard?
The event involves autonomous hospital robots, which qualify as AI systems due to their autonomous operation and decision-making capabilities. The discovered vulnerabilities could allow attackers to take control of these robots, disrupt medication delivery and elevator operation (critical infrastructure), and breach patient privacy (violation of rights). These harms are direct and significant, fulfilling the criteria for an AI Incident. Although patches have been issued, the event reports actual vulnerabilities that could have led to harm, not just potential future harm. Therefore, this is classified as an AI Incident rather than an AI Hazard or Complementary Information.

Five zero days affecting Aethon hospital autonomous robots patched

2022-04-12
The Record by Recorded Future
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Aethon TUG autonomous mobile robots) whose malfunction or exploitation could directly lead to harm, including disruption of critical hospital infrastructure and violation of privacy rights. While no actual harm was reported, the vulnerabilities posed a credible and serious risk of harm if exploited. Since the harm was not realized but could plausibly have occurred, this qualifies as an AI Hazard rather than an AI Incident. The coordinated patching and remediation efforts are part of the response but do not change the classification of the original event.