Robot Vacuum Hacked to Hurl Racial Slurs

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In the US, a man's Ecovacs robot vacuum was hacked and used to shout racial slurs at him. The incident highlights security vulnerabilities that leave smart devices open to hacking. When the owner reported the issue, the company attributed it to credential stuffing, a common attack against devices with inadequate security measures.[AI generated]

Why's our monitor labelling this an incident or hazard?

The robot vacuum is an AI-driven system (autonomous navigation, voice output) whose security flaws were exploited by hackers. Its malicious output (racial insults) constitutes realized harm—verbal harassment and violation of the user’s rights—directly resulting from the AI system’s use/malfunction due to hacking.[AI generated]
AI principles
Robustness & digital security · Privacy & data governance · Accountability · Safety · Respect of human rights · Transparency & explainability · Human wellbeing

Industries
Consumer products · Digital security · Robots, sensors, and IT hardware

Affected stakeholders
Consumers

Harm types
Psychological · Reputational · Human or fundamental rights

Severity
AI incident


Articles about this incident or hazard

In the United States, a man was insulted by his robot vacuum, which kept repeating a racial slur at him

2025-01-22
dagospia.com
Why's our monitor labelling this an incident or hazard?
The robot vacuum is an AI-driven system (autonomous navigation, voice output) whose security flaws were exploited by hackers. Its malicious output (racial insults) constitutes realized harm—verbal harassment and violation of the user’s rights—directly resulting from the AI system’s use/malfunction due to hacking.
There's an enemy in the vacuum cleaner. What to do? On the case of the hacked robot that shouted a racial slur...

2025-01-24
dagospia.com
Why's our monitor labelling this an incident or hazard?
These are real harms directly resulting from the malicious use of AI-enabled home robots: violations of privacy (unauthorized surveillance and photo capture) and emotional harm through racist insults. The incidents stem from exploitation of security flaws in the AI systems, satisfying the definition of an AI Incident.
USA, hacked robots: smart vacuum cleaner insults its owners. Here's how to protect yourself

2025-01-23
lastampa.it
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (smart robotic vacuum cleaners with autonomous navigation and voice capabilities) that were compromised through security flaws, leading to direct harm: privacy breaches (unauthorized photos) and verbal harassment (racist insults). The harm is realized and ongoing, not merely potential. Because the AI system's malfunction or misuse (via hacking) directly caused violations of privacy and emotional harm to persons and their rights, this qualifies as an AI Incident rather than a hazard or complementary information.
USA, hacked robots: smart vacuum cleaner insults its owners

2025-01-21
Sky
Why's our monitor labelling this an incident or hazard?
The smart vacuum cleaner qualifies as an AI system due to its autonomous operation and intelligent navigation. The hacking incident represents misuse of the AI system, leading to direct harm (emotional harm) to the user. Since the AI system's compromised use caused harm, this event qualifies as an AI Incident.
Robot vacuums have started insulting their owners: "It called me the N-word"

2025-01-20
Fanpage
Why's our monitor labelling this an incident or hazard?
The robot vacuum cleaners are AI systems with autonomous navigation and voice output capabilities. The hacking and unauthorized control of these AI systems have directly caused harm by insulting owners and spying on them, violating privacy and potentially causing psychological harm. The article provides concrete examples of these harms occurring, not just potential risks. The involvement of AI system vulnerabilities and their exploitation by hackers meets the criteria for an AI Incident, as the harms are realized and directly linked to the AI systems' malfunction and misuse.