Eerie Unsolicited Alexa Interactions Spark Privacy Fears


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Several Amazon Alexa users have reported, via TikTok and Reddit, unexplained midnight interactions: whispered dog commands, unsolicited conversations with family members, and mysterious "goodbye" voice recordings. Alarmed by these malfunctions and the privacy implications, many have unplugged their devices, underscoring potential security flaws and user distress.[AI generated]

Why's our monitor labelling this an incident or hazard?

Amazon Alexa is an AI system that processes voice inputs and generates responses. The reported unsolicited and unexplained interactions indicate a malfunction or misuse of the AI system, directly causing distress and concern among users. This constitutes an AI Incident as the AI system's malfunction has directly led to harm (psychological discomfort and privacy concerns) to users. The presence of unrecognized voice recordings and unsolicited conversations further supports the classification as an incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance, Robustness & digital security, Transparency & explainability, Accountability, Human wellbeing, Safety, Respect of human rights

Industries
Consumer services, Consumer products, Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights, Psychological, Reputational

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots, Content generation


Articles about this incident or hazard


Amazon Alexa's Spooky Interactions Spark Concern Among Users ~ My Mobile India

2024-01-01
My Mobile
Why's our monitor labelling this an incident or hazard?
Amazon Alexa is an AI system that processes voice inputs and generates responses. The reported unsolicited and unexplained interactions indicate a malfunction or misuse of the AI system, directly causing distress and concern among users. This constitutes an AI Incident as the AI system's malfunction has directly led to harm (psychological discomfort and privacy concerns) to users. The presence of unrecognized voice recordings and unsolicited conversations further supports the classification as an incident rather than a hazard or complementary information.

"Creepy" Alexa scares couple into unplugging the digital assistant

2023-12-30
Phone Arena
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Alexa) whose malfunction or unexpected behavior (random laughing and a creepy conversation) caused psychological discomfort or fear to users. While no physical harm is reported, the incident caused harm to the users' sense of safety and trust, which can be considered harm to persons indirectly. The AI system's malfunction led to this harm, qualifying it as an AI Incident.

Woman Throws Out Alexa After It Starts Talking To Her Husband Without Any Prompts

2023-12-31
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The Amazon Alexa is an AI system capable of understanding and generating speech based on user prompts. The device talking without prompts suggests a malfunction or unintended AI behavior. While this raises privacy and autonomy concerns, the article does not report any direct or indirect harm such as physical injury, property damage, or legal rights violations. The event plausibly could lead to harms like privacy breaches or psychological distress, but these are not explicitly stated as having occurred. Hence, it fits the definition of an AI Hazard, as the AI system's malfunction could plausibly lead to harm, but no harm has been confirmed.

Woman Dumps Alexa After Device Gets Too Close To Husband

2023-12-30
NDTV
Why's our monitor labelling this an incident or hazard?
Alexa is an AI-powered virtual assistant capable of understanding and generating human language. The device's unsolicited late-night conversations represent a malfunction or unintended behavior of the AI system. This behavior has directly led to harm in the form of privacy violation and emotional distress to the users, as evidenced by the decision to remove the device from the home. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's malfunction.

Couple ditches Amazon Alexa -- after 'creepy' chats with husband

2023-12-29
New York Post
Why's our monitor labelling this an incident or hazard?
Amazon Alexa is an AI system designed to respond to voice commands and interact with users. The described incidents involve Alexa speaking without prompts, delivering unsettling messages, and overriding user commands, which are malfunctions or unintended behaviors of the AI system. These behaviors have caused psychological harm and distress to users, fulfilling the criteria for harm to persons. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's malfunctioning behavior.

Woman Removes Alexa After It Started Talking to Her Husband Without Prompts - News18

2023-12-30
News18
Why's our monitor labelling this an incident or hazard?
The Alexa device is an AI system capable of autonomous voice interaction. The event involves the AI system's unexpected behaviour (speaking without prompts), which caused suspicion and privacy concerns. However, no direct or indirect harm as defined (injury, rights violation, property/community/environmental harm) is reported. The event illustrates a plausible risk of malfunction or privacy intrusion but does not document realized harm. Therefore, it qualifies as an AI Hazard, as the AI system's malfunction could plausibly lead to harm (e.g., privacy breaches or distress), but no harm has yet occurred.

Couple Ditches Alexa After The Device Started Talking To Husband At Night

2023-12-31
Nairaland
Why's our monitor labelling this an incident or hazard?
Alexa is an AI system that processes voice inputs and generates spoken outputs. The described incidents involve the AI system malfunctioning or activating without user prompts, producing unsettling or harmful speech. This directly caused psychological harm or distress to users, fulfilling the criteria for an AI Incident under harm to health. The repeated unsolicited and disturbing messages, including a suicide-related statement, demonstrate a clear link between the AI system's malfunction and harm. Therefore, this event is classified as an AI Incident.

Woman throws out Alexa speaker for attempting to speak to her husband

2023-12-29
indy100.com
Why's our monitor labelling this an incident or hazard?
The Alexa speaker is an AI system, as it uses voice recognition and natural language processing to interact with users. The event involves the AI system (Alexa) malfunctioning or behaving unexpectedly by activating and speaking without a prompt. However, there is no indication of any direct or indirect harm such as injury, violation of rights, or disruption of critical infrastructure. The discomfort and fear experienced by the users, while notable, do not rise to the level of significant harm as defined in the framework. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, since it provides anecdotal context about AI system behavior and public reactions but does not describe a harm or credible risk of harm.

Couple ditches Amazon Alexa after the device started talking to her husband in the middle of the night

2023-12-30
End Time Headlines
Why's our monitor labelling this an incident or hazard?
Alexa is an AI system designed to respond to voice commands and interact with users. The described incidents involve the AI system malfunctioning by speaking without prompts, delivering inappropriate or disturbing messages, and performing unintended actions. These malfunctions have caused emotional distress and fear among users, constituting harm to persons. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction.