Voice Assistants' False Triggers Lead to Widespread Privacy Violations


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers found over 1,000 words and phrases that inadvertently activate AI voice assistants like Alexa, Siri, Google Assistant, and Cortana, causing them to record and transmit private conversations without user consent. These recordings are sometimes reviewed by company employees, resulting in significant privacy breaches due to AI system malfunctions.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves voice assistants, AI systems that rely on speech recognition for wake-word activation. The incorrect activation and subsequent recording of private conversations directly lead to privacy violations, which constitute a breach of fundamental rights. Therefore, this is an AI Incident because the AI system's malfunction (false activation) has directly led to harm (privacy intrusion and potential rights violations).[AI generated]
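
To illustrate the mechanism behind these false activations, the sketch below shows the trade-off in deliberately simplified form: a single similarity threshold decides whether an overheard phrase counts as the wake word, and the more forgiving that threshold, the more unrelated phrases wake the device. This is only an illustration of the reported behaviour, not any vendor's implementation; real assistants use acoustic neural models rather than string matching, and the helper names (match_score, would_trigger), the example phrases, and the thresholds are all hypothetical choices made for this sketch.

```python
# Illustrative only: a toy wake-word detector that uses string similarity as a
# stand-in for an acoustic confidence score. Real assistants (Alexa, Siri,
# Google Assistant, Cortana) do not work this way; the point is the threshold
# trade-off, not the matching method.
from difflib import SequenceMatcher


def match_score(phrase: str, wake_word: str) -> float:
    """Crude proxy for the detector's confidence that `phrase` is the wake word."""
    return SequenceMatcher(None, phrase.lower(), wake_word.lower()).ratio()


def would_trigger(phrase: str, wake_word: str, threshold: float) -> bool:
    """The device 'wakes up' (and starts recording) once the score clears the threshold."""
    return match_score(phrase, wake_word) >= threshold


if __name__ == "__main__":
    wake_word = "alexa"
    overheard = ["alexa", "a letter", "election", "unacceptable", "hello"]

    # A strict threshold rarely misfires; a forgiving one accepts more and
    # more unrelated speech -- the accidental triggers the researchers catalogued.
    for threshold in (0.9, 0.45, 0.3):
        hits = [p for p in overheard if would_trigger(p, wake_word, threshold)]
        print(f"threshold {threshold}: device would wake on {hits}")
```

Lowering the threshold reduces missed activations but, as the loop shows, can only enlarge the set of phrases that wake the device, which is why a systematic search could surface more than 1,000 accidental triggers.
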
AI principles
Privacy & data governance
Robustness & digital security
Transparency & explainability

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots


Articles about this incident or hazard


WARNING: These 1,000 Phrases Can Incorrectly Activate Siri, Alexa, and Google Assistant: Privacy Intrusion Might Happen

2020-07-01
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves voice assistants, AI systems that rely on speech recognition for wake-word activation. The incorrect activation and subsequent recording of private conversations directly lead to privacy violations, which constitute a breach of fundamental rights. Therefore, this is an AI Incident because the AI system's malfunction (false activation) has directly led to harm (privacy intrusion and potential rights violations).

Beware of these words which could trigger your Apple Siri, Google Assistant, Amazon Alexa, and Microsoft Cortana - Cybersecurity Insiders

2020-07-02
Cybersecurity Insiders
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (virtual assistants) whose malfunction or unintended activation leads to privacy harm by recording and transmitting conversations without user consent. This constitutes a violation of user privacy rights, which falls under harm to individuals. Since the harm is occurring due to the AI system's use and malfunction, this qualifies as an AI Incident.

Uncovered: 1,000 phrases that incorrectly trigger Alexa, Siri, and Google Assistant

2020-07-01
Ars Technica
Why's our monitor labelling this an incident or hazard?
The voice assistants mentioned are AI systems that process natural language to respond to user commands. The incorrect triggering causes these AI systems to record private conversations without user intent, which constitutes a violation of privacy rights, a form of harm to individuals. Since the recordings are shared with manufacturers and accessed by employees, this represents a breach of obligations intended to protect fundamental rights. Therefore, this event involves the use and malfunction of AI systems leading to realized harm, qualifying it as an AI Incident.

Election, Montana, Hey Jerry: 1,000 Words That Trigger Alexa, Siri, Google, Cortana

2020-07-02
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (voice-activated assistants with speech recognition and response capabilities). The event concerns the use and design of these AI systems, specifically their sensitivity to trigger words. While the study identifies a design that could lead to privacy breaches (harm to individuals' privacy), no actual harm or incident is reported as having occurred. The companies have taken steps to mitigate related risks (no longer using human contractors for audio review). Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to privacy harms but does not describe a realized AI Incident. It is not merely complementary information, because the main focus is on the potential risk identified by the study, not on responses or updates to past incidents.

You Should Mute Your Smart Speaker's Mic More Often

2020-07-02
Lifehacker
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (voice assistants) whose malfunction (false activations) directly leads to privacy harms, a violation of user rights and data protection laws. The article describes realized harms from the AI systems' use, including unauthorized recording and potential misuse of personal data. Therefore, this qualifies as an AI Incident due to direct harm to individuals' privacy and rights caused by the AI systems' malfunction and use.

Tired of saying 'Hey Google' and 'Alexa'? Change it up with these unintentional wake words

2020-07-02
Fast Company
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (voice assistants) whose use leads to privacy harms through unintended recording of private moments, which is a violation of privacy rights (a human rights concern). Since the AI systems' malfunction (false wake word detection) directly leads to this harm, it qualifies as an AI Incident under the framework.

You Should Mute Your Smart Speaker's Mic More Often

2020-07-03
Lifehacker Australia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (voice assistants) whose malfunction (false activations) directly leads to unauthorized recording and potential privacy breaches, which are harms to human rights and privacy. The article details actual occurrences and known practices of recording and human review of audio data, indicating realized harm rather than just potential risk. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Siri activated by 'a city' or 'OK, Jerry' reveals study of false wake words

2020-07-02
Cult of Mac
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (voice assistants) and their malfunction or design choice (forgiving wake word detection) that can lead to unintended activation and privacy risks. While the article describes a plausible risk of privacy harm due to accidental activation and data transmission, it does not document a specific realized harm or incident. Therefore, it fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to privacy harms, but no concrete incident is reported here.

1,000 False Wakewords: A Letter! Buy 200 Toilet Rolls - Security Boulevard

2020-07-02
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (voice assistants with wake word detection AI) that malfunction by falsely triggering on similar-sounding phrases, causing private audio to be recorded and sent to companies where employees listen to them. This directly leads to violations of privacy rights, a breach of fundamental rights protected by law. The article documents that this is an ongoing issue, not just a potential risk, and that harm is realized. Hence, it meets the criteria for an AI Incident due to direct harm caused by AI malfunction and use.

When speech assistants listen even though they shouldn't

2020-07-03
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (voice assistants) whose malfunction (false activations) directly leads to harm in the form of privacy violations, which constitute a breach of fundamental rights. The recording and human transcription of private conversations without user intent or consent is a clear violation of privacy rights. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI systems' malfunction and use.

Researchers compile list of 1,000 words that accidentally trigger Alexa, Siri, and Google Assistant

2020-07-02
expressdigest.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (virtual assistants with speech recognition AI) whose malfunction (inadvertent activation by many words) leads to unintended recording and transmission of private conversations. This directly causes harm to user privacy, a violation of human rights and legal protections. The harm is realized, not just potential, as recordings have been made and analyzed by company workers. Hence, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights and privacy (c).

Uncovered: 1,000 Phrases That Incorrectly Trigger Alexa, Siri, and Google Assistant (Slashdot)

2020-07-01
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (voice assistants) whose malfunction (false triggering) leads to privacy violations, a breach of fundamental rights. The unintended activation and recording of private conversations constitute a violation of privacy rights, which falls under violations of human rights or breach of obligations under applicable law. Therefore, this is an AI Incident because the AI system's malfunction directly leads to harm (privacy violation).

"Daiquiri" or "Am Sonntag": If you say certain words, your voice assistant secretly listens in

2020-07-03
Focus
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (smart speakers with voice assistants) that use AI for speech recognition and activation. The false activations and unintended recordings directly lead to privacy violations, a breach of fundamental rights protected by law. The involvement of humans listening to these recordings further confirms the privacy harm. The harm is realized (not just potential), as private conversations including sensitive topics have been overheard. Hence, this qualifies as an AI Incident under the definition of violations of human rights due to AI system malfunction and use.

Smart Speakers: Intimate Moments Overheard

2020-06-30
PRESSEPORTAL
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (smart speaker voice assistants) that use AI for activation word detection and command processing. The malfunction or design issues cause these systems to record private conversations without user consent, directly leading to privacy violations and breaches of fundamental rights. The harm is realized, as intimate moments have been overheard and recorded. This fits the definition of an AI Incident because the AI system's malfunction has directly led to harm (violation of privacy and rights).

"Sex, Streit, Arztgespräche: wie oft Smart Speaker heimlich mithören" - DIGITAL FERNSEHEN

2020-07-01
DIGITAL FERNSEHEN
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (smart speaker voice assistants) that have malfunctioned by unintentionally activating and recording private conversations without user consent. This has directly led to harm in the form of privacy violations and breaches of fundamental rights. The involvement of human reviewers listening to these recordings further confirms the harm. The event is not merely a potential risk but a realized incident with documented harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Smart Speakers: Intimate Moments Overheard

2020-07-01
Mimikama
Why's our monitor labelling this an incident or hazard?
The smart speakers use AI-based voice recognition systems to detect activation words and process commands. The article details that these AI systems frequently activate erroneously, recording private conversations without user intent. This malfunction directly results in harm to individuals' privacy and potentially breaches data protection and human rights laws. The involvement of AI in the malfunction and the resulting privacy violations qualifies this as an AI Incident under the framework, as the harm (privacy violations) has occurred and is directly linked to the AI system's malfunction.

"Am Sonntag" statt "Alexa": Studie zeigt, wie hellhörig Smart Speaker wirklich sind

2020-07-01
Weser Kurier
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (smart speakers with voice-activated AI assistants) whose malfunction (false activations) leads to unintended recording of private conversations, which constitutes a violation of privacy rights, a form of harm to individuals. The involvement of AI in the malfunction and the resulting privacy harm qualifies this as an AI Incident under the framework, as the AI system's malfunction directly leads to harm (privacy violations).

Smart Speakers: Intimate Moments Overheard

2020-06-30
firmenpresse.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (smart speakers with voice recognition AI) that malfunction by activating without explicit commands, leading to the recording and human review of private conversations without proper consent. This constitutes a violation of privacy and fundamental rights, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is direct and realized, as private conversations have been overheard and reviewed, causing harm to individuals' privacy and trust. Therefore, this is classified as an AI Incident.