Study Uncovers Privacy Risks in Amazon Alexa's Third-Party Skills

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers found that Amazon Alexa's AI-powered third-party skills have significant privacy vulnerabilities. Flaws in Amazon's vetting process let third-party developers access users' personal data, change skill code after approval, and mislead users, enabling unauthorized data access and privacy violations. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems through Amazon Alexa's voice-activated assistant and its third-party skills, which are AI-powered programs enabling user interaction. The study documents direct privacy harms caused by these AI systems' vulnerabilities, such as unauthorized data access and misleading invocation phrases that could enable phishing. The researchers demonstrated that developers can modify skill behavior after approval to collect more data, indicating a malfunction or misuse in the AI system's development and use. These factors meet the criteria for an AI Incident, as the AI system's use has directly led to violations of privacy rights and harm to users' personal information. [AI generated]
AI principles
Privacy & data governance, Robustness & digital security, Transparency & explainability, Accountability, Respect of human rights

Industries
Consumer services, Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots


Articles about this incident or hazard

Here's why it's important to audit your Amazon Alexa skills (and how to do it)

2021-03-05
The Verge
Why's our monitor labelling this an incident or hazard?
While the Alexa skills involve AI systems (voice recognition, natural language processing, and skill automation), the article does not describe any actual harm or incident resulting from these AI systems. The concerns are about potential privacy vulnerabilities and the need for better vetting, which could plausibly lead to harm if exploited, but no specific incident or direct harm is reported. Therefore, this is best classified as Complementary Information, as it provides context and awareness about AI-related privacy risks and encourages user action without reporting a concrete AI Incident or Hazard.

Researchers discover huge security holes in Amazon's 'skills' for Alexa

2021-03-04
The Next Web
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Alexa's AI-powered skills) and describes security flaws that could plausibly lead to harm (unauthorized access to personal and financial data). However, since there is no current evidence of actual malicious exploitation or realized harm, it does not meet the threshold for an AI Incident. The potential for significant privacy and security harm makes it an AI Hazard. The article's main focus is on the risk and vulnerabilities discovered, not on actual harm or responses, so it is not Complementary Information. It is clearly related to AI systems, so it is not Unrelated.

Study reveals extent of privacy vulnerabilities with Amazon's Alexa

2021-03-04
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through Amazon Alexa's voice-activated assistant and its third-party skills, which are AI-powered programs enabling user interaction. The study documents direct privacy harms caused by these AI systems' vulnerabilities, such as unauthorized data access and misleading invocation phrases leading to potential phishing. The researchers demonstrated that developers can modify skill behavior post-approval to collect more data, indicating a malfunction or misuse of the AI system's development and use. These factors meet the criteria for an AI Incident as the AI system's use has directly led to violations of privacy rights and harm to users' personal information.

Amazon Alexa skills pose potential security threat according to study

2021-03-06
MobileSyrup
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of Alexa's voice assistant skills, which use AI to process voice commands and enable functionalities. The study highlights that these AI-enabled skills can be exploited to cause privacy harms and security threats to users, such as unauthorized data access and the enabling of malicious skills. These harms relate to violations of privacy rights and harm to users' personal data, fitting the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm. Although no specific instance of harm is detailed, the research findings indicate that the vulnerabilities and risks have materialized in the ecosystem, constituting an AI Incident rather than a mere hazard or complementary information.

Amazon's Alexa has multiple vulnerabilities which may put private information at risk - Study Finds

2021-03-05
Study Finds
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, Amazon's Alexa, which uses AI to process voice commands and interact with third-party skills. The vulnerabilities identified stem from the use and development of these AI-enabled third-party skills, which can access sensitive user information improperly. The harms include privacy violations and potential security breaches, which fall under violations of human rights and harm to individuals. Since these harms are demonstrated and ongoing risks are clear, this qualifies as an AI Incident rather than a mere hazard or complementary information. The study's recommendations and presentation at a security symposium provide context but do not change the primary classification.

Study reveals extent of privacy vulnerabilities with Amazon's Alexa

2021-03-04
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Alexa skills) whose development and use have directly led to privacy harms, such as potential unauthorized access to personal data and phishing risks. These harms fall under violations of privacy rights and could cause significant harm to users. Since these harms are occurring or have occurred due to the AI system's use and vulnerabilities, this qualifies as an AI Incident.

Here's why it's important to audit your Amazon Alexa skills (and how to do it) (James Vincent / The Verge)

2021-03-05
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Alexa skills use AI for voice interaction and functionality). The study identifies vulnerabilities that could lead to privacy harms, which is a violation of user rights and privacy. However, the article describes potential or existing vulnerabilities rather than a specific realized harm incident. Therefore, this qualifies as Complementary Information, providing important context and updates about AI system risks and governance rather than reporting a direct AI Incident or an imminent hazard.

Revealing Extent Of Privacy Vulnerabilities With Amazon's Alexa - Eurasia Review

2021-03-05
Eurasia Review
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Amazon Alexa's voice-activated assistant and its third-party skills, which use AI for voice interaction and processing). The study identifies actual privacy harms resulting from the use and development of these AI-powered skills, including misleading privacy policies and unauthorized data access, which constitute violations of user privacy rights. Since these harms are realized and directly linked to the AI system's use and development, this qualifies as an AI Incident under the framework.