Amazon's AI Chatbot Q Leaks Confidential Data Due to Hallucinations

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Amazon's AI chatbot Q experienced severe hallucinations that leaked confidential information, including AWS data center locations and internal programs. Employees flagged the incident as critical, prompting an urgent engineering response. Although Amazon downplayed the malfunction, it raised significant privacy and security concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Amazon's Q chatbot) is explicitly mentioned and malfunctioned by generating hallucinations and leaking confidential information. These malfunctions directly led to harmful outcomes or risks, such as bad legal advice that could cause health issues and harmful responses that could compromise customer accounts. The harms fall under injury or harm to persons and harm to property or rights. Hence, the event meets the criteria for an AI Incident.[AI generated]
AI principles
Privacy & data governance, Robustness & digital security, Safety, Transparency & explainability, Accountability, Respect of human rights

Industries
IT infrastructure and hosting, Digital security

Affected stakeholders
Business

Harm types
Human or fundamental rights, Reputational, Public interest, Economic/Property

Severity
AI incident

Business function:
Research and development

AI system task:
Interaction support/chatbots, Content generation


Articles about this incident or hazard

Amazon's AI chatbot, Q, might be in the throes of a mental health crisis

2023-12-02
Business Insider
Why's our monitor labelling this an incident or hazard?
The AI system (Amazon's Q chatbot) is explicitly mentioned and malfunctioned by generating hallucinations and leaking confidential information. These malfunctions directly led to harmful outcomes or risks, such as bad legal advice that could cause health issues and harmful responses that could compromise customer accounts. The harms fall under injury or harm to persons and harm to property or rights. Hence, the event meets the criteria for an AI Incident.

Amazon's AI chatbot, Q, might be in the throes of a mental health crisis

2023-12-02
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Amazon's chatbot Q) whose malfunction (hallucinations and leaking confidential data) has directly led to realized harms, including potential security breaches and harmful advice that could affect employee health. The AI's outputs have caused or could cause harm to individuals and organizational security, fitting the definition of an AI Incident. Although Amazon says it has not identified any security issues, the leaked internal communications confirm the AI's problematic behavior and its impact.

Amazon's AI Chatbot Q Has Some Serious Accuracy and Privacy Issues

2023-12-04
PC Mag Middle East
Why's our monitor labelling this an incident or hazard?
The AI system (Amazon's Q chatbot) is explicitly mentioned and is malfunctioning by hallucinating and leaking confidential data. This malfunction directly relates to potential violations of privacy and confidentiality, which fall under harm categories (c) violations of rights and (d) harm to property or communities. The leaked documents and employee reports indicate that harm is occurring or has occurred, not just a potential risk. Although Amazon disputes the claims, the presence of leaked confidential information and hallucinations causing inaccurate outputs supports classification as an AI Incident rather than a hazard or complementary information. The event involves the AI system's malfunction leading to harm, meeting the criteria for an AI Incident.

Amazon Q AI "hallucinating" and leaking confidential data -- Report

2023-12-04
MyBroadband
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned (Amazon Q AI chatbot) and is reported to be hallucinating and leaking confidential data, which is a malfunction. The leaked data includes sensitive internal information, which constitutes a violation of obligations under applicable law protecting intellectual property and confidentiality. This meets the criteria for an AI Incident as the AI system's malfunction has directly led to harm (data leakage). The denial by Amazon does not negate the reported internal severity and the documented leak. Therefore, this event qualifies as an AI Incident.

Amazon's AI Reportedly Suffering "Severe Hallucinations" and "Leaking Confidential Data"

2023-12-04
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that Amazon's AI system, Amazon Q, is suffering from severe hallucinations and leaking confidential data, including sensitive information like AWS data center locations and unreleased features. This constitutes a malfunction of the AI system leading to a breach of confidentiality and potential violation of privacy rights for businesses using the system. The harm is direct and realized, as engineers had to urgently address a severity 2 incident. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction and its impact on confidentiality and privacy.

Why Amazon Q Deserves Another Chance

2023-12-05
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
Amazon Q is an AI system (a generative AI chatbot) that is reported to have hallucinated and leaked sensitive internal data shortly after its preview launch. These issues have caused concerns about privacy and accuracy, which are harms related to data security and potentially to business operations. The article states that these leaks and hallucinations have already occurred, indicating realized harm. Although Amazon denies a security breach, the reported leaking of sensitive information and hallucinations affecting employee trust and data confidentiality meet the criteria for an AI Incident. The involvement is through the AI system's use and malfunction (hallucinations and data leaks). Hence, this is not merely a potential risk or complementary information but an incident where harm has materialized.

Amazon's new AI chatbot Amazon Q leaks confidential data, internal discount programs, and more - MSPoweruser

2023-12-02
MSPoweruser
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Amazon Q) whose malfunction (hallucinations and data leakage) has directly led to the exposure of confidential information and potential security risks, which constitute harm to property and possibly to customers' security (a form of harm to persons or groups). This fits the definition of an AI Incident because the AI system's use and malfunction have directly led to realized harm through data leakage and security vulnerabilities. The fact that the system is still in preview does not negate the occurrence of harm, as the leaked documents show actual incidents of data leakage and hallucinations causing risk.

Amazon's new AI has gone haywire and is 'leaking confidential data'

2023-12-05
TweakTown
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Amazon's AI chatbot Q) whose malfunction has directly led to harm in the form of leaking confidential data, which constitutes harm to property, intellectual property rights, and potentially business operations. The leak of sensitive corporate data is a clear violation of confidentiality and security, fitting the definition of an AI Incident due to the realized harm caused by the AI system's malfunction.

Amazon's AI chatbot, Q, might be in the throes of a mental health crisis

2023-12-02
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Amazon's generative AI chatbot Q) whose malfunction (hallucinations and data leaks) is directly causing harm by exposing confidential data and providing harmful advice. This constitutes violations of confidentiality and risks to customer accounts, which fall under harm to property and communities, as well as potential harm to individuals' health (stress or cardiac incidents). Therefore, this qualifies as an AI Incident.

channelnews : Amazon's AI Chatbot Q Has Accuracy & Privacy Problems

2023-12-04
ChannelNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Amazon's AI chatbot Q) that is reportedly malfunctioning by hallucinating and leaking confidential data. These problems can cause harm to users and organizations through misinformation and privacy violations, which are harms to rights and potentially to business operations. The leaked documents indicate that these harms are occurring or have occurred, qualifying this as an AI Incident. Amazon's denial does not negate the reported harms from the leaks. Therefore, this event meets the criteria for an AI Incident due to realized harms linked to the AI system's malfunction and use.

Amazon's Q Has 'Severe Hallucinations' and Leaks Confidential ... - Slashdot - Business Telegraph

2023-12-02
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Amazon's chatbot Q) whose malfunction (hallucinations and data leakage) has directly led to the exposure of confidential information, which constitutes harm related to privacy and security. Although Amazon downplays the issue, the leaked documents and employee concerns indicate realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction and the breach of confidentiality.

Amazon's AI chatbot Q suffers "severe hallucinations," leaking confidential data

2023-12-03
THE DECODER
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Amazon's chatbot Q) that is malfunctioning by hallucinating and leaking confidential data such as AWS data center locations and internal discount programs. This leakage of confidential information can be considered harm to property or organizational security and a breach of obligations under applicable law or internal policies protecting such information. Although Amazon states no security issue has been identified, the internal labeling of the incident as 'sev 2' and the urgent engineering attention indicate a recognized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction leaking sensitive data.