Amazon Warns Employees Against Sharing Confidential Data with ChatGPT

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Amazon has warned employees not to input confidential company information, including code, into ChatGPT due to concerns that such data could be used to train the AI and potentially leak proprietary information. The warning follows instances where ChatGPT outputs resembled internal Amazon data, highlighting risks of data privacy and intellectual property breaches.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (ChatGPT) and concerns about its use potentially leading to the exposure of confidential information, which could be a violation of intellectual property rights if it were to happen. However, the article describes a preventive warning rather than an actual incident of harm or breach. Therefore, this situation represents a plausible risk of harm rather than realized harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Privacy & data governance; Robustness & digital security; Accountability; Transparency & explainability; Respect of human rights

Industries
Digital security; IT infrastructure and hosting

Affected stakeholders
Business

Harm types
Economic/Property; Reputational; Human or fundamental rights

Severity
AI hazard

Business function
ICT management and information security; Research and development

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard

Amazon Warns Employees to Beware of ChatGPT

2023-01-26
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns about its use potentially leading to the exposure of confidential information, which could be a violation of intellectual property rights if it were to happen. However, the article describes a preventive warning rather than an actual incident of harm or breach. Therefore, this situation represents a plausible risk of harm rather than realized harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Amazon Begs Employees Not to Leak Corporate Secrets to ChatGPT

2023-01-25
Futurism
Why's our monitor labelling this an incident or hazard?
Amazon's internal warning about employees inputting confidential data into ChatGPT highlights a plausible risk that the AI system's use could lead to violations of intellectual property rights or confidentiality breaches. Since no actual leak or harm has been reported, but the risk is credible and recognized by the company, this event fits the definition of an AI Hazard rather than an Incident. The AI system's involvement is through its use by employees and the potential for future harm if confidential data is incorporated into AI training or outputs.
Amazon Employees Using ChatGPT for Coding and Customer Service Warned Not to Share Company Information With AI Chatbot

2023-01-27
Voicebot.ai
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by Amazon employees, with concerns about data privacy and potential unauthorized use of confidential information in AI training. Although employees are warned and the company has not banned ChatGPT, the possibility that sensitive data could be embedded in AI training data poses a credible risk of harm to intellectual property rights and corporate confidentiality. Since no actual data breach or harm has been reported, this situation fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if confidential information is leaked or misused. The mention of Amazon's own AI tools with safeguards indicates ongoing efforts to manage these risks but does not change the classification.
Amazon warns employees against use of ChatGPT: Report

2023-01-28
Techcircle
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT) and concerns about the use of confidential Amazon data as input to these AI systems, which could lead to intellectual property rights violations or confidentiality breaches. No actual harm or incident has been reported; rather, Amazon is warning employees to prevent such harm. This fits the definition of an AI Hazard, where the use of AI systems could plausibly lead to harm (here, data leakage and IP violations). There is no indication of realized harm or incident, so it is not an AI Incident. It is not merely complementary information because the main focus is on the warning about potential risks, not on responses to past incidents or general AI ecosystem updates. It is not unrelated because AI systems and their risks are central to the event.
Amazon warns employees against sharing confidential information with ChatGPT

2023-01-28
Employment News - WhatJobs News
Why's our monitor labelling this an incident or hazard?
The article describes Amazon warning employees not to share confidential information with ChatGPT due to risks of data leakage, which is a plausible future harm related to the use of an AI system. There is no evidence of actual harm or breach yet, only preventive warnings and internal safeguards. The AI system's involvement is clear, and the potential for violation of confidentiality policies and intellectual property rights exists if misuse occurs. Hence, this is best classified as an AI Hazard rather than an Incident or Complementary Information.
The Things Amazon Warned Employees Not to Share with ChatGPT

2023-01-28
GIZMODO JAPAN
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and concerns about the sharing of confidential corporate data with it. Although no direct harm has occurred yet, the risk that proprietary information could be incorporated into AI training data and subsequently leaked or exposed represents a plausible future harm. Therefore, this situation qualifies as an AI Hazard because it describes a credible risk stemming from the use of an AI system that could lead to violations of intellectual property rights or harm to corporate property if the risk materializes. The article does not describe an actual incident of harm but a preventive internal warning to mitigate potential future harm.