Widespread Corporate Data Leaks via ChatGPT Use by Employees

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A study by LayerX Security found that 77% of employees paste sensitive company data, including PII and payment card information, into generative AI tools like ChatGPT, often through unmanaged personal accounts. This widespread practice exposes organizations to significant data leakage, regulatory violations, and compliance risk.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (generative AI tools such as ChatGPT and Microsoft Copilot) whose use by employees has directly led to data leaks. These leaks represent a breach of confidentiality and intellectual property rights, which are protected under applicable laws. The harm is realized and ongoing, as the report states these leaks are now the leading source of workplace data leaks and often go unnoticed by security systems. Therefore, this qualifies as an AI Incident due to the direct link between AI system use and harm to company data and rights.[AI generated]
AI principles
Privacy & data governance, Accountability, Robustness & digital security

Industries
Digital security

Affected stakeholders
Consumers, Business

Harm types
Economic/Property, Reputational, Human or fundamental rights

Severity
AI incident

Business function
ICT management and information security

AI system task
Interaction support/chatbots, Content generation


Articles about this incident or hazard

Employees are accidentally leaking company data through ChatGPT, report warns

2025-10-10
MoneyControl

Watch out - your workers might be pasting company secrets into ChatGPT

2025-10-08
TechRadar
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (generative AI tools such as ChatGPT) and their use by employees. The risks described relate to potential data leakage and compliance violations from unmonitored use, which could plausibly lead to violations of privacy rights and harm to company property and data security. However, no direct or indirect harm has been reported as having occurred yet. Therefore, this situation fits the definition of an AI Hazard, as the development and use of AI systems could plausibly lead to an AI Incident involving data leakage and compliance breaches.

77% of Employees Leak Data via ChatGPT, Report Finds

2025-10-10
TechRepublic
Why's our monitor labelling this an incident or hazard?
The report explicitly states that employees are sharing sensitive data through AI tools, leading to unauthorized data movement and regulatory compliance risks. The AI system's use is directly linked to these harms, as the generative AI tools are the medium through which data leaks occur. The harms are realized and significant, ranging from corporate data exposure to potential breaches of privacy law. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to property (corporate data).

Employees regularly paste company secrets into ChatGPT

2025-10-07
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves generative AI systems (ChatGPT and similar tools) being used by employees in ways that expose sensitive data, which constitutes a violation of privacy and potentially of intellectual property rights. The leakage of PII and PCI data through AI tool usage directly harms individuals and organizations. The mention of Samsung's ban following a data leak incident confirms that harm has occurred or is ongoing. Uncontrolled, shadow-IT use of AI tools increases both the risk and the actual occurrence of harm. Hence, this qualifies as an AI Incident due to realized harm stemming from the use of AI systems.

Study finds 77% of employees leak data through ChatGPT

2025-10-10
Business Insurance
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems (ChatGPT) in a way that leads to unauthorized data leaks, which constitutes a violation of intellectual property and confidentiality rights. Since the AI system's use directly contributes to harm (data leakage), this qualifies as an AI Incident under the framework.

77% of Employees Share Company Secrets on ChatGPT Compromising Enterprise Policies

2025-10-09
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems (e.g., ChatGPT) by employees to paste and share sensitive corporate data, including personally identifiable information and payment card data, through unmanaged personal accounts. This behavior directly leads to harm in the form of violations of data protection regulations (e.g., GDPR, HIPAA), breaches of enterprise policies, and exposure of sensitive information, which constitute violations of legal obligations and harm to property and communities. The AI system's use is central to the incident as it is the primary vector for unauthorized data exfiltration. Therefore, this qualifies as an AI Incident.