Enterprise Use of Generative AI Tools Raises Data Security Risks


The information displayed in the AIM (the OECD's AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

Research by Harmonic and Netskope reveals a surge in enterprise use of generative AI tools, with employees uploading sensitive data and adopting unsanctioned on-premise AI platforms. This widespread, unregulated use increases the risk of data exposure and security breaches, posing significant challenges for IT security teams.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses generative AI systems (on-premise GenAI platforms) being used in ways that increase security risks, including potential data leakage or theft. Although no actual harm is reported, the situation clearly presents a credible risk of harm due to the lack of authentication and guardrails in these unsanctioned AI deployments. This fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to incidents involving harm to property or communities (through data breaches).[AI generated]
AI principles
Privacy & data governance, Robustness & digital security, Accountability, Transparency & explainability, Respect of human rights, Safety

Industries
Digital security, IT infrastructure and hosting, Business processes and support services

Affected stakeholders
Business

Harm types
Human or fundamental rights, Economic/Property, Reputational

Severity
AI hazard

Business function
ICT management and information security

AI system task
Content generation, Interaction support/chatbots


Articles about this incident or hazard


AI-led disruption opens new career paths, not just trigger job losses, say BCG execs

2025-08-05
Economic Times
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by AI systems, nor does it report a specific event where AI use or malfunction led to injury, rights violations, or other harms. It focuses on the evolving role of AI in work, the need for training, and governance to mitigate risks. Therefore, it is best classified as Complementary Information, as it provides context and understanding about AI's societal and organizational impact without reporting a new AI Incident or AI Hazard.

The Open University

2025-08-01
The Open University
Why's our monitor labelling this an incident or hazard?
The content focuses on principles, recommendations, and evolving guidelines for the use of Generative AI in academia. There is no mention of any actual harm, incident, or plausible future harm caused by AI systems. It is an example of governance or societal response to AI developments, aimed at ensuring responsible use and integrity, which fits the definition of Complementary Information.

Proliferation of on-premise GenAI platforms is widening security ri...

2025-08-04
Computer Weekly

Your employees uploaded over a gig of files to GenAI tools last quarter - IT Security News

2025-08-05
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI tools) where sensitive data is being uploaded, which could plausibly lead to harm such as data breaches or violations of privacy and security. Although no specific harm is reported as having occurred yet, the exposure of sensitive data through AI tools represents a credible risk of future harm. Therefore, this qualifies as an AI Hazard rather than an Incident or Complementary Information.