Uncontrolled Enterprise AI Use Increases Cybersecurity and Data Risks


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Lenovo survey of 6,000 employees worldwide reveals that over 70% use AI weekly, with up to a third doing so without IT oversight. This rise in 'shadow AI' expands attack surfaces, increases unmanaged risks, and heightens the likelihood of data exposure and cybersecurity threats due to insufficient governance and training.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article identifies that uncontrolled AI usage is already affecting business performance and increasing cybersecurity risk, implying indirect harms such as a higher likelihood of data breaches and operational disruption. However, it does not report a specific AI incident in which harm has materialized. Instead, it describes a broad risk landscape and the need for better governance and control to prevent harm. This fits the definition of an AI Hazard, as the uncontrolled AI usage could plausibly lead to AI incidents involving data breaches, compliance failures, or operational disruptions. The article also includes information about Lenovo's security approach, but this is part of the broader context and response rather than the main focus. Therefore, the event is best classified as an AI Hazard.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Business processes and support services
Digital security

Affected stakeholders
Business

Harm types
Human or fundamental rights
Economic/Property

Severity
AI hazard

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


70% of Enterprise AI is Uncontrolled, Driving Hidden Risk, Cost and Slower ROI

2026-04-27
Barchart.com

70% of Enterprise AI is Uncontrolled, Driving Hidden Risk, Cost and Slower ROI | Al Bawaba

2026-04-28
Al Bawaba
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems as it discusses AI usage across enterprises and the associated risks. However, it does not describe any specific event where AI use has directly or indirectly caused harm such as data breaches, operational disruption, or legal violations. Instead, it outlines the potential for such harms due to lack of governance and control, making it a plausible risk scenario rather than a realized incident. The main focus is on raising awareness of these risks and presenting a governance solution, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Up to a Third of Enterprise AI Use is Unmanaged, Finds Lenovo

2026-04-27
Digit
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems being used by employees beyond IT control, creating unmanaged AI environments that increase attack surfaces and cybersecurity risks. These risks could plausibly lead to AI incidents such as data breaches, scams, or operational disruptions. However, no actual harm or incident is reported as having occurred yet. The focus is on the potential for harm due to lack of governance and training, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems and their risks are central to the discussion.

70% of Enterprise AI is Uncontrolled, Driving Hidden Risk, Cost and Slower ROI - Businessfortnight

2026-04-27
Businessfortnight
Why's our monitor labelling this an incident or hazard?
The article identifies credible risks stemming from uncontrolled AI use in enterprises, such as increased cybersecurity threats and data exposure, which could plausibly lead to AI incidents. However, it does not report any actual harm or incident caused by AI systems. The focus is on raising awareness of these risks and describing governance challenges and mitigation strategies. Therefore, the event qualifies as an AI Hazard because it describes circumstances where AI use could plausibly lead to harm, but no specific AI Incident has occurred. It is not Complementary Information since it is not updating or responding to a prior incident, nor is it unrelated as it clearly involves AI systems and their risks.