AI Adoption Drives Surge in Cloud Security Attacks, Report Finds

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Palo Alto Networks' 2025 State of Cloud Security Report reveals that 99% of organizations experienced attacks on their AI systems in the past year. The rapid adoption of enterprise AI and GenAI-assisted coding has expanded the cloud attack surface, driving data breaches and security vulnerabilities that are accumulating faster than current security measures can address.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that 99% of organizations experienced attacks on AI systems in the past year, indicating realized harm. The involvement of AI systems is clear, including generative AI-assisted coding and AI workloads in the cloud. The harms include security breaches, data theft, and operational disruption, which are direct consequences of AI system use and expansion. The report also discusses the inadequacy of current security measures to keep pace with AI-driven threats, confirming the materialization of harm rather than just potential risk. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Robustness & digital security; Privacy & data governance; Respect of human rights

Industries
Digital security; IT infrastructure and hosting

Affected stakeholders
Business; Consumers

Harm types
Economic/Property; Reputational; Human or fundamental rights

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard

Palo Alto Networks Report Reveals AI Is Driving a Massive Cloud Attack Surface Expansion

2025-12-16
Barchart.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that 99% of organizations experienced attacks on AI systems in the past year, indicating realized harm. The involvement of AI systems is clear, including generative AI-assisted coding and AI workloads in the cloud. The harms include security breaches, data theft, and operational disruption, which are direct consequences of AI system use and expansion. The report also discusses the inadequacy of current security measures to keep pace with AI-driven threats, confirming the materialization of harm rather than just potential risk. Hence, this is an AI Incident rather than a hazard or complementary information.

Palo Alto Networks Report Reveals AI Is Driving a Massive Cloud Attack Surface Expansion

2025-12-16
IT News Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI apps and services in cloud environments) and discusses how their use has directly led to a significant increase in cybersecurity attacks, which constitute harm to property and potentially to organizations' operations. The attacks on AI systems and the resulting security vulnerabilities represent an AI Incident because the development and use of AI systems have directly contributed to realized harm through increased exposure to cyberattacks and security risks. Therefore, this is classified as an AI Incident rather than a hazard or complementary information, as the harm is occurring and documented.

Palo Alto Networks Report Reveals AI Is Driving a Massive Cloud Attack Surface Expansion

2025-12-16
IT News Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being targeted and attacked, with 99% of organizations experiencing attacks on AI apps and services. The attacks exploit AI-driven cloud infrastructure and insecure AI-generated code, leading to security breaches and risks to critical cloud environments. This constitutes direct harm linked to the use and deployment of AI systems, fulfilling the criteria for an AI Incident. The report does not merely warn of potential risks but documents realized attacks and their consequences, distinguishing it from an AI Hazard or Complementary Information.

Where Cloud Security Stands Today and Where AI Breaks It

2025-12-16
Palo Alto Networks Blog
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems running in production environments being attacked, with 99% of organizations reporting at least one attack on their AI systems within the past year. It describes realized harms including data exfiltration, insecure code reaching production, and exploitation of cloud infrastructure vulnerabilities. These harms fall under the definitions of AI Incident, as the AI systems' use and the rapid adoption of AI have directly or indirectly led to these security breaches and harms. The detailed statistics and examples confirm that the harms are materialized, not just potential. Therefore, the classification as an AI Incident is justified.

Palo Alto Networks report says AI is driving massive cloud attack surface expansion

2025-12-16
Fierce Network
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being attacked and AI-assisted code generation leading to security vulnerabilities, which are harms related to AI use. However, it does not describe a specific event where an AI system's malfunction or misuse directly caused harm or disruption. Instead, it reports survey findings and general trends about AI-driven cloud security risks, which supports understanding of AI-related threats and informs stakeholders. This fits the definition of Complementary Information, as it provides context and updates on AI-related risks without detailing a discrete AI Incident or Hazard.

How Cloud-Native Engineering and AI Are Transforming Modern Software Security

2025-12-17
Silicon India
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or direct/indirect incident caused by AI systems, nor does it describe a specific event where AI use could plausibly lead to harm imminently. Instead, it discusses general security challenges, mitigation strategies, and the potential of AI in cloud security, along with cautionary advice. This aligns with the definition of Complementary Information, as it provides context, expert perspective, and governance considerations without reporting a new AI Incident or AI Hazard.

AI Fuelling "Unprecedented" Cloud Attacks, Warns Palo Alto

2025-12-17
Digit
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as it discusses AI applications and services in cloud environments being targeted by sophisticated cyberattacks. The harms described include data leaks, credential theft, and exploitation of AI system vulnerabilities, which constitute violations of security and privacy, thus harming organizations and potentially individuals. Since these harms have already occurred (99% of organizations experienced attacks), this qualifies as an AI Incident. The article does not merely warn of potential future harm but documents ongoing realized harm linked to AI system use and deployment.

AI fuels escalating cloud security risks, Palo Alto Networks report reveals

2025-12-17
SC Media
Why's our monitor labelling this an incident or hazard?
The report explicitly states that nearly all surveyed organizations have experienced attacks on AI systems, including data exfiltration and credential compromises, which are harms to property and potentially to organizations' operations. The involvement of AI systems in these attacks and vulnerabilities is clear, as is the direct link to realized harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, since the harms are occurring and the AI systems' role is pivotal in these security breaches.

Cloud security teams are in turmoil as attack surfaces expand at an alarming rate

2025-12-18
channelpro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI-assisted coding) whose outputs (insecure code) have directly led to increased successful attacks on cloud infrastructure, causing harm through breaches, credential theft, and data exfiltration. The harms are realized and ongoing, with security teams overwhelmed and incidents occurring rapidly. The AI system's use is a contributing factor to these harms, fulfilling the criteria for an AI Incident. The article does not merely warn of potential harm (which would be a hazard) nor focus on responses or updates (which would be complementary information).

Google, Palo Alto may have struck Google Cloud's largest security services deal ever: Why it is 'worrying' for Amazon, Microsoft

2025-12-20
ETCISO.in
Why's our monitor labelling this an incident or hazard?
The article details a large-scale AI-related business deal but does not describe any direct or indirect harm caused by AI system development, use, or malfunction. There is no indication of injury, rights violations, disruption, or other harms linked to the AI systems in question. The AI involvement is in the context of developing new security solutions and infrastructure, which is a positive development rather than a hazard or incident. Therefore, this is best classified as Complementary Information, providing context on AI ecosystem developments and market dynamics without reporting an AI Incident or AI Hazard.

Google Cloud Strikes Nearly $10 Billion AI Security Deal With Palo Alto Networks

2025-12-19
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of cybersecurity solutions and AI infrastructure, but it does not describe any direct or indirect harm caused by AI system development, use, or malfunction. There is no indication of an AI Incident or AI Hazard since no harm has occurred or is plausibly imminent based on the article. The focus is on corporate strategy, investment, and product development, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments without reporting new harm or credible future harm.

New research reveals AI is fueling an 'unprecedented surge in cloud security risks'

2025-12-19
TechRadar
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI-powered cloud services, large language models) and discusses how their deployment and management have directly led to increased cloud security incidents, particularly identity-related breaches. The harms described include disruption to cloud infrastructure security and increased vulnerability to adversaries, which fits the definition of harm to critical infrastructure management and operation. Since actual security incidents have occurred linked to AI system use and misconfiguration, this qualifies as an AI Incident rather than a hazard or complementary information. The report's focus on realized security risks and incidents caused or exacerbated by AI system deployment justifies this classification.

Palo Alto, Google Cloud Strike Multibillion-Dollar Deal

2025-12-19
TechRepublic
Why's our monitor labelling this an incident or hazard?
The article centers on a business partnership and security enhancement initiative addressing the rising threat of attacks on AI infrastructure. While it acknowledges that attacks on AI systems are widespread, it does not detail any particular AI Incident or AI Hazard event occurring as a result of AI system development, use, or malfunction. The main content is about the companies' response and investment to improve AI security, which fits the definition of Complementary Information as it provides context and governance response to AI-related risks without describing a new harm or plausible future harm event itself.

Cloud Security Operations with AI-Driven Threat Detection

2025-12-19
dzone.com
Why's our monitor labelling this an incident or hazard?
The content is a general discussion about AI applications in cloud security and threat detection, emphasizing potential and ongoing benefits without reporting any realized harm, incident, or specific threat caused by AI systems. There is no description of an AI Incident or AI Hazard occurring, nor is there a focus on responses to such events. Therefore, the article fits best as Complementary Information, providing context and understanding of AI's role in cloud security rather than reporting a new incident or hazard.

Analysis: Google Cloud Inks An Interesting Deal With Palo Alto Networks

2025-12-19
CRN
Why's our monitor labelling this an incident or hazard?
The article centers on an expanded collaboration and acquisition in the AI security sector, involving AI systems as part of cloud security platforms. However, it does not describe any event where AI systems have caused or could plausibly cause harm. The focus is on business developments and strategic positioning rather than on any harm or risk from AI system use or malfunction. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI ecosystem developments without reporting an AI Incident or AI Hazard.

Rapid AI Adoption in Cloud Heightens Security Risks and Breaches

2025-12-19
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered cloud services and AI agents causing data leaks and security breaches, which are harms to property and communities. The breaches have already occurred, indicating realized harm rather than potential harm. The AI systems' deployment and use have directly contributed to these harms through misconfigurations, excessive permissions, and vulnerabilities in AI workloads. Hence, this is an AI Incident rather than a hazard or complementary information. The detailed description of incidents and their consequences supports this classification.

Google Cloud's $10 Billion Bet on Palo Alto Shields AI Era

2025-12-19
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI-driven cyber threats, AI runtime security tools) and their use in cybersecurity. However, the article does not describe any direct or indirect harm caused by AI systems, nor does it report a plausible imminent risk of harm from AI systems themselves. Instead, it details a large-scale security partnership aimed at mitigating AI-related threats, representing a governance and technical response to AI risks. This fits the definition of Complementary Information, as it updates on societal and technical responses to AI threats without reporting a new AI Incident or AI Hazard.

Palo Alto Networks And Google Cloud Team Up On AI Security

2025-12-19
Finimize
Why's our monitor labelling this an incident or hazard?
The article discusses a collaboration aimed at improving AI security infrastructure and safeguarding AI systems, but it does not report any actual harm or incidents caused by AI systems, nor does it describe a specific event where AI caused or could plausibly cause harm. Instead, it highlights a strategic initiative to prevent potential AI-related security issues. Therefore, it is best classified as Complementary Information, as it provides context and updates on governance and technical responses to AI risks without describing a concrete AI Incident or AI Hazard.

Palo Alto Networks Fuels Google Cloud Pact to Guard AI Stack

2025-12-19
DataBreachToday
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of AI security infrastructure, but it does not describe any incident or hazard involving harm or plausible harm. It is a news report about a business deal and strategic collaboration to improve AI security capabilities. There is no direct or indirect harm, nor a credible risk of harm described. Therefore, it fits the category of Complementary Information, as it provides context and updates on AI ecosystem developments and governance-related responses without reporting an AI Incident or AI Hazard.

Palo Alto Networks and Google Cloud Expand Partnership to Help Customers Build and Secure AI with Confidence - InfotechLead

2025-12-19
InfotechLead
Why's our monitor labelling this an incident or hazard?
The article does not report any actual harm or incident caused by AI systems, nor does it describe a specific event where AI systems led to injury, rights violations, or other harms. Instead, it discusses a strategic partnership and security measures designed to mitigate AI-related risks. This fits the definition of Complementary Information, as it provides context and updates on governance and technical responses to AI security challenges, without describing a new AI Incident or AI Hazard.

AI Systems Becoming Attack Targets? Industry: Enterprise Cybersecurity Risk Enters a New Level

2026-01-07
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems as targets of cyberattacks and discusses the increased cybersecurity risks associated with AI adoption in enterprises. While it highlights that 99% of surveyed companies experienced attacks targeting AI systems, it does not provide details of specific incidents causing direct or indirect harm. The focus is on the elevated risk level and the potential for harm due to vulnerabilities and attack vectors related to AI systems. Therefore, the event describes a credible and plausible risk of harm stemming from AI system use and integration, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

AI Accelerates Cloud Attack Surface Expansion: Palo Alto Networks Report Reveals New Enterprise Security Risks | yam News

2026-01-07
蕃新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as it discusses attacks on AI applications and services in cloud environments. The harms include realized cybersecurity breaches and operational risks to critical cloud infrastructure, which align with harm categories (b) disruption of critical infrastructure and (e) other significant harms where AI's role is pivotal. The AI systems' use and their vulnerabilities have directly led to these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms are occurring and linked to AI system exploitation.

Palo Alto Networks Report Reveals: AI Is Driving a Massive Expansion of the Cloud Attack Surface

2026-01-07
123.briian.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, both as targets of attacks and as tools used by attackers to generate insecure code and accelerate attack pace. The harm is realized in the form of increased cloud security risks, vulnerability accumulation, and operational challenges for security teams, which can lead to breaches or disruptions. This fits the definition of an AI Incident because the development, use, or malfunction of AI systems has directly or indirectly led to harm to property and communities (organizations and their data), and disruption of critical infrastructure management (cloud security operations).

AI Systems Becoming Attack Targets? Industry: Enterprise Cybersecurity Risk Enters a New Level

2026-01-07
工商時報
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems as it discusses AI models and AI-driven development within cloud infrastructures. It focuses on the use and deployment of AI systems and the associated cybersecurity risks. Although no actual harm event is reported, the widespread targeting of AI systems by attackers and the accumulation of vulnerabilities in cloud environments plausibly could lead to AI Incidents such as data breaches, operational disruptions, or other harms. Therefore, this situation constitutes an AI Hazard, as it describes credible risks and potential future harms stemming from AI system use and vulnerabilities in enterprise cloud environments.