AI-Enabled Cyberattacks Surge, Slashing Breakout Times to Under 30 Minutes

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

CrowdStrike's 2026 Global Threat Report reveals an 89% surge in AI-enabled cyberattacks, with criminals using generative AI tools to automate and accelerate breaches. Average breakout time dropped to 29 minutes in 2025, with some attacks taking just seconds, leading to rapid data theft and compromised enterprise systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems both as tools used maliciously by adversaries (e.g., injecting malicious prompts into generative AI tools) and as targets of exploitation, leading to significant harms including financial theft, data breaches, and disruption of enterprise security. These harms fall under violations of property and harm to organizations, and the AI system's role is pivotal in enabling and accelerating these attacks. Therefore, this qualifies as an AI Incident.[AI generated]
AI principles
Safety; Robustness & digital security

Industries
Digital security; IT infrastructure and hosting

Affected stakeholders
Business

Harm types
Economic/Property

Severity
AI incident

Business function
ICT management and information security

AI system task
Content generation


Articles about this incident or hazard

2026 CrowdStrike Global Threat Report: AI Accelerates Adversaries and Reshapes the Attack Surface

2026-02-24
Barchart.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems both as tools used maliciously by adversaries (e.g., injecting malicious prompts into generative AI tools) and as targets of exploitation, leading to significant harms including financial theft, data breaches, and disruption of enterprise security. These harms fall under violations of property and harm to organizations, and the AI system's role is pivotal in enabling and accelerating these attacks. Therefore, this qualifies as an AI Incident.

AI Accelerates Adversaries and Reshapes the Attack Surface

2026-02-24
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by malicious actors to conduct cyber intrusions and theft, which have directly led to significant financial harm and breaches of security. The AI systems are weaponized to accelerate attacks, evade detection, and compromise critical infrastructure such as cloud environments. The harms described include property loss (cryptocurrency theft), disruption of cloud infrastructure, and violations of security and privacy rights. These meet the criteria for an AI Incident as the AI system's use has directly led to realized harms.

CrowdStrike 2026 Global Threat Report: Evasive Adversary Wields AI

2026-02-24
crowdstrike.com
Why's our monitor labelling this an incident or hazard?
The report explicitly states that adversaries used AI tools to increase attack volume and effectiveness, exploited AI development platforms, and caused significant financial theft and security breaches. These constitute direct harms caused by AI-enabled cyberattacks, fitting the definition of an AI Incident. The harms include financial loss, breaches of security, and violations of data integrity and confidentiality, which align with harms to property, communities, and potentially human rights. Hence, this is an AI Incident rather than a hazard or complementary information.

Stockwatch

2026-02-24
Stockwatch
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems being used maliciously and exploited, leading to realized harms such as financial theft (e.g., $1.46B cryptocurrency theft), rapid data exfiltration, and intrusion into cloud and enterprise systems. The AI system's development, use, and malfunction (including adversarial exploitation) have directly contributed to these harms. Therefore, this qualifies as an AI Incident under the framework, as the AI system's role is pivotal in causing significant harm.

2026 CrowdStrike Global Threat Report: AI Accelerates Adversaries and Reshapes the Attack Surface

2026-02-24
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems being used maliciously by adversaries to conduct cyberattacks that have resulted in actual harm, including data breaches, financial theft, and disruption of enterprise security. The use of AI to accelerate attacks and exploit AI platforms is central to the incident. This fits the definition of an AI Incident because the development and use of AI systems have directly led to violations of security and harm to property and communities (financial and data security). The report is not merely a warning or a general update but documents ongoing and realized harms caused by AI-enabled attacks.

AI-powered Cyber-Attacks Up Significantly, Warns CrowdStrike

2026-02-24
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (machine learning, large language models) by threat actors to conduct cyber-attacks that have already occurred and caused harm, such as espionage campaigns and phishing attacks. The harms include violations of privacy, security breaches, and potential disruption to targeted entities. The AI involvement is direct and integral to the attacks, fulfilling the criteria for an AI Incident. The report also discusses ongoing and future risks, but the primary focus is on realized harms from AI-enabled cyber-attacks.

CrowdStrike sees 89% rise in AI-enabled cyberattacks

2026-02-24
Telecoms.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use and misuse of AI systems by hackers to conduct cyberattacks that have already resulted in harm, such as data theft and network breaches. The AI systems' development and use have directly led to realized harms, including unauthorized access and exfiltration of sensitive information. The involvement of AI is central to the incident, as AI accelerates the attack process and enables new attack vectors. Hence, this is not merely a potential risk or complementary information but a clear AI Incident.

CrowdStrike Warns AI Is Fueling Faster Cyberattacks - They Can Now Spread In Under 30 Minutes

2026-02-24
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used maliciously to conduct cyberattacks, which have directly led to harm in the form of breaches, data theft, and compromised AI platforms. The use of AI to speed up attacks and exploit vulnerabilities constitutes the use and malfunction of AI systems causing harm to property and communities (corporate entities and their stakeholders). Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to realized harm.

AI Cyber-crime Up 89% as Breakout Time Falls to 29 Minutes

2026-02-24
Digit
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by cybercriminal groups and state actors to conduct attacks that have caused real harm, such as ransomware deployment, credential theft, and a record cryptocurrency theft. The AI systems are integral to the attacks and their rapid execution, indicating direct involvement in causing harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

2026 CrowdStrike Global Threat Report: AI Accelerates Adversaries and Reshapes the Attack Surface

2026-02-24
Crypto Reporter
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used maliciously by adversaries to conduct cyberattacks that have already occurred, causing harm to organizations by compromising their security and accelerating attack timelines. This constitutes direct harm through the use and exploitation of AI systems, fitting the definition of an AI Incident as the AI system's use and malfunction (exploitation) have directly led to harm (security breaches, data compromise, operational disruption).

How AI is helping criminals target more victims online

2026-02-24
Narooma News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI tools, large language models) being used by criminals to conduct cyberattacks that have directly led to harm, including theft of information and money. The involvement of AI in creating convincing phishing emails and malicious code, speeding up attacks, and exploiting vulnerabilities shows the AI system's use has directly contributed to these harms. The harms fall under harm to property and communities. The event is not merely a potential risk but describes ongoing and increasing AI-enabled criminal activity causing actual harm, fitting the definition of an AI Incident.

CrowdStrike report finds AI-driven attacks up 89% in 2025

2026-02-24
Back End News
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems by adversaries to conduct cyberattacks that have directly caused harm to organizations and individuals, including theft of data and cryptocurrency, exploitation of vulnerabilities, and disruption of cloud infrastructure. The AI systems' development and use have directly led to realized harms such as financial losses and security breaches. Therefore, this qualifies as an AI Incident under the framework, as the AI system's role is pivotal in causing significant harm.

CrowdStrike says AI is officially supercharging cyber attacks: Average breakout times hit just 29 minutes in 2025, 65% faster than in 2024 - and some attacks take just seconds

2026-02-24
IT Pro
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI systems are being exploited and misused by threat actors to conduct cyber attacks that have already caused harm, such as credential theft, ransomware deployment, and data exfiltration. The AI systems are both targets and tools in these attacks, with malicious prompt injections and AI-generated malware accelerating harmful outcomes. This constitutes an AI Incident because the AI system's use and misuse have directly led to realized harms including violations of security and privacy, disruption of enterprise operations, and harm to communities relying on secure digital infrastructure.

AI Arms Race Shrinks Breakout Time to 29 Minutes as Adversaries Turn GenAI on the Enterprise

2026-02-24
IT Security Guru
Why's our monitor labelling this an incident or hazard?
The report explicitly details the use of AI-enabled tools by malicious actors to conduct cyberattacks that have already caused harm, including a $1.46 billion cryptocurrency theft and rapid lateral movement within compromised networks. It also documents direct attacks on AI systems themselves, which have led to credential and cryptocurrency theft. The harms described include financial loss, disruption of enterprise operations, and breaches of security, which align with harm to property and communities. The AI systems' development, use, and malfunction (including exploitation) are central to these harms. Hence, this is an AI Incident rather than a hazard or complementary information, as the harms are realized and directly linked to AI system use and attacks.

Threat groups moving at record speeds, as AI helps scale attacks

2026-02-24
Cybersecurity Dive
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions AI systems being used by threat actors to automate and speed up malicious cyber operations, which have caused realized harm to organizations through data exfiltration and credential theft. The AI systems are integral to the attacks' success and have directly contributed to violations of security and privacy, which fall under harm to communities and property. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-enabled cyberattacks.

Threat Actors Weaponized AI Tools to Gain Full Domain Access within 30 Minutes

2026-02-24
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., ChatGPT, Gemini, Claude, Qwen2.5-Coder-32B-Instruct) being used by threat actors to generate malicious scripts, perform reconnaissance, and automate attacks. These AI-enabled attacks have directly led to harms such as unauthorized access, data exfiltration attempts, and operational disruption. The involvement of AI in the development and use of these attack tools and the documented incidents of harm meet the criteria for an AI Incident. The harms are realized, not just potential, and the AI system's role is pivotal in enabling the rapid and sophisticated nature of these cyberattacks.

Crowdstrike says cyber criminals are rapidly adopting AI for cyber crime

2026-02-25
cyberdaily.au
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (LLMs, AI coding assistants, AI image manipulation) being used by threat actors to carry out cybercrime and disinformation campaigns that have already caused harm. The harms include espionage, misinformation affecting elections, and cyber attacks, which fall under violations of rights and harm to communities. The AI systems are central to these harms, either by generating convincing phishing emails, enabling malware operations, or creating fake personas and deepfakes. This direct involvement of AI in causing realized harm meets the criteria for an AI Incident rather than a hazard or complementary information.