AI-Enabled Cyberattacks Surge, Slashing Breakout Times to Under 30 Minutes

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

CrowdStrike's 2026 Global Threat Report reveals an 89% surge in AI-enabled cyberattacks, with criminals using generative AI tools to automate and accelerate breaches. Average breakout time dropped to 29 minutes in 2025, with some attacks taking just seconds, leading to rapid data theft and compromised enterprise systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems both as tools used maliciously by adversaries (e.g., injecting malicious prompts into generative AI tools) and as targets of exploitation, leading to significant harms including financial theft, data breaches, and disruption of enterprise security. These harms fall under violations of property and harm to organizations, and the AI system's role is pivotal in enabling and accelerating these attacks. Therefore, this qualifies as an AI Incident.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Digital security
IT infrastructure and hosting

Affected stakeholders
Business

Harm types
Economic/Property

Severity
AI incident

Business function
ICT management and information security

AI system task
Content generation


Articles about this incident or hazard

2026 CrowdStrike Global Threat Report: AI Accelerates Adversaries and Reshapes the Attack Surface

2026-02-24
Barchart.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems both as tools used maliciously by adversaries (e.g., injecting malicious prompts into generative AI tools) and as targets of exploitation, leading to significant harms including financial theft, data breaches, and disruption of enterprise security. These harms fall under violations of property and harm to organizations, and the AI system's role is pivotal in enabling and accelerating these attacks. Therefore, this qualifies as an AI Incident.
AI Accelerates Adversaries and Reshapes the Attack Surface

2026-02-24
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by malicious actors to conduct cyber intrusions and theft, which have directly led to significant financial harm and breaches of security. The AI systems are weaponized to accelerate attacks, evade detection, and compromise critical infrastructure such as cloud environments. The harms described include property loss (cryptocurrency theft), disruption of cloud infrastructure, and violations of security and privacy rights. These meet the criteria for an AI Incident as the AI system's use has directly led to realized harms.
CrowdStrike 2026 Global Threat Report: Evasive Adversary Wields AI

2026-02-24
crowdstrike.com
Why's our monitor labelling this an incident or hazard?
The report explicitly states that adversaries used AI tools to increase attack volume and effectiveness, exploited AI development platforms, and caused significant financial theft and security breaches. These constitute direct harms caused by AI-enabled cyberattacks, fitting the definition of an AI Incident. The harms include financial loss, breaches of security, and violations of data integrity and confidentiality, which align with harms to property, communities, and potentially human rights. Hence, this is an AI Incident rather than a hazard or complementary information.
Stockwatch

2026-02-24
Stockwatch
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems being used maliciously and exploited, leading to realized harms such as financial theft (e.g., $1.46B cryptocurrency theft), rapid data exfiltration, and intrusion into cloud and enterprise systems. The AI system's development, use, and malfunction (including adversarial exploitation) have directly contributed to these harms. Therefore, this qualifies as an AI Incident under the framework, as the AI system's role is pivotal in causing significant harm.
2026 CrowdStrike Global Threat Report: AI Accelerates Adversaries and Reshapes the Attack Surface

2026-02-24
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems being used maliciously by adversaries to conduct cyberattacks that have resulted in actual harm, including data breaches, financial theft, and disruption of enterprise security. The use of AI to accelerate attacks and exploit AI platforms is central to the incident. This fits the definition of an AI Incident because the development and use of AI systems have directly led to violations of security and harm to property and communities (financial and data security). The report is not merely a warning or a general update but documents ongoing and realized harms caused by AI-enabled attacks.
AI-powered Cyber-Attacks Up Significantly, Warns CrowdStrike

2026-02-24
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (machine learning, large language models) by threat actors to conduct cyber-attacks that have already occurred and caused harm, such as espionage campaigns and phishing attacks. The harms include violations of privacy, security breaches, and potential disruption to targeted entities. The AI involvement is direct and integral to the attacks, fulfilling the criteria for an AI Incident. The report also discusses ongoing and future risks, but the primary focus is on realized harms from AI-enabled cyber-attacks.
CrowdStrike sees 89% rise in AI-enabled cyberattacks

2026-02-24
Telecoms.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use and misuse of AI systems by hackers to conduct cyberattacks that have already resulted in harm, such as data theft and network breaches. The AI systems' development and use have directly led to realized harms, including unauthorized access and exfiltration of sensitive information. The involvement of AI is central to the incident, as AI accelerates the attack process and enables new attack vectors. Hence, this is not merely a potential risk or complementary information but a clear AI Incident.
CrowdStrike Warns AI Is Fueling Faster Cyberattacks - They Can Now Spread In Under 30 Minutes

2026-02-24
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used maliciously to conduct cyberattacks, which have directly led to harm in the form of breaches, data theft, and compromised AI platforms. The use of AI to speed up attacks and exploit vulnerabilities constitutes the use and malfunction of AI systems causing harm to property and communities (corporate entities and their stakeholders). Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to realized harm.
AI Cyber-crime Up 89% as Breakout Time Falls to 29 Minutes

2026-02-24
Digit
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by cybercriminal groups and state actors to conduct attacks that have caused real harm, such as ransomware deployment, credential theft, and a record cryptocurrency theft. The AI systems are integral to the attacks and their rapid execution, indicating direct involvement in causing harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
2026 CrowdStrike Global Threat Report: AI Accelerates Adversaries and Reshapes the Attack Surface

2026-02-24
Crypto Reporter
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used maliciously by adversaries to conduct cyberattacks that have already occurred, causing harm to organizations by compromising their security and accelerating attack timelines. This constitutes direct harm through the use and exploitation of AI systems, fitting the definition of an AI Incident as the AI system's use and malfunction (exploitation) have directly led to harm (security breaches, data compromise, operational disruption).
How AI is helping criminals target more victims online

2026-02-24
Narooma News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI tools, large language models) being used by criminals to conduct cyberattacks that have directly led to harm, including theft of information and money. The involvement of AI in creating convincing phishing emails and malicious code, speeding up attacks, and exploiting vulnerabilities shows the AI system's use has directly contributed to these harms. The harms fall under harm to property and communities. The event is not merely a potential risk but describes ongoing and increasing AI-enabled criminal activity causing actual harm, fitting the definition of an AI Incident.
CrowdStrike report finds AI-driven attacks up 89% in 2025 | Back End News

2026-02-24
Back End News
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems by adversaries to conduct cyberattacks that have directly caused harm to organizations and individuals, including theft of data and cryptocurrency, exploitation of vulnerabilities, and disruption of cloud infrastructure. The AI systems' development and use have directly led to realized harms such as financial losses and security breaches. Therefore, this qualifies as an AI Incident under the framework, as the AI system's role is pivotal in causing significant harm.
CrowdStrike says AI is officially supercharging cyber attacks: Average breakout times hit just 29 minutes in 2025, 65% faster than in 2024 - and some attacks take just seconds

2026-02-24
IT Pro
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI systems are being exploited and misused by threat actors to conduct cyber attacks that have already caused harm, such as credential theft, ransomware deployment, and data exfiltration. The AI systems are both targets and tools in these attacks, with malicious prompt injections and AI-generated malware accelerating harmful outcomes. This constitutes an AI Incident because the AI system's use and misuse have directly led to realized harms including violations of security and privacy, disruption of enterprise operations, and harm to communities relying on secure digital infrastructure.
AI Arms Race Shrinks Breakout Time to 29 Minutes as Adversaries Turn GenAI on the Enterprise

2026-02-24
IT Security Guru
Why's our monitor labelling this an incident or hazard?
The report explicitly details the use of AI-enabled tools by malicious actors to conduct cyberattacks that have already caused harm, including a $1.46 billion cryptocurrency theft and rapid lateral movement within compromised networks. It also documents direct attacks on AI systems themselves, which have led to credential and cryptocurrency theft. The harms described include financial loss, disruption of enterprise operations, and breaches of security, which align with harm to property and communities. The AI systems' development, use, and malfunction (including exploitation) are central to these harms. Hence, this is an AI Incident rather than a hazard or complementary information, as the harms are realized and directly linked to AI system use and attacks.
Threat groups moving at record speeds, as AI helps scale attacks

2026-02-24
Cybersecurity Dive
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions AI systems being used by threat actors to automate and speed up malicious cyber operations, which have caused realized harm to organizations through data exfiltration and credential theft. The AI systems are integral to the attacks' success and have directly contributed to violations of security and privacy, which fall under harm to communities and property. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-enabled cyberattacks.
Threat Actors Weaponized AI Tools to Gain Full Domain Access within 30 Minutes

2026-02-24
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., ChatGPT, Gemini, Claude, Qwen2.5-Coder-32B-Instruct) being used by threat actors to generate malicious scripts, perform reconnaissance, and automate attacks. These AI-enabled attacks have directly led to harms such as unauthorized access, data exfiltration attempts, and operational disruption. The involvement of AI in the development and use of these attack tools and the documented incidents of harm meet the criteria for an AI Incident. The harms are realized, not just potential, and the AI system's role is pivotal in enabling the rapid and sophisticated nature of these cyberattacks.
Crowdstrike says cyber criminals are rapidly adopting AI for cyber crime

2026-02-25
cyberdaily.au
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (LLMs, AI coding assistants, AI image manipulation) being used by threat actors to carry out cybercrime and disinformation campaigns that have already caused harm. The harms include espionage, misinformation affecting elections, and cyber attacks, which fall under violations of rights and harm to communities. The AI systems are central to these harms, either by generating convincing phishing emails, enabling malware operations, or creating fake personas and deepfakes. This direct involvement of AI in causing realized harm meets the criteria for an AI Incident rather than a hazard or complementary information.
CrowdStrike says attackers are moving through networks in under 30 minutes

2026-02-25
TechRadar
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems (specifically GenAI) are being used by attackers to accelerate harmful cyber intrusions that have already occurred, including data theft and ransomware deployment. These activities constitute harm to property and disruption of critical infrastructure (cloud services). The AI system's use in these attacks is a direct contributing factor to realized harm, meeting the criteria for an AI Incident. The article does not merely warn of potential harm but documents ongoing malicious use of AI in cyberattacks causing actual damage.
AI accelerates adversaries and reshapes the attack surface

2026-02-26
ETCISO.in
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI is leveraged by malicious actors to conduct and accelerate cyberattacks that have already caused harm, such as credential theft, ransomware deployment, and cryptocurrency theft. The AI systems are integral to these attacks, either as tools or targets, and the harms described (theft, intrusion, exploitation) fall under harm to property and communities. This meets the definition of an AI Incident because the AI system's use has directly led to realized harms, not just potential risks.
2026 CrowdStrike Global Threat Report: AI accelerates adversaries, reshapes Attack Surface - Manila Standard

2026-02-26
Manila Standard
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI systems are being used by adversaries to conduct cyberattacks that have already caused significant harm, including a $1.46 billion cryptocurrency theft and rapid data breaches. The AI involvement is clear in accelerating attack speed and enabling new attack vectors such as malicious prompt injections into generative AI tools. The harms include financial loss, breach of security, and disruption of critical infrastructure components (cloud and SaaS applications). These meet the criteria for an AI Incident as the AI system's use has directly led to realized harms.
2026 Crowdstrike global threat report

2026-02-26
Express Computer
Why's our monitor labelling this an incident or hazard?
The report explicitly details how AI systems are being exploited and weaponized by adversaries, leading to realized harms such as financial theft, data breaches, and operational disruptions. The involvement of AI in accelerating attacks and being targeted itself meets the criteria for an AI Incident, as the AI system's use and misuse have directly led to significant harms including harm to property, disruption of infrastructure, and violations of security. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Artificial intelligence makes it possible to execute cyberattacks in just 27 seconds

2026-02-25
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., large language models, AI-generated scripts) being used by cybercriminal groups to automate and accelerate attacks, resulting in actual breaches and data theft. These harms fall under the definition of AI Incident as the AI system's use has directly led to harm to property and communities. The detailed examples of attacks and their consequences confirm that harm has materialized, not just a potential risk. Hence, this is classified as an AI Incident.
In just 27 seconds your account can be emptied: AI gives wings to cybercrime

2026-02-25
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used by cybercriminal groups to conduct and accelerate cyberattacks that have already caused harm, such as rapid unauthorized access to corporate infrastructure, data theft, and ransomware deployment. These harms fall under property harm and disruption of critical infrastructure management. The AI involvement is clear and central to the incident, with concrete examples of AI-enabled malware and attack techniques. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
AI has increased criminals' effectiveness, experts warn

2026-02-25
www.elcolombiano.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used by cybercriminal groups to conduct attacks that have successfully breached corporate infrastructures, reduced response times, and stolen credentials. The harms include disruption of critical infrastructure and harm to property through unauthorized access and data theft. The AI systems' development and use have directly led to these harms, fulfilling the criteria for an AI Incident. The detailed examples of AI-powered malware and AI-generated attack tools confirm the AI system involvement and realized harm.
AI accelerates cybercriminals' capabilities: the attack...

2026-02-24
Notimérica
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used by cybercriminals to conduct attacks that have already occurred, such as automated reconnaissance, credential theft, and ransomware deployment. These activities constitute direct harm to organizations and potentially to critical infrastructure, fitting the definition of an AI Incident. The involvement of AI is clear and central to the increased scale and speed of attacks, and the harms are realized, not just potential. Hence, this event qualifies as an AI Incident.
Artificial intelligence drives up the global speed of cyberattacks

2026-02-27
DiarioDigitalRD
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models, AI-generated scripts, AI-generated fake identities) being used by cybercriminal groups to conduct and accelerate cyberattacks that have already occurred, causing harm through unauthorized access, data theft, and disruption of infrastructure. The harms include violations of security and property, and the AI's role is pivotal in enabling these attacks. Hence, this is an AI Incident rather than a hazard or complementary information.
[AI Pick] Hackers armed with AI... "Cyberattacks surge 89%" | 연합뉴스

2026-03-16
연합뉴스
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems by malicious actors to conduct cyberattacks that have already resulted in data breaches and security compromises, which constitute harm to property and communities. The AI systems are actively weaponized and have directly contributed to realized harms, meeting the criteria for an AI Incident rather than a hazard or complementary information. The article details actual harm caused by AI-enabled attacks, not just potential or future risks.
"AI speeds up cyberattacks and reshapes the attack surface"

2026-03-16
bikorea.net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems being used maliciously to conduct cyberattacks that have already caused harm, including data theft, ransomware, and financial losses. The AI systems are both tools for accelerating attacks and targets themselves, with concrete examples of harm occurring. This meets the definition of an AI Incident because the AI system's use has directly led to significant harm to property, organizations, and communities. The detailed description of realized attacks and their consequences confirms this classification over AI Hazard or Complementary Information.
"AI speeds up hacking too... data exfiltration possible in just 4 minutes"

2026-03-16
디지털데일리
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used in cyberattacks that have directly caused harm, including data breaches and large-scale financial theft. The involvement of AI in accelerating and automating these attacks is clear, and the harms (data loss, financial crime, insider attacks) are materialized. This fits the definition of an AI Incident because the AI system's use has directly led to harm to property, communities, and violations of rights. The article does not merely warn of potential harm but reports ongoing and realized harms caused by AI-enabled cybercrime.
Hackers armed with AI... "Cyberattacks surge 89%" - 전파신문

2026-03-16
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems by cybercriminal groups to conduct and accelerate cyberattacks, which have directly led to harms including data breaches and unauthorized access to critical systems. The AI's role is pivotal in enabling these attacks, fulfilling the criteria for an AI Incident under the definitions provided. The harms are realized and ongoing, not merely potential, and the AI involvement is explicit and central to the incident.
[헬로티 HelloT] CrowdStrike: "AI-based cyberattacks surge 89%... intrusion time shortened to 29 minutes"

2026-03-17
hellot.net
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being weaponized by attackers to conduct cyber intrusions that have directly led to data breaches, theft of virtual assets, and large-scale financial crimes. These constitute harms to property and communities. The AI involvement is clear in accelerating and enabling these attacks, fulfilling the criteria for an AI Incident as the harms are realized and directly linked to AI use in cyberattacks.