AI-Assisted Attack Breaches AWS Cloud in Under 10 Minutes

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

On November 28, 2025, attackers used large language models (LLMs) to automate and accelerate a cyberattack on an Amazon Web Services (AWS) environment. Leveraging AI for reconnaissance, code generation, and privilege escalation, they achieved full administrative access in under 10 minutes, compromising cloud infrastructure and security.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (large language models) to assist in a cyberattack that directly led to harm, including unauthorized access to sensitive data and administrative control over cloud resources. This constitutes a violation of security and privacy rights, harm to property (data and cloud infrastructure), and disruption of cloud environment management. The AI system's role was central in automating and accelerating the attack, making this an AI Incident under the framework definitions.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
IT infrastructure and hosting
Digital security

Affected stakeholders
Business

Harm types
Economic/Property
Reputational

Severity
AI incident

Business function:
ICT management and information security

AI system task:
Content generation
Reasoning with knowledge structures/planning


Articles about this incident or hazard

AWS intruder pulled off AI-assisted cloud break-in in 8 mins

2026-02-04
TheRegister.com
8-Minute Access: AI Accelerates Breach of AWS Environment

2026-02-03
Dark Reading
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (LLMs) used by attackers to conduct and accelerate a cyberattack, which directly led to harm including unauthorized access, data exfiltration, and resource abuse in a critical cloud infrastructure environment. The AI system's role was central to the attack's success and speed, fulfilling the criteria for an AI Incident as the AI system's use directly caused harm to property and cloud infrastructure. The event is not merely a potential risk or a complementary update but a realized harmful incident involving AI.
AI-Assisted Cloud Intrusion Breaches AWS in 8 Minutes

2026-02-03
TechNadu
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to automate and accelerate a cloud intrusion attack, which directly caused unauthorized access and compromise of AWS cloud infrastructure. This constitutes harm to property and disruption of critical infrastructure management, fulfilling the criteria for an AI Incident. The detailed description of the attack's execution and its consequences confirms realized harm rather than potential harm, ruling out AI Hazard or Complementary Information classifications.
AI-assisted cloud intrusion achieves admin access in 8 minutes | Sysdig

2026-02-03
Sysdig
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (LLMs) used by the threat actor to automate and accelerate the attack, including generating malicious code and making decisions in real time. The attack caused direct harm by compromising cloud infrastructure, escalating privileges to administrative levels, and enabling unauthorized access and potential data theft or misuse. The AI system's role in the attack's execution, combined with the realized harm to property and security, meets the criteria for an AI Incident.
Hackers Using AI to Get AWS Admin Access Within 10 Minutes

2026-02-04
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (LLMs) used by attackers to conduct and accelerate a sophisticated cyberattack on AWS cloud infrastructure. The AI was instrumental in automating reconnaissance, code generation, and attack execution, which directly caused harm by compromising administrative accounts, creating backdoors, and enabling unauthorized resource usage. The harm includes disruption and compromise of critical infrastructure (cloud services), violation of security and privacy, and financial damage. The AI system's role is pivotal and directly linked to the incident's occurrence, meeting the criteria for an AI Incident.
AWS Cloud Breach Achieved Admin Access In Record Time With Help From AI

2026-02-05
HotHardware
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI (LLMs) was used to facilitate and speed up a cyberattack that led to unauthorized administrative access to AWS cloud resources. This unauthorized access and lateral movement across accounts represent a violation of security and harm to property and operational integrity. The AI system's role was pivotal in the attack's success, making this an AI Incident under the framework's definition of harm (d) to property and communities. The harm is realized, not merely potential, as the breach occurred and was documented.
Attackers Used AI to Breach an AWS Environment in 8 Minutes

2026-02-06
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and LLMs by threat actors to accelerate and automate the cyberattack, including credential theft, privilege escalation, and lateral movement within the AWS environment. The AI system's involvement directly led to harm, including unauthorized access, data exfiltration, and potential financial damage. The incident is not merely a potential risk but a realized attack facilitated by AI, meeting the criteria for an AI Incident under the framework.