Google Detects First AI-Developed Zero-Day Exploit in Major Cyberattack Attempt


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google's Threat Intelligence Group identified hackers using generative AI, including large language models, to develop zero-day exploits targeting two-factor authentication systems. The AI-enabled attack, intended for mass exploitation, was proactively detected and stopped, highlighting the growing use of AI in sophisticated global cyber threats.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems used by hackers to plan and attempt exploitation of zero-day vulnerabilities, which could lead to significant harm if successful. The Google Threat Intelligence Group's intervention prevented the attack, indicating the AI's role in a real and imminent threat scenario. This fits the definition of an AI Incident because the AI system's use has directly led to a harmful event (attempted cyberattack) that was only averted through intervention. The harm category includes disruption of critical infrastructure and harm to organizations. Therefore, this is classified as an AI Incident.[AI generated]
AI principles
Safety; Accountability

Industries
Digital security

Affected stakeholders
General public

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


The Entire World Nearly Collapsed: Google Researchers Reveal a Terrifying Fact

2026-05-12
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by hackers to plan and attempt exploitation of zero-day vulnerabilities, which could lead to significant harm if successful. The Google Threat Intelligence Group's intervention prevented the attack, indicating the AI's role in a real and imminent threat scenario. This fits the definition of an AI Incident because the AI system's use has directly led to a harmful event (attempted cyberattack) that was only averted through intervention. The harm category includes disruption of critical infrastructure and harm to organizations. Therefore, this is classified as an AI Incident.

AI-Based Cyberattacks Surge into a Global Industrial-Scale Threat: Google Highlights Rapid Escalation and State Actors

2026-05-12
JawaPos.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used to conduct cyberattacks that have already escalated to an industrial scale, involving exploitation of software vulnerabilities and malware development. These attacks cause harm to critical infrastructure and violate security, fulfilling the criteria for harm under AI Incident definition (b). The AI systems' use is not hypothetical or potential but ongoing and causing real harm, including by state-backed actors. Hence, this is an AI Incident rather than a hazard or complementary information.

Chilling! Google Reveals AI Has Become a Hacker Weapon for Breaching Hidden Systems

2026-05-12
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models and other AI technologies) being used to develop zero-day exploits, which are highly dangerous cybersecurity vulnerabilities. The exploit targeted bypassing two-factor authentication, a critical security control, which if successful would have led to harm such as unauthorized access, data breaches, and disruption. Although the attack was stopped early, the AI-enabled development of such exploits constitutes direct involvement of AI in causing or enabling harm. This fits the definition of an AI Incident because the AI system's use has directly led to a significant cybersecurity threat with potential harm to property and security. The event is not merely a potential hazard or complementary information but a concrete incident involving AI-enabled malicious activity.

AI Is Now Being Misused for Large-Scale Cyberattacks

2026-05-12
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by hackers to find dangerous zero-day vulnerabilities and plan mass cyberattacks, which could disrupt critical infrastructure and compromise security. While Google prevented the attack before it was carried out, the AI's role in enabling these planned attacks is clear. Since no actual harm occurred yet but the risk is credible and significant, this event fits the definition of an AI Hazard rather than an AI Incident. The report also discusses ongoing research and defensive measures, which supports the assessment of a plausible future harm scenario.

Google Reveals a New Threat: Hackers Use AI to Find Security Gaps

2026-05-12
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models, including large language models, were used by hackers to develop zero-day exploits that bypass security measures, leading to potential unauthorized access and data theft. This constitutes harm to property and user data, fulfilling the criteria for an AI Incident. The attack was detected and mitigated before mass exploitation, but the AI's role in enabling the attack is clear and direct. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Beware! Hackers Are Starting to Use AI to Breach Security Systems

2026-05-12
Harianjogja.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by threat actors to develop zero-day exploits that can bypass security measures, which constitutes a direct use of AI leading to potential harm (security breaches, unauthorized access). The attack was detected and stopped, but the AI-enabled exploit development is a realized event, not merely a potential risk. This fits the definition of an AI Incident because the AI system's use directly produced a significant cybersecurity threat with clear harm potential. The event is not merely a warning about future risk (AI Hazard), nor a general update or response (Complementary Information).