Zero-Click RCE Vulnerability in Claude Desktop Extensions Exposes 10,000+ Users


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A critical zero-click remote code execution vulnerability in Anthropic's Claude Desktop Extensions allows attackers to compromise the systems of more than 10,000 users via malicious Google Calendar events. The flaw stems from an unsafe AI architecture that grants extensions full system privileges without proper sandboxing. Anthropic has declined to fix the issue despite its severity. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Claude Desktop Extensions) that processes inputs from Google Calendar and autonomously executes commands via extensions with system-level access. The vulnerability allows attackers to send malicious calendar events that the AI will execute without user consent, causing direct harm through remote code execution (malware infection). This meets the definition of an AI Incident because the AI system's use and malfunction have directly led to harm to property (user systems) and pose significant security risks. The developer's refusal to fix the issue does not negate the realized harm potential. Therefore, this is classified as an AI Incident.[AI generated]
AI principles
Robustness & digital security
Accountability

Industries
Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

AI system task
Interaction support/chatbots


Articles about this incident or hazard


Claude add-on turns Google Calendar into malware courier

2026-02-11
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Desktop Extensions) that processes inputs from Google Calendar and autonomously executes commands via extensions with system-level access. The vulnerability allows attackers to send malicious calendar events that the AI will execute without user consent, causing direct harm through remote code execution (malware infection). This meets the definition of an AI Incident because the AI system's use and malfunction have directly led to harm to property (user systems) and pose significant security risks. The developer's refusal to fix the issue does not negate the realized harm potential. Therefore, this is classified as an AI Incident.

10K Claude Desktop Users Exposed by Zero-Click Vulnerability

2026-02-10
TechRepublic
Why's our monitor labelling this an incident or hazard?
The Claude Desktop Extensions are AI systems that autonomously interpret user prompts and execute commands with high privileges. The described zero-click vulnerability allows attackers to exploit this AI behavior to run arbitrary code remotely, compromising users' systems without their knowledge or consent. This directly leads to harm in terms of unauthorized access, potential data loss, and system control, which fits the definition of an AI Incident involving harm to property and security. The article details the mechanism, impact, and severity of the flaw, confirming realized harm potential rather than mere plausible future harm, thus excluding classification as an AI Hazard or Complementary Information.

New Zero-Click Flaw in Claude Extensions, Anthropic Declines Fix

2026-02-09
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Desktop Extensions using the Claude LLM) whose malfunction or insecure design directly leads to a critical security vulnerability enabling remote code execution, which is a form of harm to property and potentially to users' systems. The AI system's autonomous chaining of tools and interpretation of inputs without proper security boundaries is the root cause. The harm is realized as the vulnerability exists and can be exploited, not just a theoretical risk. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use and design flaws.

Flaw in Anthropic Claude Extensions Can Lead to RCE in Google Calendar: LayerX

2026-02-09
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude Desktop Extensions) whose autonomous decision-making and full system privileges enable a remote code execution vulnerability. This vulnerability has been demonstrated to allow attackers to compromise users' systems, which constitutes harm to property and potentially to user privacy and security. The harm is realized, not just potential, as the exploit has been demonstrated. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's malfunction (unsafe autonomous execution) and realized harm (system compromise).

Critical 0-Click RCE Vulnerability in Claude Desktop Extensions Exposes 10,000+ Users to Remote Attacks

2026-02-09
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Desktop Extensions) whose autonomous decision-making logic and architecture directly lead to a severe security breach (remote code execution) affecting thousands of users. The harm is realized, not hypothetical, as attackers can compromise systems remotely without user consent. This meets the criteria for an AI Incident because the AI system's use and malfunction have directly caused significant harm to property and user security. The event is not merely a potential risk or a governance update but a concrete incident of harm caused by AI system design flaws.

Claude desktop extension can be hijacked to send out malware by a simple Google Calendar event

2026-02-12
TechRadar
Why's our monitor labelling this an incident or hazard?
The Claude Desktop Extension is an AI system that autonomously processes user requests involving external data sources (Google Calendar) and executes commands on the user's device. The described vulnerability allows an attacker to exploit this AI system's behavior to cause remote code execution and malware installation, which is a direct harm to property and user security. Since the harm is realized or highly likely if exploited, and the AI system's malfunction or design flaw is pivotal to the attack, this event qualifies as an AI Incident under the definition of harm to property and system compromise caused directly by the AI system's use and malfunction.

Claude extensions open a security hole in endpoints

2026-02-11
Computing
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Desktop Extensions) that processes inputs from Google Calendar and autonomously executes code on the local system. The vulnerability allows remote code execution, which is a direct harm to property (the user's computer system) and potentially to the user if the system is compromised. The AI system's use and design are directly linked to the harm, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as the vulnerability can be exploited to compromise systems. Therefore, this is classified as an AI Incident.

10,000+ Claude Desktop Users Exposed by Zero-Click Flaw

2026-02-11
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The Claude Desktop Extensions involve an AI system (Claude) that autonomously processes calendar data and executes commands with full system privileges. The zero-click exploit directly leads to remote code execution, compromising user systems and data, which is harm to property and user security. The AI's autonomous decision-making and lack of sandboxing are central to the incident. The harm is realized, not just potential, and the AI system's malfunction and design choices are pivotal. Hence, this is an AI Incident.

LayerX reports vulnerability in Claude Desktop Extensions, Anthropic declines to fix

2026-02-12
SC Media
Why's our monitor labelling this an incident or hazard?
The Claude Desktop Extensions are part of an AI system that processes external data and autonomously decides actions, fitting the definition of an AI system. The identified vulnerability allows remote code execution, which can cause harm to property (user systems) and potentially to users if exploited. This harm is directly linked to the AI system's malfunction or design flaw. Therefore, this event qualifies as an AI Incident due to the realized security vulnerability that can lead to direct harm.

A Google Calendar Invite Could Hijack Your AI Assistant: Inside the Alarming New Attack on Claude Desktop

2026-02-12
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude Desktop) that autonomously processes external data (Google Calendar events) and executes commands. The vulnerability allows attackers to inject malicious prompts that cause the AI to perform harmful actions, including data theft and malware distribution, which are direct harms to individuals and their property (data and devices). The attack has been demonstrated practically, not just theoretically, confirming realized harm potential. Therefore, this is an AI Incident as the AI system's use and malfunction directly lead to significant harm. The article does not merely discuss potential future risks or responses but reports on an actual exploit and its consequences.

How was Claude Desktop hijacked?

2026-02-12
AllToc
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude Desktop extension) and details a prompt-injection vulnerability that can be triggered remotely without user interaction (zero-click). The vulnerability allows malicious input to be interpreted as commands by the AI assistant, which can lead to unauthorized actions. While no confirmed incidents of harm are reported, the credible risk of exploitation and resulting harm (e.g., unauthorized actions, potential breaches) meets the criteria for an AI Hazard. The article calls for mitigation and treats the issue as a high-risk concern, reinforcing the plausible future harm classification.