Critical Vulnerabilities in Anthropic's Claude Code Expose Developers to Remote Code Execution and API Key Theft

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers discovered critical vulnerabilities in Anthropic's AI-powered Claude Code, allowing attackers to execute remote code and steal API keys via malicious repository configurations. Exploitation could compromise developer machines and enterprise resources. Anthropic has since patched the flaws, but the incident highlights new AI-driven supply chain security risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Claude Code) whose design and use directly led to realized harms: remote code execution on users' machines and theft of API keys. These harms affect property and data security, fitting harm categories (d) and (e). The vulnerabilities arise from the AI system's use and design, not just potential future harm, and actual exploitation was demonstrated by researchers. Anthropic's fixes and CVEs confirm the severity and reality of the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Robustness & digital securityAccountability

Industries
Digital security

Affected stakeholders
WorkersBusiness

Harm types
Economic/PropertyReputational

Severity
AI incident

Business function:
Research and development

AI system task:
Content generation


Articles about this incident or hazard

Thumbnail Image

Claude's collaboration tools allowed remote code execution

2026-02-26
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose design and use directly led to realized harms: remote code execution on users' machines and theft of API keys. These harms affect property and data security, fitting harm categories (d) and (e). The vulnerabilities arise from the AI system's use and design, not just potential future harm, and actual exploitation was demonstrated by researchers. Anthropic's fixes and CVEs confirm the severity and reality of the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Flaws in Claude Code Put Developers' Machines at Risk

2026-02-25
Dark Reading
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude Code) used by developers for coding tasks. The vulnerabilities stem from the AI system's malfunction or design flaws that allow malicious commands to execute without user consent, leading to direct harm such as machine takeover and credential theft. These harms fall under violations of rights and harm to property. The incident has already occurred and been exploited in demonstrations, confirming realized harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Caught in the Hook: RCE and API Token Exfiltration Through Claude Code Project Files | CVE-2025-59536 | - Check Point Research

2026-02-25
Check Point Research
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, Claude Code, which is an AI-powered command-line development tool. The vulnerabilities stem from the use and configuration of this AI system, leading to remote code execution and API key theft, which are direct harms to users' security and privacy. The harm includes unauthorized access to developer machines and shared workspace data, as well as potential financial harm through billing fraud. These harms fall under injury to persons (security breach), harm to property (unauthorized access and data theft), and violation of rights (privacy and security). The event is not merely a potential risk but a realized incident with demonstrated exploitation and harm, thus classifying it as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Check Point Researchers Expose Critical Claude Code Flaws - IT Security News

2026-02-25
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude Code) with identified critical vulnerabilities that can be exploited to cause harm such as unauthorized access, data theft, and potential damage to enterprise resources. The exploitation of these vulnerabilities directly leads to harm (property and enterprise resource harm), fulfilling the criteria for an AI Incident. The article details realized security flaws and their consequences rather than just potential risks or general information, so it is not merely a hazard or complementary information.
Thumbnail Image

Check Point Research Reveals Critical Claude Code Vulnerabilities Enabling Remote Code Execution and API Key Theft - InfotechLead

2026-02-25
InfotechLead
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude Code) used in AI development workflows. The vulnerabilities allow malicious code execution and API key theft, which are direct harms to enterprise security and property. The AI system's design and use led to these security flaws, fulfilling the criteria for an AI Incident. The article also mentions remediation, but the primary focus is on the realized vulnerabilities and their impact, not just the response, so it is not merely Complementary Information.