Claude Code Source Leak Exploited to Spread Credential-Stealing Malware

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A leak of the source code for Anthropic's Claude Code AI tool enabled cybercriminals to distribute malware disguised as the leaked code. Malicious repositories and archives, widely shared online, installed the Vidar credential stealer and the GhostSocks proxy malware on developers' systems, leading to data theft and network compromise. The incident primarily targeted developers and organizations.[AI generated]
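The campaign depended on developers installing unofficial archives impersonating the leaked codebase. As a general mitigation (not described in the source articles), a downloaded archive can be checked against a digest published through the vendor's official channel before installation. A minimal sketch, assuming Python is available; the filename and expected digest below are hypothetical placeholders:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_archive(path: str, expected_hex: str) -> bool:
    """Return True only if the file's digest matches the published hash."""
    return sha256_of(path) == expected_hex.lower()

if __name__ == "__main__":
    # "release.tar.gz" and the expected digest are placeholders; a real
    # expected value must come from the vendor's official distribution
    # channel, never from the same untrusted repository as the archive.
    expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    if not verify_archive("release.tar.gz", expected):
        raise SystemExit("Checksum mismatch: do not install this archive.")
```

A checksum check only helps when the reference hash comes from a trusted source; it would not have flagged these archives to anyone treating the fake repositories themselves as authoritative.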

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Claude Code) whose source code was leaked due to a packaging error. Hackers weaponized this leak to spread malware via fake repositories impersonating the AI codebase. The malware steals credentials and proxies network traffic, causing harm to developers and organizations. This constitutes an AI Incident because the AI system's development and its leaked code directly facilitated the malicious campaign leading to realized harm (credential theft and network compromise).[AI generated]
AI principles
Robustness & digital security
Accountability

Industries
Digital security
IT infrastructure and hosting

Affected stakeholders
Workers
Business

Harm types
Economic/Property
Human or fundamental rights

Severity
AI incident

Business function:
Research and development

AI system task:
Content generation


Articles about this incident or hazard

Hackers Spread Vidar and GhostSocks Malware Through Claude Code Leak

2026-04-06
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was leaked due to a packaging error. Hackers weaponized this leak to spread malware via fake repositories impersonating the AI codebase. The malware steals credentials and proxies network traffic, causing harm to developers and organizations. This constitutes an AI Incident because the AI system's development and its leaked code directly facilitated the malicious campaign leading to realized harm (credential theft and network compromise).
Be careful what you click - hackers use Claude Code leak to push malware

2026-04-03
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose leaked source code is being used maliciously to distribute malware, causing direct harm to users by stealing sensitive information and compromising devices. The presence of the AI system is explicit, and the harm is realized through the malware infections resulting from the malicious repositories. The article also references prior security vulnerabilities in the AI system, reinforcing the connection between the AI system's development/use and harm. Hence, this is an AI Incident due to direct harm caused by malicious use of the AI system's leaked code.
Anthropic Claude Code Leak Triggers Malware Campaign on GitHub

2026-04-03
Windows Report | Error-free Tech Life
Why's our monitor labelling this an incident or hazard?
The leaked AI system source code (Claude Code) is explicitly mentioned and is central to the incident. The leak indirectly led to harm through the malware campaign that exploits the leak to trick users into downloading malicious files. The malware causes harm to property (computers) and individuals (credential theft), fitting the definition of an AI Incident where the AI system's development/use/malfunction leads indirectly to harm. The event is not merely a potential risk but describes active harm occurring via the malware campaign. Therefore, this qualifies as an AI Incident.
Hackers Turned Anthropic's Claude Code Leak into a Malware Lure

2026-04-07
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) whose leaked source code is being used maliciously to distribute malware that causes direct harm to users by stealing sensitive data and compromising their devices. The harm is realized, not just potential, and the AI system's development and accidental leak are pivotal in enabling this harm. Hence, this is an AI Incident rather than a hazard or complementary information.
From Accidental Leak to Attack Vector: How Claude Code's Source Exposure Became a Malware Distribution Pipeline

2026-04-04
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose accidental source code leak was exploited by attackers to distribute malware, causing direct harm to users by stealing credentials and compromising security. The involvement of the AI system's leaked code in enabling this attack and the realized harm to individuals and the community meet the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a concrete incident with direct harm linked to the AI system's development and use context.
Claude Code leak leveraged to distribute malware

2026-04-03
SC Media
Why's our monitor labelling this an incident or hazard?
The event describes a malicious campaign leveraging the purported leak of an AI system's source code to distribute malware. The AI system (Claude Code) serves only as the lure and context for the attack rather than causing harm through its own operation or malfunction; the harm arises from credential-stealing malware distributed under the guise of the leaked code. Taken alone, that pattern would suggest an AI Hazard, where the leak plausibly leads to harm via malicious exploitation. However, because harm (credential theft) is already occurring and the leaked code is pivotal in enabling it, the event qualifies as an AI Incident: the AI system's involvement is indirect but pivotal in the chain of events leading to harm.
Hackers Weaponize Claude Code Leak to Spread Vidar and GhostSocks Malware

2026-04-04
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code leak has been exploited by malicious actors to spread malware causing harm to individuals (developers) and organizations through credential theft and device compromise. The AI system's development (source code leak) and subsequent malicious use have directly led to realized harm, fulfilling the criteria for an AI Incident. The harm includes violations of security and privacy, which fall under harm to persons and communities. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Malware in Claude Code Leak: 5 Critical Facts 2026

2026-04-04
TechnoSports Media Group
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) whose leaked source code is being used as a vector to distribute malware. The malware's use has directly led to harm by stealing credentials and enabling unauthorized access to cloud infrastructure, which is a violation of security and potentially intellectual property rights. The involvement of the AI system's leaked code is pivotal to the incident, as it exploits the AI community's interest in the code to spread malware. This meets the criteria for an AI Incident because the development and use of the AI system (its leaked code) has indirectly led to harm (credential theft, infrastructure compromise).
How did the Claude Code leak enable malware?

2026-04-06
AllToc
Why's our monitor labelling this an incident or hazard?
The leaked AI-related code (Claude Code) is an AI system component, and its unauthorized distribution and weaponization by attackers have directly caused harm through malware infections and data theft. This fits the definition of an AI Incident because the AI system's development and use have directly led to harm to persons and enterprises. The event is not merely a potential risk or a complementary update but describes realized harm from malicious use of AI system code.