Claude Code Source Leak Exploited to Spread Credential-Stealing Malware

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A leak of Anthropic's Claude Code AI source code enabled cybercriminals to distribute malware disguised as the leaked code. Malicious repositories and archives, widely shared online, installed credential-stealing software (Vidar) and proxy tools (GhostSocks) on developers' systems, leading to data theft and network compromise. The campaign primarily targeted developers and organizations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Claude Code) whose source code was leaked due to a packaging error. Hackers weaponized this leak to spread malware via fake repositories impersonating the AI codebase. The malware steals credentials and proxies network traffic, causing harm to developers and organizations. This constitutes an AI Incident because the AI system's development and its leaked code directly facilitated the malicious campaign leading to realized harm (credential theft and network compromise).[AI generated]
AI principles
Robustness & digital security; Accountability

Industries
Digital security; IT infrastructure and hosting

Affected stakeholders
Workers; Business

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

Business function:
Research and development

AI system task:
Content generation


Articles about this incident or hazard


Hackers Spread Vidar and GhostSocks Malware Through Claude Code Leak

2026-04-06
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was leaked due to a packaging error. Hackers weaponized this leak to spread malware via fake repositories impersonating the AI codebase. The malware steals credentials and proxies network traffic, causing harm to developers and organizations. This constitutes an AI Incident because the AI system's development and its leaked code directly facilitated the malicious campaign leading to realized harm (credential theft and network compromise).

Be careful what you click - hackers use Claude Code leak to push malware

2026-04-03
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose leaked source code is being used maliciously to distribute malware, causing direct harm to users by stealing sensitive information and compromising devices. The presence of the AI system is explicit, and the harm is realized through the malware infections resulting from the malicious repositories. The article also references prior security vulnerabilities in the AI system, reinforcing the connection between the AI system's development/use and harm. Hence, this is an AI Incident due to direct harm caused by malicious use of the AI system's leaked code.

Anthropic Claude Code Leak Triggers Malware Campaign on GitHub

2026-04-03
Windows Report | Error-free Tech Life
Why's our monitor labelling this an incident or hazard?
The leaked AI system source code (Claude Code) is explicitly mentioned and is central to the incident. The leak indirectly led to harm through the malware campaign that exploits the leak to trick users into downloading malicious files. The malware causes harm to property (computers) and individuals (credential theft), fitting the definition of an AI Incident where the AI system's development/use/malfunction leads indirectly to harm. The event is not merely a potential risk but describes active harm occurring via the malware campaign. Therefore, this qualifies as an AI Incident.

Hackers Turned Anthropic's Claude Code Leak into a Malware Lure

2026-04-07
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) whose leaked source code is being used maliciously to distribute malware that causes direct harm to users by stealing sensitive data and compromising their devices. The harm is realized, not just potential, and the AI system's development and accidental leak are pivotal in enabling this harm. Hence, this is an AI Incident rather than a hazard or complementary information.

From Accidental Leak to Attack Vector: How Claude Code's Source Exposure Became a Malware Distribution Pipeline

2026-04-04
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose accidental source code leak was exploited by attackers to distribute malware, causing direct harm to users by stealing credentials and compromising security. The involvement of the AI system's leaked code in enabling this attack and the realized harm to individuals and the community meet the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a concrete incident with direct harm linked to the AI system's development and use context.

Claude Code leak leveraged to distribute malware

2026-04-03
SC Media
Why's our monitor labelling this an incident or hazard?
The event describes a malicious campaign leveraging the purported leak of an AI system's source code to distribute malware. The AI system (Claude Code) serves as the lure and context for the attack rather than causing harm through its own operation or malfunction; the harm arises from credential-stealing malware distributed under the guise of the leaked code. Because that harm (credential theft) is already occurring, and the leaked code is pivotal in enabling it, this qualifies as an AI Incident: the AI system's involvement is indirect but pivotal in the chain of events leading to harm.

Hackers Weaponize Claude Code Leak to Spread Vidar and GhostSocks Malware

2026-04-04
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code leak has been exploited by malicious actors to spread malware causing harm to individuals (developers) and organizations through credential theft and device compromise. The AI system's development (source code leak) and subsequent malicious use have directly led to realized harm, fulfilling the criteria for an AI Incident. The harm includes violations of security and privacy, which fall under harm to persons and communities. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Malware in Claude Code Leak: 5 Critical Facts 2026

2026-04-04
TechnoSports Media Group
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) whose leaked source code is being used as a vector to distribute malware. The malware's use has directly led to harm by stealing credentials and enabling unauthorized access to cloud infrastructure, which is a violation of security and potentially intellectual property rights. The involvement of the AI system's leaked code is pivotal to the incident, as it exploits the AI community's interest in the code to spread malware. This meets the criteria for an AI Incident because the development and use of the AI system (its leaked code) has indirectly led to harm (credential theft, infrastructure compromise).

How did the Claude Code leak enable malware?

2026-04-06
AllToc
Why's our monitor labelling this an incident or hazard?
The leaked AI-related code (Claude Code) is an AI system component, and its unauthorized distribution and weaponization by attackers have directly caused harm through malware infections and data theft. This fits the definition of an AI Incident because the AI system's development and use have directly led to harm to persons and enterprises. The event is not merely a potential risk or a complementary update but describes realized harm from malicious use of AI system code.

Anthropic Releases Mythos, Its Most Powerful Cybersecurity AI Model, to Rival Claude Opus

2026-04-08
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Mythos) used for cybersecurity vulnerability detection, which is a clear AI application. The AI system is in active use by partners to identify and mitigate security risks, thus contributing positively to preventing harm. There is no report of any harm caused by the AI system or its malfunction. The concerns about potential misuse are noted but remain hypothetical and do not describe an imminent or realized hazard. The main focus is on the release, capabilities, partnerships, and governance discussions around the AI system, which aligns with Complementary Information rather than an Incident or Hazard. Therefore, the event does not meet the criteria for AI Incident or AI Hazard but provides important context and updates on AI deployment and governance.

Programming Is Now the Job Most Vulnerable to Being Replaced by AI

2026-04-08
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article does not describe any actual harm or incident caused by AI systems, nor does it report a specific event where AI malfunctioned or was misused leading to harm. Instead, it provides a research-based assessment of potential future impacts of AI on employment, which is a plausible risk but not an incident or hazard in itself. Therefore, it fits best as Complementary Information, providing context and understanding about AI's societal implications without reporting a concrete AI Incident or AI Hazard.

Hackers Exploit Claude AI Leak, Spreading Vidar Malware via Fake GitHub Repositories

2026-04-07
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose leaked source code was exploited to spread malware. The harm includes theft of sensitive data and misuse of infected devices, which are direct harms to persons and communities. The AI system's development and subsequent leak indirectly caused these harms. The malicious use of the AI system's leaked code to distribute malware fits the definition of an AI Incident, as the AI system's malfunction or misuse led to injury or harm to people and communities.

Anthropic AI Model Leak Sparks Cybersecurity Concerns

2026-04-08
cf.febriyanto.io
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Mythos) with advanced capabilities that could be exploited by threat actors to accelerate cyberattacks, indicating a plausible risk of harm to critical infrastructure or digital security. The leak of the model's internal documents increases the likelihood of such misuse. Since no actual harm has been reported yet but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident. The discussion of market impact and industry trends supports the seriousness of the hazard but does not indicate realized harm.

Claude Mythos: Anthropic's AI Finds Thousands of Security Vulnerabilities

2026-04-08
gadget.viva.co.id
Why's our monitor labelling this an incident or hazard?
Claude Mythos is an AI system explicitly described as capable of finding and exploiting security vulnerabilities at an unprecedented scale. Although the article does not report any realized harm or incidents caused by this AI, it highlights the credible risk of misuse leading to large-scale cyberattacks, which would disrupt critical infrastructure and harm communities. The proactive mitigation efforts by Anthropic and partners further indicate awareness of this plausible threat. Since no actual harm has yet occurred but the risk is credible and significant, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Apple, Google, and Microsoft Unite Against AI-Based Cyberattacks Through Project Glasswing

2026-04-09
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to detect cybersecurity vulnerabilities that, if exploited, could cause significant harm such as disruption of critical infrastructure (harm category b). No actual harm has occurred; the AI system is central to identifying and mitigating a credible cyber threat, and if it were misused or vulnerabilities remained, an AI Incident could plausibly follow. This fits the definition of an AI Hazard. Because the article reports no realized harm, it is not an AI Incident, and it goes beyond Complementary Information because its focus is the AI system's role in addressing a credible cybersecurity threat rather than updates or responses to past incidents.

Kicked Out by Trump, Maker of Advanced US Weapons Loses in Court

2026-04-09
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The article describes a legal and political dispute over the use of an AI system in military projects, with the US government labeling Anthropic as a supply chain risk and banning its AI from Pentagon contracts. While the AI system is involved and the context is military and security-sensitive, there is no indication that the AI system has caused any harm, malfunction, or incident. The event is about governance, legal rulings, and control over AI technology, which fits the definition of Complementary Information. It does not meet the criteria for an AI Incident (no harm occurred) or AI Hazard (no plausible future harm is described beyond existing concerns).

Tech Giants Tumble as Investors Rush to Dump Shares

2026-04-10
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude Mythos) whose development and potential capabilities have caused market fears and economic impacts. However, no realized harm or incident resulting from the AI system's use or malfunction is described. The concerns are about plausible future impacts on the software industry and cybersecurity, but no direct or indirect harm has occurred yet. Therefore, this event is best classified as an AI Hazard, reflecting the credible risk posed by the AI system's capabilities and potential future impacts on the industry and market.

Anthropic Prepares AI Model to Counter Cyberattacks

2026-04-08
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude Mythos) in cybersecurity to prevent AI-based cyberattacks, which could plausibly lead to harms such as disruption of critical infrastructure or economic damage if successful attacks occur. Since the article discusses the initiative as a preventive measure and no actual harm or incident has occurred, this qualifies as an AI Hazard. It highlights a credible risk of AI-enabled cyberattacks and the use of AI to counteract them, but no realized harm is described.

Elon Musk Backs Ban on Anthropic AI for Use in Warfare

2026-04-08
detikInet
Why's our monitor labelling this an incident or hazard?
The article discusses the potential use of Anthropic's AI system Claude in military contexts and the US Department of Defense's designation of Anthropic as a supply chain risk, leading to a government ban on its military use. Elon Musk's support for this ban and the company's legal challenge highlight the controversy and risk associated with AI in warfare. Although no actual harm has been reported, the plausible future harm from AI-enabled military applications, such as autonomous weapons or surveillance, fits the definition of an AI Hazard. The event does not describe realized harm or incident but focuses on the credible risk and governance responses, making it an AI Hazard rather than an Incident or Complementary Information.

US Officials Bessent and Powell Warn Banks About Risks From Anthropic's New AI

2026-04-10
kontan.co.id
Why's our monitor labelling this an incident or hazard?
The AI system "Mythos" is explicitly mentioned and is described as having the capability to exploit security vulnerabilities, which could plausibly lead to disruption of critical infrastructure (the banking sector's digital systems). No actual harm has yet occurred, but the credible risk of future harm is the focus. Therefore, this event qualifies as an AI Hazard because it involves the use and potential misuse of an AI system that could plausibly lead to significant harm, but no incident has yet materialized.

Anthropic Releases Project Glasswing, Countering AI With AI

2026-04-09
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the deployment of an AI system designed to prevent cybersecurity incidents by detecting vulnerabilities. The AI system's use is intended to reduce harm rather than cause it, and no actual harm or incident is reported. The event is about a collaborative effort and technological advancement to address AI-related security challenges, fitting the definition of Complementary Information. It does not describe an AI Incident or AI Hazard because no harm or plausible future harm from the AI system itself is described; instead, it is a positive application of AI to mitigate risks.

Anthropic Partners With Apple to Strengthen iOS, macOS, and Safari Security

2026-04-09
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Mythos Preview) developed to detect software vulnerabilities, which is a clear AI system involvement. The AI system is being used to prevent potential harm related to cybersecurity breaches, which could lead to significant harm to users, property, and communities if exploited. Since the AI system is actively used to detect and mitigate vulnerabilities, and the article does not report any realized harm but rather the deployment of AI to prevent harm, this event qualifies as Complementary Information. It provides context on societal and technical responses to AI for cybersecurity enhancement rather than reporting an incident or hazard.