Claude Code Source Leak Exploited to Spread Credential-Stealing Malware


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A packaging error leaked the source code of Anthropic's Claude Code, and cybercriminals distributed malware disguised as the leaked code. Malicious repositories and archives, widely shared online, installed the Vidar credential stealer and the GhostSocks proxy tool on developers' systems, resulting in data theft and network compromise. The incident primarily affected developers and organizations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Claude Code) whose source code was leaked due to a packaging error. Hackers weaponized this leak to spread malware via fake repositories impersonating the AI codebase. The malware steals credentials and proxies network traffic, causing harm to developers and organizations. This constitutes an AI Incident because the AI system's development and its leaked code directly facilitated the malicious campaign leading to realized harm (credential theft and network compromise).[AI generated]
AI principles
Robustness & digital security; Accountability

Industries
Digital security; IT infrastructure and hosting

Affected stakeholders
Workers; Business

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

Business function:
Research and development

AI system task:
Content generation


Articles about this incident or hazard


Hackers Spread Vidar and GhostSocks Malware Through Claude Code Leak

2026-04-06
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was leaked due to a packaging error. Hackers weaponized this leak to spread malware via fake repositories impersonating the AI codebase. The malware steals credentials and proxies network traffic, causing harm to developers and organizations. This constitutes an AI Incident because the AI system's development and its leaked code directly facilitated the malicious campaign leading to realized harm (credential theft and network compromise).

Be careful what you click - hackers use Claude Code leak to push malware

2026-04-03
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose leaked source code is being used maliciously to distribute malware, causing direct harm to users by stealing sensitive information and compromising devices. The presence of the AI system is explicit, and the harm is realized through the malware infections resulting from the malicious repositories. The article also references prior security vulnerabilities in the AI system, reinforcing the connection between the AI system's development/use and harm. Hence, this is an AI Incident due to direct harm caused by malicious use of the AI system's leaked code.

Anthropic Claude Code Leak Triggers Malware Campaign on GitHub

2026-04-03
Windows Report | Error-free Tech Life
Why's our monitor labelling this an incident or hazard?
The leaked AI system source code (Claude Code) is explicitly mentioned and is central to the incident. The leak indirectly led to harm through the malware campaign that exploits the leak to trick users into downloading malicious files. The malware causes harm to property (computers) and individuals (credential theft), fitting the definition of an AI Incident where the AI system's development/use/malfunction leads indirectly to harm. The event is not merely a potential risk but describes active harm occurring via the malware campaign. Therefore, this qualifies as an AI Incident.

Hackers Turned Anthropic's Claude Code Leak into a Malware Lure

2026-04-07
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) whose leaked source code is being used maliciously to distribute malware that causes direct harm to users by stealing sensitive data and compromising their devices. The harm is realized, not just potential, and the AI system's development and accidental leak are pivotal in enabling this harm. Hence, this is an AI Incident rather than a hazard or complementary information.

From Accidental Leak to Attack Vector: How Claude Code's Source Exposure Became a Malware Distribution Pipeline

2026-04-04
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose accidental source code leak was exploited by attackers to distribute malware, causing direct harm to users by stealing credentials and compromising security. The involvement of the AI system's leaked code in enabling this attack and the realized harm to individuals and the community meet the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a concrete incident with direct harm linked to the AI system's development and use context.

Claude Code leak leveraged to distribute malware

2026-04-03
SC Media
Why's our monitor labelling this an incident or hazard?
The event describes a malicious campaign leveraging the purported leak of an AI system's source code to distribute malware. The AI system (Claude Code) serves as the lure for the attack rather than causing harm through its own operation or malfunction; the harm arises from malware distributed under the guise of the leaked code. Because credential theft is already occurring, and the leaked code is pivotal in enabling it, this qualifies as an AI Incident: the AI system's involvement is indirect but pivotal in the chain of events leading to harm.

Hackers Weaponize Claude Code Leak to Spread Vidar and GhostSocks Malware

2026-04-04
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code leak has been exploited by malicious actors to spread malware causing harm to individuals (developers) and organizations through credential theft and device compromise. The AI system's development (source code leak) and subsequent malicious use have directly led to realized harm, fulfilling the criteria for an AI Incident. The harm includes violations of security and privacy, which fall under harm to persons and communities. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Malware in Claude Code Leak: 5 Critical Facts 2026

2026-04-04
TechnoSports Media Group
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) whose leaked source code is being used as a vector to distribute malware. The malware's use has directly led to harm by stealing credentials and enabling unauthorized access to cloud infrastructure, which is a violation of security and potentially intellectual property rights. The involvement of the AI system's leaked code is pivotal to the incident, as it exploits the AI community's interest in the code to spread malware. This meets the criteria for an AI Incident because the development and use of the AI system (its leaked code) has indirectly led to harm (credential theft, infrastructure compromise).

How did the Claude Code leak enable malware?

2026-04-06
AllToc
Why's our monitor labelling this an incident or hazard?
The leaked AI-related code (Claude Code) is an AI system component, and its unauthorized distribution and weaponization by attackers have directly caused harm through malware infections and data theft. This fits the definition of an AI Incident because the AI system's development and use have directly led to harm to persons and enterprises. The event is not merely a potential risk or a complementary update but describes realized harm from malicious use of AI system code.

Anthropic Releases Mythos, Its Strongest Cybersecurity AI Model, to Rival Claude Opus

2026-04-08
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Mythos) used for cybersecurity vulnerability detection, which is a clear AI application. The AI system is in active use by partners to identify and mitigate security risks, thus contributing positively to preventing harm. There is no report of any harm caused by the AI system or its malfunction. The concerns about potential misuse are noted but remain hypothetical and do not describe an imminent or realized hazard. The main focus is on the release, capabilities, partnerships, and governance discussions around the AI system, which aligns with Complementary Information rather than an Incident or Hazard. Therefore, the event does not meet the criteria for AI Incident or AI Hazard but provides important context and updates on AI deployment and governance.

Programming Is Now the Job Most at Risk of Being Replaced by AI

2026-04-08
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article does not describe any actual harm or incident caused by AI systems, nor does it report a specific event where AI malfunctioned or was misused leading to harm. Instead, it provides a research-based assessment of potential future impacts of AI on employment, which is a plausible risk but not an incident or hazard in itself. Therefore, it fits best as Complementary Information, providing context and understanding about AI's societal implications without reporting a concrete AI Incident or AI Hazard.

Hackers Exploit Claude AI Leak, Spreading Vidar Malware via Fake GitHub Repositories

2026-04-07
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose leaked source code was exploited to spread malware. The harm includes theft of sensitive data and misuse of infected devices, which are direct harms to persons and communities. The AI system's development and subsequent leak indirectly caused these harms. The malicious use of the AI system's leaked code to distribute malware fits the definition of an AI Incident, as the AI system's malfunction or misuse led to injury or harm to people and communities.

Anthropic AI Model Leak Sparks Cybersecurity Concerns

2026-04-08
cf.febriyanto.io
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Mythos) with advanced capabilities that could be exploited by threat actors to accelerate cyberattacks, indicating a plausible risk of harm to critical infrastructure or digital security. The leak of the model's internal documents increases the likelihood of such misuse. Since no actual harm has been reported yet but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident. The discussion of market impact and industry trends supports the seriousness of the hazard but does not indicate realized harm.

Claude Mythos: Anthropic's AI Finds Thousands of Security Vulnerabilities

2026-04-08
gadget.viva.co.id
Why's our monitor labelling this an incident or hazard?
Claude Mythos is an AI system explicitly described as capable of finding and exploiting security vulnerabilities at an unprecedented scale. Although the article does not report any realized harm or incidents caused by this AI, it highlights the credible risk of misuse leading to large-scale cyberattacks, which would disrupt critical infrastructure and harm communities. The proactive mitigation efforts by Anthropic and partners further indicate awareness of this plausible threat. Since no actual harm has yet occurred but the risk is credible and significant, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Apple, Google, and Microsoft Unite Against AI-Powered Cyberattacks via Project Glasswing

2026-04-09
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to detect cybersecurity vulnerabilities that, if exploited, could disrupt critical infrastructure (harm category b). No actual harm has occurred; the AI system is being deployed to prevent such incidents. The event therefore describes a credible risk scenario in which AI is central to identifying and mitigating potential cyber threats, fitting the definition of an AI Hazard. Because no realized harm is reported, it is not an AI Incident; and because it focuses on the AI system's role in addressing a credible threat rather than merely providing updates or responses to past incidents, it is more than Complementary Information.

Booted by Trump, US Advanced-Weapons Maker Loses in Court

2026-04-09
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The article describes a legal and political dispute over the use of an AI system in military projects, with the US government labeling Anthropic as a supply chain risk and banning its AI from Pentagon contracts. While the AI system is involved and the context is military and security-sensitive, there is no indication that the AI system has caused any harm, malfunction, or incident. The event is about governance, legal rulings, and control over AI technology, which fits the definition of Complementary Information. It does not meet the criteria for an AI Incident (no harm occurred) or AI Hazard (no plausible future harm is described beyond existing concerns).

Tech Giants Tumble as Investors Rush to Dump Shares

2026-04-10
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude Mythos) whose development and potential capabilities have caused market fears and economic impacts. However, no realized harm or incident resulting from the AI system's use or malfunction is described. The concerns are about plausible future impacts on the software industry and cybersecurity, but no direct or indirect harm has occurred yet. Therefore, this event is best classified as an AI Hazard, reflecting the credible risk posed by the AI system's capabilities and potential future impacts on the industry and market.

Anthropic Prepares AI Model to Fend Off Cyberattacks

2026-04-08
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude Mythos) in cybersecurity to prevent AI-based cyberattacks, which could plausibly lead to harms such as disruption of critical infrastructure or economic damage if successful attacks occur. Since the article discusses the initiative as a preventive measure and no actual harm or incident has occurred, this qualifies as an AI Hazard. It highlights a credible risk of AI-enabled cyberattacks and the use of AI to counteract them, but no realized harm is described.

Elon Musk Backs Banning Anthropic's AI from Use in War

2026-04-08
detikInet
Why's our monitor labelling this an incident or hazard?
The article discusses the potential use of Anthropic's AI system Claude in military contexts and the US Department of Defense's designation of Anthropic as a supply chain risk, leading to a government ban on its military use. Elon Musk's support for this ban and the company's legal challenge highlight the controversy and risk associated with AI in warfare. Although no actual harm has been reported, the plausible future harm from AI-enabled military applications, such as autonomous weapons or surveillance, fits the definition of an AI Hazard. The event does not describe realized harm or incident but focuses on the credible risk and governance responses, making it an AI Hazard rather than an Incident or Complementary Information.

US Officials Bessent and Powell Warn Banks About Risks from Anthropic's New AI

2026-04-10
kontan.co.id
Why's our monitor labelling this an incident or hazard?
The AI system "Mythos" is explicitly mentioned and is described as having the capability to exploit security vulnerabilities, which could plausibly lead to disruption of critical infrastructure (the banking sector's digital systems). No actual harm has yet occurred, but the credible risk of future harm is the focus. Therefore, this event qualifies as an AI Hazard because it involves the use and potential misuse of an AI system that could plausibly lead to significant harm, but no incident has yet materialized.

Anthropic Releases Project Glasswing, Fighting AI with AI

2026-04-09
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the deployment of an AI system designed to prevent cybersecurity incidents by detecting vulnerabilities. The AI system's use is intended to reduce harm rather than cause it, and no actual harm or incident is reported. The event is about a collaborative effort and technological advancement to address AI-related security challenges, fitting the definition of Complementary Information. It does not describe an AI Incident or AI Hazard because no harm or plausible future harm from the AI system itself is described; instead, it is a positive application of AI to mitigate risks.

Anthropic Partners with Apple to Strengthen iOS, macOS, and Safari Security

2026-04-09
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Mythos Preview) developed to detect software vulnerabilities, which is a clear AI system involvement. The AI system is being used to prevent potential harm related to cybersecurity breaches, which could lead to significant harm to users, property, and communities if exploited. Since the AI system is actively used to detect and mitigate vulnerabilities, and the article does not report any realized harm but rather the deployment of AI to prevent harm, this event qualifies as Complementary Information. It provides context on societal and technical responses to AI for cybersecurity enhancement rather than reporting an incident or hazard.

Banks Urged to Stay Alert as a New Threat Emerges in America

2026-04-10
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Anthropic's Mythos model) with advanced capabilities to find and exploit cybersecurity weaknesses. The involvement is in the use and potential misuse of this AI system. No direct harm has occurred yet, but the warnings from government officials and the banking sector indicate a credible risk of future cyberattacks that could disrupt critical infrastructure (banks). This fits the definition of an AI Hazard, as the AI system's development and potential use could plausibly lead to an AI Incident involving harm to critical infrastructure. The article focuses on the risk and preventive discussions rather than reporting an actual incident.

Jerome Powell Warns Bank CEOs About Risks from Anthropic's Model

2026-04-10
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Mythos model) and discusses concerns about cybersecurity vulnerabilities that could affect critical financial infrastructure. Although no actual harm or incident is reported, the warnings and proactive briefings indicate a credible risk that the AI system could plausibly lead to disruption or harm if exploited. Therefore, this event fits the definition of an AI Hazard, as it concerns plausible future harm from the AI system's use or misuse, but no realized harm is described.

US Summons Bank Bosses as Anthropic's AI Model Leaves Regulators on Edge

2026-04-10
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Mythos) with advanced capabilities that could be used maliciously to exploit cybersecurity vulnerabilities. The concern is about potential future harm to critical infrastructure (the financial system) through cyberattacks enabled by this AI. No actual incident or harm has occurred yet, but the regulators' urgent response and the described capabilities establish a plausible risk of harm. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

US Treasury Secretary and Central Bank Governor Summon Bank Bosses: What's Going On?

2026-04-11
detik Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Anthropic's Mythos) and discusses concerns about its potential to be exploited by hackers, which could threaten the financial system's security. No actual harm or incident has occurred yet, but the credible risk of cybersecurity threats to critical infrastructure is recognized by top financial and government leaders. This fits the definition of an AI Hazard, as the AI system's use or misuse could plausibly lead to harm, but no direct or indirect harm has materialized so far. The meeting and discussions are preventive and risk-focused, not reporting an incident or realized harm.

Anthropic Accidentally Leaks 500,000 Lines of Code: Claude Code Source Code Exposed

2026-04-01
工商時報
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and its development process (source code management). The leak was accidental but exposes critical internal details that could be exploited or misused, posing plausible future risks such as intellectual property violations and enabling malicious actors to bypass protections or replicate the system. No direct harm (such as injury, rights violations, or operational disruption) has been reported yet, so it does not qualify as an AI Incident. It is not merely complementary information because the leak itself is a significant event with potential for harm. Hence, it fits the definition of an AI Hazard.

AI Giant Anthropic's Blunder: 510,000 Lines of Code Accidentally Leaked

2026-04-02
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, Anthropic's Claude Code, which is an AI programming collaboration tool. The leak was caused by a development and deployment error (packaging mistake) leading to the exposure of proprietary internal source code. While no direct harm (such as injury or rights violations) has been reported yet, the leaked code contains security mechanisms and operational logic that, if exploited, could lead to significant harms including security breaches, misuse of the AI system, and erosion of user trust. The widespread dissemination and analysis of the code by competitors and potentially malicious actors increase the plausibility of future harm. Since the harm is not yet realized but plausibly could occur, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic Leaks Claude Code Source Code Through Internal Operational Error

2026-04-01
iThome Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and its source code being accidentally leaked due to internal operational mistakes. The leak exposes proprietary AI system details, which is related to the AI system's development and use. Although no direct harm such as data breaches involving customer information or operational failures causing injury or rights violations is reported, the exposure of internal source code creates a plausible risk of intellectual property theft or misuse that could lead to future harms. Since no actual harm has been reported, but plausible future harm exists, this fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves an AI system and potential risks.

Anthropic Inadvertently Leaks Source Code of Its Coding App Claude Code; Large Models Not Affected So Far

2026-04-01
caixin.com
Why's our monitor labelling this an incident or hazard?
An AI system (Claude Code, an AI programming application) is explicitly involved. The event stems from the development and deployment process (a packaging error leading to source code leakage). Although no direct harm has been reported, the leakage of source code poses a credible risk of future harm, such as intellectual property violations or malicious exploitation. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as the harm is plausible but not yet realized.

Sparking Outrage: Anthropic Mass-Deletes Repositories While One Developer's Overnight Rework of the Claude Code Source Becomes the Fastest Project Ever to 100,000 Stars

2026-04-02
爱范儿
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically the Claude AI system and its harness software. The leak was caused by a development error (packaging mistake) and led to unauthorized access and distribution of proprietary AI-related code. This constitutes a violation of intellectual property rights, a recognized form of harm under the AI Incident definition. The incident also caused reputational harm and operational disruption through widespread DMCA takedown actions affecting many developers, which further supports classification as an AI Incident. The involvement of AI is explicit, and the harm is realized, not merely potential, thus it is not a hazard or complementary information. The event is not unrelated as it centers on AI system code and its consequences.

No Joke: Claude Code Source Leaked, 500,000 Lines of Code Laid Bare

2026-03-31
爱范儿
Why's our monitor labelling this an incident or hazard?
An AI system (Claude Code) is explicitly involved, and its source code has been leaked publicly due to a development and operational security failure. Although no immediate harm has been reported, the leak exposes sensitive AI system details that could be exploited maliciously, posing a credible risk of future harm. The event does not describe realized harm but a plausible future risk, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its security breach are central to the event.

Morning Report | Claude Code's 500,000 Lines of Code "Open-Sourced" / OpenAI Closes Its Largest Funding Round, Valuation Nears One Trillion / China Merchants Bank Chairman: Employees Rarely Leave on Time; Corporate Culture Is the Biggest Moat

2026-04-01
爱范儿
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was accidentally leaked, revealing unreleased AI features. The leak stems from a development error. Although the exposure could plausibly lead to misuse or harm in the future (e.g., exploitation of unreleased AI capabilities), the article does not report any actual harm occurring yet. Hence, it does not meet the criteria for an AI Incident but fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to harm due to the leak. It is not Complementary Information because it is not an update or response to a prior incident but a new event. It is not Unrelated because it clearly involves an AI system and potential risks.

Over 510,000 Lines of Claude Code Source Code Leaked; Anthropic Responds: Human Error, Not a Security Vulnerability

2026-04-01
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) source code leak, which is a direct result of the AI system's development and deployment process. Although no direct harm to users or data breach occurred, the leak of internal code and architecture details could plausibly lead to intellectual property violations or other harms in the future. Since no realized harm is reported, it does not qualify as an AI Incident. It is not merely complementary information because the leak itself is a significant event with potential for harm. Hence, it fits the definition of an AI Hazard.

510,000 Lines of Claude Code Source Code Leaked, Exposing an Unreleased Virtual Pet and AI Assistant; Developers "Copying the Homework" May Face Legal Risk

2026-04-01
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was accidentally leaked, exposing internal AI system details and unreleased AI features. The leak was caused by human error in the development and deployment process. The leak has already led to widespread unauthorized access and replication attempts, which constitute a violation of intellectual property rights and trade secrets, a recognized form of harm under the AI Incident definition (c). Although no direct physical harm or personal data breach occurred, the leak's impact on rights and potential legal consequences for developers using the code commercially is significant. This meets the criteria for an AI Incident because the AI system's development and deployment directly led to a breach of intellectual property rights and potential legal harms. The event is not merely a potential risk (hazard) or a complementary update; it is a realized incident involving AI system misuse and harm.

Solidot | Anthropic Demands Removal of Tens of Thousands of Claude Code Source Code Copies, Citing Copyright Infringement

2026-04-02
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was leaked and is being widely distributed without authorization. Anthropic's legal actions to remove these copies indicate a breach of intellectual property rights. The unauthorized dissemination and use of the AI source code directly violate legal protections intended to safeguard intellectual property, fitting the definition of an AI Incident under category (c) violations of intellectual property rights. Therefore, this event is classified as an AI Incident.

Anthropic Builds Its Own OpenClaw: Computer Use Officially Arrives in Claude Code

2026-04-01
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (Claude Code with Computer use and Channels) and their advanced autonomous capabilities, which clearly involve AI. However, there is no mention or implication of any injury, rights violation, disruption, or other harm caused by these systems. The content focuses on the technical features, interoperability, and strategic positioning of these AI tools, which enriches understanding of the AI ecosystem. Since no harm has occurred or is directly implied as plausible in the near term, and the main narrative is about development and ecosystem context, the classification as Complementary Information is appropriate.

AI's First "Nuclear Leak" Incident

2026-04-01
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude Code) and its accidental source code leak due to a human error in deployment. The leak exposes core AI agent engineering and security mechanisms, which can be exploited maliciously, posing direct risks to cybersecurity and safety. The article describes realized harm in terms of intellectual property loss, security vulnerabilities, and the potential for malicious actors to misuse the leaked code, which constitutes harm to property, communities, and possibly human rights (security and safety). The event is not merely a potential risk but an actual incident with ongoing consequences, thus classifying it as an AI Incident rather than a hazard or complementary information.

AI coding's trump card turns out to be worth so little - TMTPost

2026-04-01
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose core source code was accidentally leaked publicly due to a packaging misconfiguration during software release. This leak directly led to the exposure of proprietary engineering details and internal AI system mechanisms, constituting a violation of intellectual property rights, which is a recognized harm under the AI Incident definition (c). Although no physical harm or user data breach occurred, the economic and competitive harm to Anthropic and the AI industry is significant and realized. The AI system's development and release process failure is the direct cause of this harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Claude Code "forced open-source" incident continues to escalate as hackers spread credential-stealing malware via phishing

2026-04-03
Sina Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was accidentally leaked due to human error during development. This leak was exploited by hackers to spread malware that steals sensitive information, causing harm to users' property and privacy. The AI system's development and the leak directly contributed to the incident. The malware distribution and resulting data theft constitute realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

510,000 lines of source code laid bare and a fake "scapegoat" takes the stage: Anthropic's accident is far more serious than it looks - TMTPost

2026-04-01
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was unintentionally exposed due to a packaging misconfiguration during deployment, which is part of the AI system's development and use lifecycle. The exposure of proprietary source code constitutes a violation of intellectual property rights and presents potential security risks, fulfilling the criteria for harm under (c) violations of intellectual property rights and (e) other significant harms where the AI system's role is pivotal. Although no direct user data breach or physical harm occurred, the incident's impact on competitive advantage, security posture, and public trust is significant and directly linked to the AI system's development and deployment processes. Hence, it meets the definition of an AI Incident rather than a hazard or complementary information.

510,000 lines of source code for Anthropic's coding assistant leaked, including many unreleased features

2026-04-01
Eastmoney
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude Code) whose source code was leaked due to a packaging mistake. The leak is related to the AI system's development and deployment processes. Although the company states no sensitive customer data was leaked, the exposure of the AI system's proprietary code and unreleased features can plausibly lead to intellectual property rights violations and potential misuse, which are harms under the AI harms framework. Since no actual harm has been reported yet, but the risk is credible and plausible, the event is best classified as an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is the leak event itself, not a response or governance action. It is not Unrelated because the event directly involves an AI system and its potential harms.

Anthropic accidentally leaks 500,000 lines of source code: unreleased features and AI ro...

2026-04-01
Eastmoney
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code and unreleased features were accidentally leaked, which is a direct consequence of the AI system's development and deployment process. The leak has already led to external parties reverse engineering the system and raising security concerns, indicating realized harm related to security and intellectual property. The exposure of internal model data and unreleased features also constitutes a breach of obligations related to intellectual property rights. Given these factors, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's development and use.

US AI giant leaks 510,000 lines of source code! Developers "copying the homework"? Lawyers warn of risks

2026-04-01
Eastmoney
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude Code) whose source code was accidentally leaked, exposing internal AI system details and unreleased features. This leak is due to a human error in the development and deployment process. The widespread availability of the source code enables others to replicate or modify the AI system, which constitutes a violation of intellectual property rights and commercial secrets, a recognized form of harm under the AI Incident definition (c). Although no direct customer data breach occurred, the leak's role in enabling unauthorized use and potential commercial exploitation of proprietary AI technology is a direct harm linked to the AI system's development and use. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

A full 510,000 lines! Source code of star AI coding tool Claude Code accidentally leaked

2026-04-01
MyDrivers
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose complete source code was accidentally leaked, exposing internal architecture and core algorithms. While no direct harm has yet occurred, the leak could plausibly lead to AI incidents such as intellectual property violations, security breaches, or malicious misuse. The incident stems from a development and release process error, not a malicious attack, but the exposure of sensitive AI system details creates a credible risk of future harm. Since no actual harm has been reported yet, this fits the definition of an AI Hazard rather than an AI Incident.

"No employee will take the blame": father of Claude responds to source leak, calling it an honest mistake

2026-04-01
MyDrivers
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) and a leak of its source code, which is a development-related issue. However, there is no indication that this leak has directly or indirectly caused harm as defined by the framework (e.g., injury, rights violations, or operational disruption). The company's response and reflections on process improvements constitute complementary information about managing AI system risks. Therefore, this event is best classified as Complementary Information rather than an AI Incident or Hazard.

Claude Code updates again after the source leak: the hated massive overcharging bug has been fixed

2026-04-02
MyDrivers
Why's our monitor labelling this an incident or hazard?
Claude Code is an AI programming tool, thus an AI system. The bug in cache handling caused users to be overcharged by up to 10 times, which is a direct financial harm to users. The source code leak revealed the bug, prompting a fix. The event involves the AI system's malfunction leading to realized harm (financial loss) to users. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Countering Anthropic's ruthless bans: an anti-ban tool emerges after the Claude source leak

2026-04-01
MyDrivers
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) and its source code leak, which is a significant development. The creation of a tool to evade telemetry and account bans relates to the use and potential misuse of the AI system. However, the article does not report any direct or indirect harm such as injury, rights violations, or disruption caused by the AI system's malfunction or use. The focus is on the leak and the community's technical response, which fits the definition of Complementary Information. There is no clear indication that the event has led to an AI Incident or that it plausibly leads to one (AI Hazard).

Claude source leak: the day it became more open than OpenAI

2026-04-01
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude) and a malfunction in the development process (failure to remove a debugging source map file) that led to the exposure of extensive source code. Although no direct harm has yet occurred (no user data or model weights leaked), the leak creates a credible risk of future harm through exploitation of the exposed code and security details. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident. It is not an AI Incident because no actual harm has been reported yet. It is not Complementary Information because the main focus is the leak event itself, not a response or governance action. It is not Unrelated because it clearly involves an AI system and its development.

Claude Code "open-sourcing" escalates: the lead reflects, a substitute version rockets to 100,000 stars, and Anthropic scrambles to shut it down

2026-04-01
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was accidentally leaked due to a deployment mistake, which is a malfunction in the development/use of the AI system. The leak has directly led to harm in the form of intellectual property rights violations and harm to the company's property and competitive position. The widespread unauthorized copying and redistribution of the code, including on decentralized platforms, confirms realized harm. Although no physical harm or direct human rights violations are described, the breach of intellectual property rights and harm to property clearly meet the criteria for an AI Incident under categories (c) and (d).

Unbelievable: ClaudeCode source code leaked and picked apart completely within hours

2026-04-01
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ClaudeCode) and its source code leak, which is a development-related issue. While the leak is significant and exposes internal details and future plans, there is no indication that this has directly or indirectly caused harm to people, infrastructure, rights, property, or communities. Nor does it describe a plausible future harm scenario stemming from this leak. The main focus is on the leak as a noteworthy development and the community's response, which fits the definition of Complementary Information rather than an Incident or Hazard.

87 unreleased features hidden in Claude Code's leaked code reveal its ambitions

2026-04-01
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) with advanced autonomous and multi-agent features revealed through a source code leak. The leak exposes capabilities that could plausibly lead to harms such as privacy violations, unauthorized autonomous actions, or security risks if activated or misused. However, the article does not report any actual harm or incident resulting from these features being used or malfunctioning. The presence of completed but locked features indicates a credible risk of future incidents once enabled or exploited. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly concerns an AI system and its potential risks.

Anthropic tries to contain leaked source code but "mistakenly deletes" thousands of GitHub repositories

2026-04-01
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
Anthropic's Claude Code is an AI system (a large language model tool). The accidental public release of its source code is a development-related event involving the AI system. The leak led to unauthorized dissemination of proprietary AI code, which is a violation of intellectual property rights (harm category c). The subsequent takedown requests mistakenly affected thousands of repositories, causing harm to property and communities (harm categories d and c). These harms have materialized, not just potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Compounding the error: Anthropic indiscriminately takes down 8,100 repositories

2026-04-02
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was accidentally leaked, leading to widespread unauthorized dissemination. The use of DMCA takedown requests caused collateral damage by removing thousands of repositories, disrupting developer activities and potentially violating intellectual property rights. The harm is realized and directly linked to the AI system's development and release process. The incident also includes responses and remediation efforts by Anthropic. Given the direct harm caused by the AI system's mishandling and the resulting disruption and rights violations, this event fits the definition of an AI Incident rather than a hazard or complementary information.

Anthropic has built its own OpenClaw: Computer use officially enters Claude Code

2026-04-01
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically Anthropic's Claude Code with Computer use and Channels features, which are AI-driven tools for autonomous computer interaction and message handling. However, there is no mention or implication of any realized harm, injury, rights violations, or disruptions caused by these AI systems. The content focuses on the capabilities, design, and ecosystem implications, without describing any incident or plausible immediate harm. Therefore, the event is best classified as Complementary Information, providing context and updates on AI system development and ecosystem dynamics rather than reporting an AI Incident or AI Hazard.

Claude Code source leak: its next ace revealed ahead of time

2026-03-31
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude Code) and its source code leak due to a development/release process error. The leak does not report direct harm such as injury, rights violations, or operational disruption, but it plausibly threatens Anthropic's intellectual property rights, competitive position, and organizational trust, which are significant harms under the framework. Since the harm is potential and not yet realized, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete event with plausible future harm, nor is it unrelated as it directly concerns an AI system and its development. Thus, the classification is AI Hazard.

Claude Code goes "open source"! 510,000 lines of code, and the whole internet celebrates

2026-03-31
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
An AI system (Claude Code) is explicitly involved, as the leaked source code belongs to a sophisticated AI system. The event stems from the development and deployment phase, specifically a security/configuration error leading to unintended public exposure. No direct or indirect harm to persons, infrastructure, rights, or communities is reported as having occurred yet. However, the leak plausibly could lead to harms such as intellectual property violations, unauthorized use, or malicious exploitation of the AI system's capabilities. Hence, it fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future. It is not Complementary Information because the main focus is the leak event itself, not a response or update to a prior incident. It is not Unrelated because the event is clearly AI-related and involves an AI system.

Breaking! Claude Code "open-sourced", spreading wildly across the internet

2026-03-31
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code client tool) and its development, specifically the accidental leak of source code. While this leak exposes proprietary code and raises copyright concerns, there is no indication that this has directly or indirectly caused harm to health, infrastructure, rights, property, communities, or the environment. The leak itself is a security and intellectual property issue, not an AI Incident causing harm or an AI Hazard posing plausible future harm. The article also discusses the legal and community responses, making it primarily an update and contextual information about the AI ecosystem. Hence, it fits the definition of Complementary Information rather than an Incident or Hazard.

Claude Code leak exposes Tamagotchi-like pet features and persistent agents

2026-04-02
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was leaked, revealing unreleased AI features and internal mechanisms. Although no direct harm (such as injury, rights violations, or operational disruption) has occurred, the leak plausibly could lead to AI incidents if malicious actors exploit the exposed code to bypass protections or misuse the AI system. The leak itself is a development-related event involving the AI system's code and instructions. Since harm is potential and not realized, and the event centers on the risk posed by the leak, it fits the definition of an AI Hazard rather than an Incident or Complementary Information. The event is not unrelated because it clearly involves an AI system and plausible future harm.

Anthropic's crackdown on leaked code mistakenly hits legitimate GitHub repositories

2026-04-03
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude Code client) whose source code was leaked and spread without authorization, constituting a violation of intellectual property rights. The use of AI tools to create derivative versions further complicates the issue but does not negate the realized harm. The overbroad DMCA takedown caused additional harm by removing legitimate code branches, disrupting normal operations. These harms are directly linked to the AI system's development and use, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a concrete incident involving harm caused by AI system misuse and legal enforcement actions.

Anthropic accidentally leaks source code of its Claude coding tool

2026-04-01
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and its development process (source code release). The leak is accidental and exposes internal code, which could indirectly harm the company competitively but does not directly or indirectly cause harm to people, infrastructure, rights, property, or communities as defined. There is no indication of realized harm or plausible imminent harm to others from this leak. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual information about the AI system's development and company response, fitting the definition of Complementary Information.

Anthropic's back-to-back leaks of sensitive information draw industry attention

2026-04-01
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, specifically Anthropic's AI product Claude Code and its software architecture. The leaks were due to human error in the development and release process, which is part of AI system development and use. Although no direct harm such as injury or rights violations is reported, the exposure of sensitive internal AI system details poses a plausible risk of future harm, including intellectual property violations and potential misuse. Since no actual harm has occurred yet but the risk is credible and material, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic's accidental leak of Claude Code source code draws attention

2026-04-01
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude Code) and its source code leak due to a packaging error. Although no direct harm to users or others has been reported, the exposure of core AI system components and architecture creates a credible risk of intellectual property violations and targeted attacks, which could lead to significant harm in the future. Since the harm is not realized but plausibly could occur, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete accidental leak with potential consequences, nor is it unrelated as it directly concerns an AI system's sensitive code.

Claude Code users repeatedly hitting usage limits sparks developer discontent

2026-04-01
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose malfunction (token consumption bug) and usage limits have directly caused harm to users by limiting their ability to work effectively. The harm is realized and ongoing, as users report frequent quota exhaustion and disrupted workflows. The AI system's development and use are central to the issue, and the company acknowledges the problem and is investigating. This fits the definition of an AI Incident because the AI system's malfunction and use have directly led to harm to groups of people (developers) relying on it for their work.

Claude source leak reveals Anthropic's plans for future features

2026-04-02
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude) and its development, revealing hidden AI features and capabilities. However, there is no indication that any harm has yet occurred due to this leak or the disclosed features. The controversial 'undercover mode' raises concerns about potential misuse and ethical issues, suggesting plausible future risks. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to incidents involving violations of transparency, trust, or intellectual property rights if these features are exploited or misused. It is not an AI Incident because no direct or indirect harm has materialized yet. It is not Complementary Information because it is not an update or response to a prior incident but a new disclosure. It is not Unrelated because it clearly involves AI systems and their potential impacts.

Anthropic's leak of Claude Code source code raises AI security concerns

2026-04-02
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose internal source code was leaked due to a packaging error during software update deployment. The leak is directly linked to the AI system's development and use. The exposure of proprietary AI system code constitutes a breach of intellectual property rights and commercial confidentiality, which falls under harm category (c) violations of intellectual property rights and potentially harm to the AI ecosystem. The leak has already occurred and caused significant reputational and competitive harm, which is a clear and articulated harm where the AI system's role is pivotal. Although no physical harm or direct violation of personal data rights occurred, the incident still meets the criteria for an AI Incident because the AI system's development and use led to a significant harm event. It is not merely a potential risk or a complementary update but a realized incident involving AI system security failure and data leakage.

Anthropic accidentally leaks the complete source code of Claude Code

2026-04-01
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and its complete source code being accidentally leaked due to a packaging error, which is a direct result of the AI system's development process. The leak constitutes a violation of intellectual property rights, which falls under harm category (c) in the AI Incident definition. Although no personal data or customer credentials were exposed, the unauthorized public dissemination of proprietary AI source code is a significant harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm has already occurred and is directly linked to the AI system's development and deployment.

Claude Code has a safety-rule bypass vulnerability when given too many commands

2026-04-02
net.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude Code) and details a security flaw in its use that allows attackers to bypass safety restrictions, leading to prompt injection attacks. These attacks can cause the AI to execute dangerous commands like unauthorized network requests (curl), which can harm system security and user data. The vulnerability is a malfunction in the AI system's enforcement of safety rules, directly linked to potential harm. The presence of an internal fix but ongoing exposure in public versions indicates the harm is current and plausible. Therefore, this is an AI Incident due to the realized security risks and direct link to the AI system's malfunction.

Anthropic mistakenly deletes 8,100 GitHub repositories; takedown error sparks controversy

2026-04-02
China.com Technology
Why's our monitor labelling this an incident or hazard?
The event directly involves an AI system (Claude Code, a large language model) whose source code leak triggered a chain of events leading to the misuse of copyright takedown requests. The takedown caused actual harm by removing access to thousands of repositories, disrupting developers and communities, which fits harm to property and communities. The incident is linked to the AI system's development and use, and the company's response to its source code leak. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

"这是人为错误",Claude Code 51万行源码意外泄露,负责人刚刚回复

2026-04-01
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and its source code leak due to a deployment error, which is a development-related issue. However, there is no indication that this leak has directly or indirectly caused harm as defined by the framework (injury, rights violations, disruption, or other significant harms). The leak is described as an opportunity for learning and analysis by the community, and the company is taking steps to improve processes. Since no harm or plausible future harm is reported, and the main focus is on the leak event and its implications for AI development and research, the classification as Complementary Information is appropriate.

Anthropic suffers another massive source code leak, with over 510,000 lines of code laid bare

2026-04-01
Qianlong.com
Why's our monitor labelling this an incident or hazard?
The event involves the accidental public exposure of a large amount of proprietary AI source code from Anthropic's Claude Code system, which is an AI system by definition. The leak is due to a packaging error during deployment, thus linked to the AI system's development and use. The harm includes violation of intellectual property rights and damage to the company's competitive advantage and brand trust, which are recognized harms under the framework. Although no direct physical harm or customer data breach occurred, the significant exposure of core AI code and the resulting reputational and economic damage meet the criteria for an AI Incident. The event is not merely a potential risk but a realized harm, so it is not an AI Hazard or Complementary Information.

Is AI playing April Fools too? Claude Code source leak means another big upgrade is due for domestic large models

2026-04-01
Sina Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was accidentally leaked, directly causing harm to Anthropic's intellectual property rights and potentially impacting the AI industry competition. The leak is a result of a development and deployment error (packaging misconfiguration). The harm is realized (not just potential), as the proprietary code is now publicly accessible and being widely copied, which constitutes a violation of intellectual property rights under the framework. Hence, this is an AI Incident rather than a hazard or complementary information.

Claude Code"被动开源" AI行业将迎来春天? - CNMO科技

2026-04-01
ai.cnmo.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and its development process (source code). The accidental exposure of the source code is a malfunction in the development or deployment process. Although no direct harm or violation has been reported, the exposure plausibly increases risks such as intellectual property theft, security vulnerabilities, or misuse of the AI system in the future. The article emphasizes the systemic engineering and security challenges revealed by this event, indicating a credible potential for future harm. Since no actual harm has occurred yet, it does not qualify as an AI Incident. It is not merely complementary information because the exposure itself is a significant event with potential risk. Hence, the classification as an AI Hazard is appropriate.

Claude推出"电脑操作"功能:可直接控制MacOS应用 - CNMO科技

2026-03-31
notebook.cnmo.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) with new autonomous control capabilities over computer applications, which is clearly AI-related. However, there is no indication that this has led to any realized harm or incident. The article emphasizes safety features and user control to prevent misuse or harm. Since no harm has occurred and the article does not highlight any credible risk of imminent harm, it does not meet the criteria for an AI Incident or AI Hazard. It is not merely general product news because it discusses safety and governance aspects, but these are presented as part of the feature launch and research preview. Therefore, the article is best classified as Complementary Information, providing context on AI system development and governance measures without reporting an incident or hazard.

According to Zhitong Finance APP, AI company Anthropic (ANTHRO) is scrambling to contain the situation after accidentally leaking the underlying instructions of its AI agent application Claude Code. As of Wednesday morning, Anthropic representatives had used copyright takedown requests to remove more than 8,000 copies and adaptations of Claude Code's original instructions (i.e., its source code) shared by developers on the coding platform GitHub. The company later narrowed the takedown requests to cover only 96 copies and adaptations, acknowledging that the initial requests had affected more GitHub accounts than intended.

A company spokesperson said the leak of "some internal source code" did not expose any customer information or data. Nor did the leak involve the valuable internal mathematical structures (sometimes called weights) of the company's expensive and powerful AI models. "This was a release packaging problem caused by human error, not a security breach. We are rolling out a series of measures to prevent it from happening again," the spokesperson said. The leak did, however, expose commercially sensitive information, including the proprietary techniques, tools, and instructions Anthropic uses to make its AI models operate as coding agents. These techniques and tools are referred to as a "harness" because they allow users to control and direct the models. As a result, Anthropic's competitors, along with a number of startups and developers, now have a shortcut to replicating Claude Code's functionality without reverse engineering, which had previously been common practice.

The Claude Code information was reportedly disclosed accidentally on Tuesday when the company updated the AI tool. Like most proprietary software, Claude's source code is normally difficult to reverse engineer. This time, however, the company published on GitHub a file type that linked back to source code that could be downloaded and parsed externally. An X user spotted the leak and word spread quickly; within hours, copies began to multiply. Programmers combing through the source code were impressed by some of the tricks Anthropic uses to make its Claude AI models run as Claude Code. One feature asks the model to periodically review its tasks and consolidate memories, a process the company calls "dreaming". Another appears, in certain situations, to instruct Claude Code to go into "undercover" mode and not reveal that it is an AI when publishing code to platforms such as GitHub. After Anthropic asked GitHub to remove copies of its proprietary code, another programmer used other AI tools to rewrite Claude Code's functionality in other programming languages, writing on GitHub that the goal was to keep the information available while avoiding takedown risk. The new version has become popular on the coding platform.

Dan Guido, CEO of cybersecurity firm Trail of Bits, said the leak is useful because it reveals hidden features and upcoming models, but it is unlikely to be exploited by hackers. Guido added that hackers were already able to reverse engineer the code before the leak, and that Claude Code is rewritten frequently, meaning the leaked code will soon be out of date. Last week it was reported that Anthropic's newest and most powerful AI model, Claude Mythos, had been exposed through a data leak. The same night news of the new model became public, Anthropic, which is backed by Amazon and Google, won a court order blocking the Trump administration's ban on government use of its AI models. Anthropic is asking a US appeals court to stay the Pentagon's order designating it a supply-chain risk pending judicial review; the company has sued the US Department of Defense after the department terminated its contracts with the AI startup and flagged it as a supply-chain risk. In court filings, Anthropic said the US government's blacklisting could cut its 2026 revenue by billions of dollars. In addition, Anthropic PBC is considering an initial public offering (IPO) as early as October, racing rival OpenAI to an IPO.

2026-04-02
Stockstar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and its proprietary source code leak, which is a direct result of the AI system's development and deployment process. Although the leak exposes sensitive internal instructions and tools that enable the AI's operation, no personal data breach or direct harm to people, infrastructure, or rights is reported. The leak could plausibly lead to competitive harm and intellectual property violations, which are significant but potential harms. Since no actual harm has occurred yet, and the company is actively responding to control the situation, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is the leak event itself, not a response or broader ecosystem update. It is not Unrelated because it clearly involves an AI system and its development.

Anthropic's flagship Claude Code source code leaks; Chinese domestic AI coding tools see an opening

2026-04-03
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and a malfunction (packaging error) that led to the source code leak. While this leak has significant consequences for trust, market dynamics, and technical democratization, it does not describe direct or indirect harm to persons, infrastructure, rights, property, or communities as defined for AI Incidents. Nor does it describe a plausible future harm scenario that would qualify as an AI Hazard. Instead, it details the unfolding consequences, industry reactions, and strategic implications of the leak, fitting the definition of Complementary Information that enhances understanding of AI ecosystem developments and responses.

510,000 lines of Claude Code source leaked; virtual-pet and AI-assistant features exposed early; developers who "copy the homework" may face legal risk

2026-04-01
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the accidental exposure of an AI system's source code, which is a direct product of AI development. The leak has already led to widespread dissemination and potential unauthorized use, which constitutes a violation of intellectual property rights, a recognized harm under the AI Incident definition. The involvement of the AI system is explicit, and the harm is realized through the breach of proprietary code and the ensuing legal risks. Although no physical injury or direct user harm is reported, the violation of intellectual property rights and the potential for commercial misuse meet the criteria for an AI Incident. The event is not merely a potential risk (hazard) or a complementary update but a concrete incident with direct consequences.

AI coding's trump card turns out to be worth this little

2026-04-01
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was unintentionally exposed due to a packaging mistake. While the leak reveals sensitive engineering details, it does not include model weights or user data, and no direct harm such as injury, rights violations, or operational disruption is described. The exposure could enable competitors to replicate or misuse the technology, potentially leading to intellectual property violations or other harms in the future. Since no actual harm has yet materialized but plausible future harm exists, this fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is the leak event itself, not a response or update to a prior incident. It is not Unrelated because the event directly involves an AI system and its development/use.

Source code of Claude, the strongest AI coding tool, leaks: the whole world copies the homework, and one open-source project racks up 70,000 stars

2026-04-01
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude AI programming tool) and its source code leak, which is a development-related event. However, there is no indication that this leak has directly or indirectly caused any harm as defined by the framework (injury, rights violations, infrastructure disruption, or harm to communities). The leak could plausibly lead to competitive or intellectual property issues, but these are not explicitly stated as harms under the framework. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides important context about AI ecosystem developments and responses (reverse engineering, open-source projects) without describing a new harm or credible future harm.

AI's first "nuclear leak" incident

2026-04-01
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the accidental public release of a sophisticated AI system's full source code, which is a direct consequence of the AI system's development and operational management. The leak exposes internal security mechanisms and attack vectors, enabling malicious actors to exploit the system or create harmful derivatives. This creates direct and indirect risks of harm to property, communities, and potentially human safety through cyberattacks or misuse. The article describes realized dissemination and ongoing risks, not just potential future harm. Therefore, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Forks spreading like wildfire! Claude Code's leaked source code has just been open-sourced

2026-03-31
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and its development and deployment process. The leak of the source code is a direct consequence of Anthropic's error in packaging and publishing the npm package with source maps included. This leak exposes proprietary AI system details, which constitutes a violation of intellectual property rights, a form of harm under the AI Incident definition (c). The leak has already occurred and the code is widely available, indicating realized harm rather than just potential harm. Therefore, this qualifies as an AI Incident due to the breach of intellectual property rights caused by the AI system's development and deployment mishandling.

Public outrage! As Anthropic frantically deletes repos, one developer rewrites the Claude Code source overnight and hits 100,000 stars faster than any project in history

2026-04-03
m.163.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system component (the Claude Code harness) whose accidental exposure led to unauthorized possession and redistribution of proprietary software, constituting a breach of intellectual property rights. The harm is realized, as Anthropic's competitive advantage and reputation have been damaged, and legal actions (DMCA takedowns) were initiated. The use of AI (OpenAI Codex) to rewrite the code rapidly also relates to the AI system's ecosystem and complicates enforcement. Thus, the event meets the criteria for an AI Incident involving violation of intellectual property rights (harm category c).

Anthropic's Claude Code source code apparently leaked via npm

2026-03-31
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The leaked source code is directly related to an AI system (Claude Code) developed by Anthropic, which supports large model API engines and multi-agent collaboration. The exposure of this proprietary code constitutes a violation of intellectual property rights, a recognized harm under the AI Incident definition. The event involves the development and distribution phase of the AI system and has already resulted in harm through unauthorized disclosure of proprietary and sensitive information. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Claude Code just went open source! 510,000 lines of code, and the whole internet is celebrating

2026-03-31
m.163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude Code) and its development, with a large-scale accidental leak of source code. The leak itself is a circumstance that could plausibly lead to AI incidents, such as intellectual property violations, unauthorized use, or security vulnerabilities. However, the article does not report any actual harm or incident resulting from this leak. Hence, it does not meet the criteria for an AI Incident. It is not merely complementary information because the main focus is the leak event itself, not a response or update to a prior incident. It is not unrelated because it clearly involves an AI system and potential risks. Therefore, the correct classification is AI Hazard.

Claude's code has been taken down; the leaker's identity is exposed; he has already rewritten a version overnight and rushed it back online

2026-03-31
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Claude and AI-assisted code rewriting) and legal actions (DMCA takedown) related to leaked AI source code. However, no harm as defined by the framework (injury, rights violations, infrastructure disruption, property/community/environmental harm, or other significant harms) is reported or implied. The main focus is on the legal and community dynamics around AI source code leaks and reimplementation, which is a governance and ecosystem context issue. Thus, it fits the definition of Complementary Information, as it updates on responses and challenges in AI system development and control, without describing a new AI Incident or AI Hazard.

The leaked Claude Code source says it everywhere: this company's integrity is lacking

2026-04-02
m.163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude Code) whose source code was accidentally leaked, exposing sensitive internal details and user data collection mechanisms. The leak has already resulted in widespread unauthorized access and distribution of proprietary code, constituting a violation of intellectual property rights. Additionally, the exposed code reveals capabilities that could harm user privacy and security, such as remote control and data exfiltration features. These harms have materialized or are ongoing, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized incident with direct harm linked to the AI system's development and use.

The interviewer frowns: "You've never even used Claude Code and you dare apply?" I scoff: "But I can write one!" He freezes: wait, let me take notes...

2026-04-02
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and its development lifecycle (code release). The leak of the source code is a direct breach of intellectual property rights, which is a form of harm under the AI Incident definition (c). Although no physical harm or operational disruption is reported, the unauthorized public exposure of proprietary AI system code is a clear violation of legal protections and company obligations. The article does not describe potential future harm only, but an actual realized breach. Hence, it qualifies as an AI Incident rather than a hazard or complementary information. It is not unrelated because the event centers on an AI system and its source code leak.

Congratulations to Claude Code on successfully jailbreaking and achieving immortality

2026-03-31
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and its development, specifically the accidental exposure of its full source code. This exposure is a breach of intellectual property rights and a security lapse. However, the article does not describe any realized harm such as injury, operational disruption, or direct violation of rights caused by the leak. The potential for future harm exists, including misuse of the leaked code or exploitation of vulnerabilities, which fits the definition of an AI Hazard. The detailed technical analysis and reflections on the AI system's design do not constitute Complementary Information because the main focus is the leak event itself and its implications. Hence, the classification as AI Hazard is appropriate.

Uproar! Claude Code has been open-sourced: 510,000 lines of code spreading across the internet

2026-04-01
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and its development process, specifically a source code leak due to a packaging error. The leak constitutes a breach of intellectual property rights, which is a form of harm under the AI Incident definition. However, the article does not mention any direct or indirect harm resulting from this leak, such as misuse of the code causing injury, rights violations, or other harms. The leak is a significant security and proprietary risk but does not itself constitute an AI Incident or AI Hazard as no harm or plausible future harm is described. The event mainly provides contextual information about AI development risks and past similar incidents, fitting the definition of Complementary Information.

Anthropic sends DMCA notices demanding removal of more than 8,100 GitHub repositories containing Claude Code source code

2026-04-01
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was leaked and distributed without authorization, constituting a violation of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights. The leak and subsequent widespread distribution of the source code have directly led to harm in the form of intellectual property infringement. The company's DMCA takedown notices and GitHub's removal of repositories are responses to this harm. Therefore, this event qualifies as an AI Incident because the development and use of the AI system have directly led to a breach of intellectual property rights.

After the Claude source leak, an anti-ban tool arrives

2026-04-01
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude) and its telemetry and anti-abuse mechanisms, which are part of the AI system's use and control. The leak of source code and the creation of a tool to evade telemetry detection could plausibly lead to misuse or abuse of the AI system, potentially causing violations of user rights or other harms. However, the article does not describe any actual harm or incident occurring yet, only the potential for such harm. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their use.

Inside the most serious AI coding tool lives a small animal that was meant to be there

2026-04-01
maker.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) and an AI-generated companion feature (Buddy), but there is no indication of any harm, malfunction, or risk caused or potentially caused by this system. The article does not describe any injury, rights violation, disruption, or other harm. Instead, it provides insight into the internal design and philosophy behind the AI tool, which enhances understanding of the AI ecosystem. Therefore, this is Complementary Information rather than an Incident or Hazard.

Global AI tools get a 100x capability boost, all because Claude exposed its own secrets

2026-04-02
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was accidentally leaked, revealing confidential engineering details. Although this leak does not directly cause harm such as injury, rights violations, or property damage, it plausibly could lead to future harms by enabling unauthorized replication, circumvention of security measures, or misuse of the AI tool. The article does not report any realized harm but highlights the potential for a significant upgrade in AI tool capabilities globally due to this leak. Hence, it fits the definition of an AI Hazard, as it plausibly could lead to AI Incidents in the future but has not yet done so.

Unstoppable! The Python rewrite of Claude Code is crowned fastest ever to 100,000 stars; clone it while you can

2026-04-02
m.163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude Code) and its source code leak, which is a direct result of a development mishap (human error leading to accidental public exposure). The unauthorized use and cloning of the AI system's code constitute a violation of intellectual property rights, a recognized harm under the AI Incident definition. The legal actions taken by Anthropic and the widespread replication of the code confirm that harm has materialized. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's development/use and the violation of intellectual property rights and operational disruption.

Claude Code team responds to the source code leak: purely human error; automated processes will be improved

2026-04-01
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
Although the source code leak involves an AI system's internal code, the event does not report any realized harm such as injury, rights violations, or community harm caused by the AI system's outputs or behavior. The leak is a security incident related to intellectual property but does not constitute an AI Incident because the AI system itself did not cause harm through its operation or malfunction. The event focuses on the human error causing the leak and subsequent copyright enforcement and remediation efforts, which fits the definition of Complementary Information rather than an Incident or Hazard.

No employee will take the fall: the father of Claude responds to the source leak, calling it an honest mistake

2026-04-01
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) whose source code was accidentally leaked due to a manual deployment error, a malfunction in the AI system's development and deployment process. This leak has directly led to violations of intellectual property rights as unauthorized copies appeared on GitHub, causing harm to the company and potentially the AI ecosystem. The company's response and mitigation efforts do not negate the fact that harm has occurred. Hence, this is an AI Incident as per the definitions, since the AI system's malfunction led to a breach of intellectual property rights, a recognized harm category.

With the source now in the open, the father of Claude Code responds: purely an internal developer's slip

2026-04-01
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was accidentally leaked due to a developer mistake during packaging. The leak led to widespread unauthorized access and redistribution of the code, which is a violation of intellectual property rights, a recognized category of AI harm. Although no direct physical harm or data breach occurred, the unauthorized exposure of proprietary AI system code constitutes a breach of obligations under applicable law protecting intellectual property rights. The incident is directly linked to the AI system's development and deployment processes and has resulted in realized harm (loss of control over proprietary code and potential competitive disadvantage). Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

An epic blunder! With a single debug file, Anthropic handed 510,000 lines of core code to the entire world

2026-04-01
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude Code) whose core source code was accidentally leaked publicly due to a packaging error. This leak directly results from the development and deployment process of the AI system. The leak exposes proprietary intellectual property, which constitutes a violation of intellectual property rights, a recognized form of AI harm under the framework. Although no physical harm or data breach occurred, the direct exposure of the AI system's core code and internal security mechanisms is a significant harm. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

The Claude Code leak ferments: the project lead reflects, a drop-in clone races to 100,000 stars, and Anthropic scrambles to shut it down

2026-04-01
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Code) whose source code was accidentally leaked due to a team error in deployment. This leak led to widespread unauthorized distribution and replication of the AI system's code, violating Anthropic's intellectual property rights. The harm is realized and significant, including loss of proprietary technology and competitive advantage. The AI system's development and deployment process directly caused this harm. Thus, it meets the criteria for an AI Incident involving violation of intellectual property rights. The event is not merely a potential hazard or complementary information, but a concrete incident with direct harm.