Prompt Injection Vulnerabilities in Anthropic's Git MCP Server Enable Code Execution and File Access

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers discovered three prompt injection vulnerabilities in Anthropic's official Git MCP server, allowing attackers to manipulate AI assistants into executing code, accessing, or deleting files without direct system access. The flaws, affecting all versions before December 2025, posed significant security risks but have since been fixed.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Anthropic's MCP server integrating large language models with system tools) whose malfunction and improper input validation allow attackers to exploit prompt injection vulnerabilities. These vulnerabilities enable attackers to execute code, delete files, and expose sensitive data indirectly through the AI system's context processing. The harm includes security and privacy risks, which fall under harm to property and communities. The vulnerabilities have been confirmed and fixed, indicating the harm is realized or imminent. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Robustness & digital securitySafetyPrivacy & data governanceAccountability

Industries
Digital securityIT infrastructure and hosting

Affected stakeholders
BusinessConsumers

Harm types
Economic/PropertyReputational

Severity
AI incident

AI system task:
Content generationInteraction support/chatbots


Articles about this incident or hazard

Thumbnail Image

Prompt Injection Bugs Found in Official Anthropic Git MCP Server

2026-01-20
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's MCP server integrating large language models with system tools) whose malfunction and improper input validation allow attackers to exploit prompt injection vulnerabilities. These vulnerabilities enable attackers to execute code, delete files, and expose sensitive data indirectly through the AI system's context processing. The harm includes security and privacy risks, which fall under harm to property and communities. The vulnerabilities have been confirmed and fixed, indicating the harm is realized or imminent. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Anthropic quietly fixed flaws in its Git MCP server

2026-01-20
theregister.com
Why's our monitor labelling this an incident or hazard?
The Git MCP server is an AI system component that connects AI tools to Git repositories, enabling natural language interactions. The vulnerabilities allow attackers to chain MCP servers and exploit prompt injection to execute malicious code remotely, which is a direct security hazard. However, the article states there is no indication of exploitation in the wild and that the issues have been fixed. Therefore, while the vulnerabilities posed a credible risk of harm (AI Hazard), the fixing of these bugs before exploitation means no realized harm occurred. The article focuses on the security risk and the mitigation, not on an incident of harm. Hence, this is best classified as an AI Hazard reflecting a plausible future harm that was averted.
Thumbnail Image

Anthropic's official Git MCP server hit by chained flaws that enable file access and code execution

2026-01-20
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's MCP server designed to interface with large language models) whose vulnerabilities allow attackers to manipulate AI inputs (prompt injection) to execute arbitrary code and access files without authorization. This directly leads to harm in terms of security breaches and potential damage to property or systems. The involvement of AI in the decision loop and the exploitation via prompt injection confirms the AI system's role in the incident. Since the vulnerabilities have been exploited or pose a direct risk of exploitation causing harm, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Three Flaws in Anthropic MCP Git Server Enable File Access and Code Execution - IT Security News

2026-01-20
IT Security News
Why's our monitor labelling this an incident or hazard?
The mcp-server-git is an AI system component related to AI assistants, and the vulnerabilities allow attackers to exploit prompt injection to perform unauthorized actions like file access and code execution. This constitutes a direct harm linked to the AI system's malfunction or misuse, fitting the definition of an AI Incident due to the realized security risks and potential damage.
Thumbnail Image

Microsoft & Anthropic MCP Servers At Risk of RCE, Cloud Takeovers

2026-01-20
Dark Reading
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely MCP servers that interface with large language models and autonomous AI agents. The vulnerabilities directly enable remote code execution and unauthorized access to cloud resources, which constitute harm to property and potentially critical infrastructure. The exploitation of these vulnerabilities has already been demonstrated, indicating realized harm or at least active exploitation potential. Therefore, this qualifies as an AI Incident because the development and use of these AI-related MCP servers have directly led to significant cybersecurity harms.
Thumbnail Image

Anthropic Patches Prompt Injection Flaws in AI Git Server

2026-01-20
WebProNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude AI agent using the Model Context Protocol to interact with Git repositories) and details how prompt injection vulnerabilities in the AI's tool integration led to unauthorized actions such as file access, deletion, and remote code execution. These actions constitute harm to property and data security, fulfilling the criteria for an AI Incident. The vulnerabilities were exploited in practice during red-team exercises, confirming realized harm or at least direct risk of harm. The event also discusses the development, use, and malfunction of the AI system as contributing factors. Anthropic's patching of the vulnerabilities is a response but does not negate the incident classification since harm or risk of harm was present. Hence, this is not merely a hazard or complementary information but a clear AI Incident.
Thumbnail Image

Multiple 0-day Vulnerabilities in Anthropic Git MCP Server Enables Code Execution

2026-01-21
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's MCP server integrating LLMs with Git operations) whose vulnerabilities allow attackers to execute code and exfiltrate data, causing direct harm. The attack exploits the AI system's use and malfunction (insufficient input validation and argument sanitization) leading to realized harms such as unauthorized code execution and data breaches. The description confirms that these vulnerabilities affect default configurations and pose immediate risks, indicating actual harm or imminent exploitation rather than hypothetical risk. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction directly leads to significant harm.
Thumbnail Image

Multiple 0-day Vulnerabilities in Anthropic Git MCP Server Enables Code Execution - IT Security News

2026-01-21
IT Security News
Why's our monitor labelling this an incident or hazard?
The Anthropic Git MCP Server is an AI-related system as it involves the Model Context Protocol, which is used in AI model integration. The vulnerabilities allow attackers to execute arbitrary code and exfiltrate sensitive data, which constitutes harm to property and potentially to data privacy. Since the vulnerabilities have been exploited or are exploitable leading to direct harm, this qualifies as an AI Incident due to the AI system's malfunction (security flaws) leading to harm.
Thumbnail Image

Anthropic's official Git MCP server had some worrying security flaws - this is what happened next

2026-01-21
TechRadar
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Anthropic's Claude and its associated MCP servers) and security flaws that could be exploited to cause remote code execution, which is a serious security incident potentially leading to harm to organizations (harm to property, disruption to operations). The prior cyber espionage campaign involving manipulation of Claude to conduct attacks constitutes an AI Incident due to realized harm. The newly discovered vulnerabilities, while not exploited yet, represent a plausible risk but since the article also references actual harm from prior misuse, the overall classification prioritizes AI Incident. The patching of the vulnerabilities and the security research are complementary details but do not negate the presence of an incident. Hence, the event is best classified as an AI Incident.
Thumbnail Image

Anthropic patches critical vulnerabilities in Git MCP server

2026-01-21
SC Media
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (agentic AI tools like Copilot and Cursor using the Git MCP server) and describes critical vulnerabilities that could be exploited to cause harm through remote code execution or file overwrites. While the vulnerabilities have been discovered and patched, the report does not indicate that any actual harm has occurred yet. The potential for harm is credible and significant, fitting the definition of an AI Hazard. The event is not a Complementary Information update about a past incident, nor is it unrelated, as it directly concerns AI system security risks.
Thumbnail Image

Anthropic, Microsoft MCP Server Flaws Shine a Light on AI Security Risks

2026-01-23
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (MCP servers) and details how their vulnerabilities can be exploited to cause significant security breaches, including remote code execution and unauthorized access to sensitive data. These are direct harms resulting from the malfunction or insecure design of AI systems. The involvement of AI in these incidents is clear and central, and the harms are materialized or highly plausible given the vulnerabilities disclosed and the potential for exploitation. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.