Widespread Vulnerabilities in AI MCP Servers Expose Organizations to Security Risks

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers found that over 7,000 Model Context Protocol (MCP) servers, which connect AI models to external data, are misconfigured and publicly accessible. These vulnerabilities, including the 'NeighborJack' flaw, could allow attackers to hijack host machines or tamper with AI data, posing significant security and data integrity risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly, specifically MCP servers that connect AI models to external data. The misconfigurations and vulnerabilities in these AI-related servers have directly led to security compromises or the potential for such compromises, including unauthorized code execution and data manipulation. These outcomes constitute harm to property and potentially to communities relying on the AI outputs, fitting the definition of an AI Incident. The researchers' findings indicate that these issues are already present and exploitable, not merely potential future risks, thus qualifying as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
AccountabilityPrivacy & data governanceRespect of human rightsRobustness & digital securitySafety

Industries
Digital securityIT infrastructure and hosting

Affected stakeholders
Business

Harm types
Economic/PropertyReputationalHuman or fundamental rights

Severity
AI incident

Business function:
ICT management and information securityMonitoring and quality control

AI system task:
Content generationReasoning with knowledge structures/planningInteraction support/chatbots


Articles about this incident or hazard

Thumbnail Image

Hundreds of MCP Servers at Risk of RCE and Data Leaks

2025-06-26
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-related MCP servers that enable AI applications to access external data. The identified vulnerabilities (e.g., NeighborJack) and misconfigurations could plausibly lead to remote code execution and data breaches, which are harms covered under the AI Incident definition if realized. However, since no actual malicious exploitation or harm has been reported, the event is best classified as an AI Hazard. The presence of a security research analysis and recommendations for mitigation further support this classification as a potential risk rather than a realized incident.
Thumbnail Image

Misconfigured AI Servers and Weak Configurations Expose Data, Systems

2025-06-27
DataBreachToday
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, specifically MCP servers that connect AI models to external data. The misconfigurations and vulnerabilities in these AI-related servers have directly led to security compromises or the potential for such compromises, including unauthorized code execution and data manipulation. These outcomes constitute harm to property and potentially to communities relying on the AI outputs, fitting the definition of an AI Incident. The researchers' findings indicate that these issues are already present and exploitable, not merely potential future risks, thus qualifying as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

MCP servers used by developers and 'vibe coders' are riddled with vulnerabilities - here's what you need to know

2025-06-26
channelpro
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies MCP servers as AI-related systems (extensions of LLMs) that are widely used by developers and AI agents ('vibe coders'). The vulnerabilities allow malicious actors to execute arbitrary commands and take control of host machines, which is a direct harm to property and organizational security. The misuse of these AI systems has already occurred or is ongoing, as hundreds of servers are affected. The potential for prompt injection and context poisoning further indicates direct harm to AI system integrity and outputs. Thus, the event meets the criteria for an AI Incident because the AI system's use and misconfiguration have directly led to significant harms.
Thumbnail Image

Misconfigured MCP servers prevalent, analysis shows

2025-06-27
SC Media
Why's our monitor labelling this an incident or hazard?
The MCP servers are AI-related infrastructure enabling AI systems to access external data, which is critical for their operation. The vulnerabilities described could plausibly lead to AI incidents such as data tampering (context poisoning) or host hijacking, which would harm the integrity and security of AI systems and potentially cause broader harms. Since no actual harm has occurred yet but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident. The report's focus is on the potential for harm due to AI system vulnerabilities, not on realized harm or ongoing incidents.