Critical RCE Flaw in Ollama AI Platform

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Security researchers at Wiz disclosed a critical remote code execution vulnerability (CVE-2024-37032, dubbed "Probllama") in Ollama, an open-source platform for deploying and running AI models. The flaw let attackers send crafted HTTP requests to the server's API and take over Docker-based instances, the underlying servers, and the models hosted on them. Ollama patched the issue in v0.1.34, but more than 1,000 internet-exposed instances remain unpatched.[AI generated]
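Since the fix shipped in v0.1.34, operators can triage their own exposed deployments by comparing the server's reported version against that threshold. The sketch below assumes Ollama's `GET /api/version` endpoint, which returns JSON of the form `{"version": "0.1.33"}`; the host and port in the usage note are illustrative, not taken from the articles above.

```python
"""Minimal sketch: flag an Ollama server still running a release older than
the patched v0.1.34 (CVE-2024-37032, "Probllama").

Assumption: the server exposes Ollama's GET /api/version endpoint.
"""
import json
import urllib.request

PATCHED = (0, 1, 34)  # fix shipped in v0.1.34 per the disclosure


def parse_version(v: str) -> tuple:
    # "0.1.33" -> (0, 1, 33); drop any pre-release suffix such as "-rc1"
    return tuple(int(p.split("-")[0]) for p in v.split("."))


def is_vulnerable(base_url: str) -> bool:
    # Query the version endpoint and compare against the patched release.
    with urllib.request.urlopen(f"{base_url}/api/version", timeout=5) as r:
        version = json.load(r)["version"]
    return parse_version(version) < PATCHED
```

Calling `is_vulnerable("http://127.0.0.1:11434")` against an instance you are authorized to test returns `True` when the server predates the patched release; anything older than 0.1.34 should be updated or taken off the public internet.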

Why's our monitor labelling this an incident or hazard?

Ollama is an AI system for running LLMs, and the vulnerability allows remote code execution via its API, a direct malfunction of the AI system's software. Exploitation can hijack the environment hosting the AI system, constituting harm to property and potentially to critical infrastructure. The article reports that over 1,000 vulnerable instances remain exposed, indicating ongoing risk. Because the security flaw and its exploitation arise directly from the AI system's development and use, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Robustness & digital security; Safety; Privacy & data governance; Accountability; Respect of human rights

Industries
IT infrastructure and hosting; Digital security

Affected stakeholders
Business

Harm types
Economic/Property; Reputational; Human or fundamental rights

Severity
AI incident

Business function:
ICT management and information security

AI system task:
Content generation; Interaction support/chatbots


Articles about this incident or hazard

Patch now: 'Easy-to-exploit' RCE in open source Ollama

2024-06-24
TheRegister.com
Ollama addresses remote execution flaw following Wiz discovery - SiliconANGLE

2024-06-24
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The article explicitly describes a remote code execution vulnerability in Ollama, an AI model deployment infrastructure, which is an AI system supporting AI models. The vulnerability could plausibly lead to harm by allowing attackers to compromise servers and AI applications, thus fitting the definition of an AI Hazard. No actual harm or incident is reported, only the discovery and patching of the vulnerability. Therefore, this event is best classified as an AI Hazard because it describes a credible risk of harm from the AI system's use and deployment environment that was mitigated before causing an incident.
Ollama Critical Flaw Affects 1,000 Vulnerable Instances - TechNadu

2024-06-25
TechNadu
Why's our monitor labelling this an incident or hazard?
The Ollama server is an AI system enabling inference with large language models. The disclosed vulnerability stems directly from the AI system's software and lets attackers execute arbitrary code remotely, compromising the hosting environment. This constitutes harm to property and potentially to critical infrastructure if exploited. With around 1,000 instances still exposed, the risk is realized rather than merely potential, and the AI system is pivotal to the attack vector. This qualifies as an AI Incident due to the direct link between the AI system's malfunction and the harm.
Ollama AI Platform Flaw Let Attackers Execute Remote Code

2024-06-26
GBHackers On Security
Why's our monitor labelling this an incident or hazard?
The Ollama platform is an AI system used for AI model deployment. The described vulnerability (CVE-2024-37032) enables attackers to execute remote code, which can lead to unauthorized access and manipulation of AI models and data. This constitutes a direct harm to property and potentially to the AI ecosystem's integrity. The event involves the use and malfunction (security flaw) of an AI system leading to harm, fitting the definition of an AI Incident. Although the vulnerability has been mitigated, the presence of many unpatched instances means the harm is ongoing or imminent, reinforcing the classification as an AI Incident rather than a hazard or complementary information.
New Ollama RCE vulnerability immediately fixed

2024-06-25
SC Media
Why's our monitor labelling this an incident or hazard?
The vulnerability involves an AI system (Ollama platform) and its exploitation could lead to significant harm including unauthorized control over AI models and servers, which fits the definition of an AI Hazard since the harm is plausible but not reported as having occurred. The article focuses on the vulnerability and its fix, with no indication that harm has yet materialized, so it is classified as an AI Hazard rather than an AI Incident.