Open WebUI AI Interface Vulnerability Enables Account Takeover and Remote Code Execution


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A high-severity vulnerability (CVE-2025-64496) in Open WebUI, an open-source AI interface, allowed attackers to abuse the Direct Connections feature to take over user accounts. The flaw enabled theft of authentication tokens and, in some cases, remote code execution on backend servers, risking data and system compromise.[AI generated]
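The immediate remediation for a flaw like this is upgrading to a patched release. The following is a minimal sketch of a version check, assuming a pip-based install under the package name `open-webui`; the patched version string is a placeholder to be replaced with the fixed release named in the CVE-2025-64496 advisory:

```python
# Hedged sketch: verify a locally installed open-webui package is at or
# above a patched release. The pip package name "open-webui" and
# PATCHED_VERSION are assumptions -- substitute the fixed version from
# the CVE-2025-64496 advisory.
from importlib.metadata import version, PackageNotFoundError

PATCHED_VERSION = "0.0.0"  # placeholder; see the advisory for the real value

def parse(v: str) -> tuple:
    """Naive semver parse; ignores pre-release suffixes."""
    return tuple(int(p) for p in v.split(".")[:3])

def is_patched(installed: str, patched: str = PATCHED_VERSION) -> bool:
    return parse(installed) >= parse(patched)

def check() -> None:
    try:
        installed = version("open-webui")
    except PackageNotFoundError:
        print("open-webui not installed via pip; check your deployment method")
        return
    status = "patched" if is_patched(installed) else "VULNERABLE -- upgrade"
    print(f"open-webui {installed}: {status}")
```

Docker-based deployments would instead pull the patched image tag; the comparison logic above applies either way.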

Why's our monitor labelling this an incident or hazard?

The vulnerability directly involves an AI system (Open WebUI) and its use/malfunction, leading to account takeover and remote code execution, which are clear harms to property and system security. The attack exploits AI system features and permissions, causing direct harm through unauthorized access and potential system compromise. The presence of realized harm risks and the detailed description of the exploit and its consequences meet the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Robustness & digital security, Privacy & data governance, Safety, Accountability

Industries
Digital security, IT infrastructure and hosting

Affected stakeholders
Consumers, Business

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Open WebUI account takeover flaw could lead to remote code execution

2026-01-06
SC Media

Open WebUI bug turns the 'free model' into an enterprise backdoor - geekfence.com

2026-01-06
GeekFence - Tech Insights That Matter
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Open WebUI) and a security flaw in its use that can lead to serious harm including unauthorized access and remote code execution, which can compromise data and infrastructure. These harms fall under (c) violations of rights and (d) harm to property or communities through unauthorized access and potential data breaches. Since the vulnerability has been exploited or is exploitable leading to these harms, this qualifies as an AI Incident. The description details the direct link between the AI system's malfunction (security flaw) and the harm potential, meeting the criteria for an AI Incident rather than a hazard or complementary information.

High-Severity Flaw in Open WebUI Affects AI Connections

2026-01-06
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Open WebUI) that interfaces with AI model servers, explicitly described as OpenAI-compatible, indicating AI system involvement. The vulnerability arises from the use of the AI system's feature (Direct Connections) and leads to direct harm: account takeover and exposure of sensitive user data, which constitutes harm to property and potentially to user privacy and rights. The harm has already occurred or was imminent before the patch, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a governance update but a concrete security incident involving AI system misuse and malfunction.

This WebUI vulnerability allows remote code execution - here's how to stay safe

2026-01-06
TechRadar
Why's our monitor labelling this an incident or hazard?
The vulnerability directly involves an AI system (Open WebUI for AI language models) and its misuse or malfunction leads to significant security harm, including remote code execution and account takeover. This fits the definition of an AI Incident because the AI system's malfunction has directly led to a security breach risk that can harm users and their data (harm to property and potentially to persons). Although the patch mitigates the issue, the vulnerability was active and exploitable, thus constituting an incident rather than a mere hazard or complementary information. The article also advises on mitigation, but the primary focus is on the vulnerability and its consequences, not just on the response.

CVE-2025-64496: Open WebUI Vulnerability Enables Remote Code Execution

2026-01-06
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Open WebUI) that interfaces with large language models and enables connections to external AI models. The vulnerability arises from the AI system's use and design, specifically in handling inputs from connected AI models, leading to remote code execution and account hijacking. These outcomes constitute direct harm to property and enterprise networks, fulfilling the criteria for an AI Incident. The description confirms realized harm or active exploitation potential, not just theoretical risk, and the AI system's malfunction or misuse is pivotal to the incident. Therefore, this is classified as an AI Incident.
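The class of fix implied by unsafe handling of inputs from connected model servers can be illustrated with a minimal defensive sketch: any text returned by a user-configured Direct Connection is untrusted and should be escaped before it is ever rendered as HTML. `render_model_reply` below is a hypothetical helper for illustration, not Open WebUI's actual code:

```python
import html

def render_model_reply(raw_reply: str) -> str:
    # Hypothetical helper: treat any response from a user-configured
    # model server as untrusted input. Escaping it means an embedded
    # <script> payload renders as visible text instead of executing
    # in the user's browser and stealing authentication tokens.
    return html.escape(raw_reply)
```

Real chat frontends typically layer this with a sanitizing Markdown renderer and a Content Security Policy, but output escaping is the core invariant.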

High Severity Flaw In Open WebUI Can Leak User Conversations and Data - IT Security News

2026-01-07
IT Security News
Why's our monitor labelling this an incident or hazard?
The vulnerability involves an AI system (Open WebUI) that interfaces with AI model servers and manages AI workflows. The exploitation of this flaw directly leads to unauthorized access and data leakage, which is a violation of user rights and harms the users by exposing sensitive information. Therefore, this event qualifies as an AI Incident because the AI system's use and its security flaw have directly led to harm through data breaches and privacy violations.