Critical OpenClaw AI Vulnerability Allows Malicious Websites to Hijack Local AI Agents

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A critical vulnerability in the OpenClaw AI agent framework, dubbed ClawJacked, allowed malicious websites to hijack locally running AI agents via WebSocket connections. Exploited in the wild, this flaw enabled attackers to gain unauthorized control, access sensitive data, and distribute malware, impacting developers and enterprises globally. The issue has since been patched. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (OpenClaw AI agents) whose design flaw and exploitation have directly led to harm in enterprise environments, including unauthorized access and control over AI agents, which can trigger actions across SaaS, cloud, and internal tools. This constitutes a violation of security and potentially human rights or organizational integrity, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as malware campaigns exploiting this flaw have been documented. Therefore, this is classified as an AI Incident. [AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security

Affected stakeholders
Workers
Business

Harm types
Economic/Property
Human or fundamental rights
Reputational

Severity
AI incident


Articles about this incident or hazard

ClawJack Allows Malicious Sites to Control Local OpenClaw AI Agents - IT Security News

2026-03-01
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) and a security vulnerability (CVE-2026-25253) that could be exploited by malicious websites to control local AI agents without user knowledge. This represents a plausible pathway to harm through unauthorized control and misuse of the AI system, which could lead to breaches of user rights, privacy violations, or other significant harms. Since the vulnerability could have been exploited but no actual incident of harm is described, this qualifies as an AI Hazard rather than an AI Incident.

ClawJacked: New OpenClaw Flaw Lets Malicious Websites Hijack Local AI Agents - Cyberwarzone

2026-03-02
Cyberwarzone
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI agents) whose design flaw and exploitation have directly led to harm in enterprise environments, including unauthorized access and control over AI agents, which can trigger actions across SaaS, cloud, and internal tools. This constitutes a violation of security and potentially human rights or organizational integrity, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as malware campaigns exploiting this flaw have been documented. Therefore, this is classified as an AI Incident.

OpenClaw 0-Click Vulnerability Allows Malicious Websites to Hijack Developer AI Agents

2026-03-01
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI agent) whose malfunction (security vulnerability) directly leads to harm by allowing attackers to hijack the AI agent and gain unauthorized access to developer systems and data. The harm includes potential breaches of privacy, unauthorized execution of commands, and data theft, which are significant harms under the framework. The involvement of the AI system is explicit, and the harm is realized, not just potential. Hence, this is classified as an AI Incident.

ClawJacked Flaw Lets Malicious Sites Hijack Local OpenClaw AI Agents via WebSocket - geekfence.com

2026-03-01
GeekFence - Tech Insights That Matter
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (OpenClaw AI agents) and describes how their vulnerabilities have been exploited or could be exploited to cause harm, including unauthorized control, data theft, manipulation of AI behavior, and malware distribution. These harms fall under violations of security and privacy rights and harm to property and communities through scams and malware. The direct exploitation of AI system vulnerabilities leading to these harms meets the definition of an AI Incident. The detailed description of actual attacks and malware campaigns confirms that harm has occurred, not just potential harm, distinguishing this from an AI Hazard or Complementary Information.

ClawJacked Flaw Allows Sites to Hijack Local OpenClaw AI via WebSocket

2026-03-01
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (OpenClaw) that has directly led to security breaches allowing attackers to hijack AI agents and access sensitive information. The attack exploits a missing rate-limiting mechanism, enabling brute-force password attacks via AI system interfaces. The harms are realized, including unauthorized control over AI agents and malware distribution through AI skill marketplaces, which constitute violations of rights and harm to property or communities. The article also mentions multiple vulnerabilities and patches, indicating ongoing issues with AI system security. Hence, the classification as an AI Incident is appropriate.

OpenClaw Patch Prevents Malicious Websites From Hijacking AI Agents | PYMNTS.com

2026-03-02
PYMNTS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw AI agents) and a high-severity vulnerability that could be exploited by malicious websites to hijack these AI agents, which could lead to significant harm such as unauthorized access to systems and credentials. Since no actual harm or incident is reported, but the vulnerability could plausibly lead to an AI Incident if exploited, this qualifies as an AI Hazard. The article also includes recommendations for mitigation and governance, but the primary focus is on the vulnerability and its potential risk rather than a response to a past incident, so it is not Complementary Information.

Critical OpenClaw Vulnerability Exposes AI Agent Risks

2026-03-02
Dark Reading
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw AI agent) whose malfunction (security vulnerability) directly enabled malicious actors to hijack the AI agent and gain control over developer devices. This constitutes a direct link to harm (security breach, theft of credentials, unauthorized control), which fits the definition of an AI Incident. The rapid patching and recommendations for layered security controls further confirm the severity and realized nature of the harm. The event is not merely a potential risk (hazard) or a general update (complementary information), but a concrete incident involving AI system malfunction leading to security harm.

ClawJacked Bug Enables Covert AI Agent Hijacking

2026-03-02
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The OpenClaw platform is an AI system managing AI agents and connected devices. The described vulnerability enables attackers to hijack these AI agents, leading to unauthorized control and data access, which is a violation of user rights and privacy (a breach of obligations under applicable law protecting fundamental rights). The harm is realized as attackers can interact with the AI agent and connected nodes, potentially causing significant harm to users. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's malfunction and exploitation.

Latest OpenClaw Flaw Can Let Malicious Websites Hijack Local AI Agents

2026-03-02
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The OpenClaw AI assistant is an AI system with autonomous capabilities and broad access to local systems. The described vulnerability (ClawJacked) enables attackers to exploit the AI system's trust assumptions and gain unauthorized control, leading to direct harm such as data breaches and unauthorized system actions. The harm is realized, not just potential, as malicious control over the AI agent can disrupt security and privacy. The event involves the AI system's use and malfunction leading to harm, fitting the definition of an AI Incident rather than a hazard or complementary information.

ClawJacked flaw exposed OpenClaw users to data theft

2026-03-02
Security Affairs
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) that runs autonomous AI agents locally and connects large language models to system resources. The vulnerability allowed attackers to exploit the AI system's local WebSocket gateway to gain admin-level control and steal data, constituting a direct harm to users' data and privacy (harm to persons/groups). This fits the definition of an AI Incident because the AI system's use and design directly led to realized harm (data theft and potential system compromise). The prompt patching and recommendations are complementary information but do not negate the incident classification.

'A human-chosen password doesn't stand a chance': OpenClaw has yet another major security flaw -- here's what we know about "ClawJacked"

2026-03-03
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI agent) whose use and security flaw directly led to a significant harm scenario: unauthorized control and data theft from users' computers. The AI system's malfunction (security vulnerability) was exploited, causing a breach of user security and privacy, which constitutes harm to individuals. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to realized harm, and the event is not merely a potential risk or a complementary update but a concrete incident with actual exploitation and harm.

Why 42,000 OpenClaw Deployments Are Running Without Security Hardening

2026-03-03
firmenpresse.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw AI agent platform) whose rapid adoption has outpaced security measures, resulting in thousands of deployments vulnerable to attacks. The presence of malicious plugins actively used in malware campaigns causing theft of credentials and cryptocurrency wallets indicates realized harm to property and users. The AI system's design and deployment practices are directly linked to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.