OpenClaw AI Agent Faces Security Vulnerabilities and Corporate Bans


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenClaw, an open-source AI agent platform, has faced multiple security vulnerabilities, including authentication bypass and log poisoning, raising concerns about unauthorized access and malicious content injection. These risks have led major tech companies, including Meta, to ban its use over fears of privacy breaches and unpredictable AI behavior.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (OpenClaw) that autonomously executes commands and integrates with multiple services, fitting the definition of an AI system. The vulnerabilities and malicious use described have already caused realized harms such as credential theft, malware infection, and potential deep network compromises, which qualify as harm to property, communities, and violations of rights. The presence of critical vulnerabilities exploited by attackers and the spread of malicious skills demonstrate direct and indirect causation of harm. The discussion of regulatory violations further supports the classification as an AI Incident. The detailed description of realized harms and security incidents excludes classification as a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security
IT infrastructure and hosting

Affected stakeholders
Consumers
Business

Harm types
Human or fundamental rights

Severity
AI incident


Articles about this incident or hazard


Meta and Other Tech Companies Ban OpenClaw Over Cybersecurity Concerns

2026-02-17
Wired
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of autonomous actions on user computers. The bans and warnings from companies stem from concerns that its use could lead to privacy breaches and unauthorized access to sensitive data, which are harms to property and potentially to communities. Since no actual harm has occurred yet but the risk is credible and recognized by multiple organizations, this situation constitutes an AI Hazard. The article focuses on the potential risks and the proactive measures companies are taking, rather than reporting a realized incident or harm.

Key OpenClaw risks, Clawdbot, Moltbot

2026-02-16
Kaspersky Lab
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) that autonomously executes commands and integrates with multiple services, fitting the definition of an AI system. The vulnerabilities and malicious use described have already caused realized harms such as credential theft, malware infection, and potential deep network compromises, which qualify as harm to property, communities, and violations of rights. The presence of critical vulnerabilities exploited by attackers and the spread of malicious skills demonstrate direct and indirect causation of harm. The discussion of regulatory violations further supports the classification as an AI Incident. The detailed description of realized harms and security incidents excludes classification as a hazard or complementary information.

15 OpenClaw Vulnerabilities Found and Fixed

2026-02-17
TechNadu
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that enables AI agents to interact with chat platforms and execute commands within enterprise environments. The vulnerabilities discovered relate to authentication bypass and access control failures, which are part of the AI system's use and security design. These flaws could directly lead to harm by enabling unauthorized access and control over enterprise systems, constituting harm to property and potentially to communities relying on these systems. Since the vulnerabilities have been exploited or could be exploited, this qualifies as an AI Incident. The report focuses on realized security flaws and their implications rather than potential future risks alone, so it is not merely a hazard or complementary information.

Meta bans OpenClaw over security fears, but insiders say the real risk is far worse

2026-02-18
Attack of the Fanboy
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous capabilities that can control computers and perform complex tasks. The article details multiple companies banning it due to fears of unpredictable and potentially harmful behavior, including data breaches and privacy violations. No actual incidents of harm are reported, but the credible risk of such harm is emphasized by security teams and executives. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet occurred. The article focuses on the potential risks and mitigation efforts rather than reporting a realized AI Incident.

Authentication Under Fire: Why OpenClaw Needs ZTNA and AI>Secure Protection

2026-02-17
Security Boulevard
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous capabilities that currently uses a token-based authentication model with known security weaknesses. The article outlines how these weaknesses could plausibly lead to unauthorized access and security breaches, which would constitute harm. However, no actual breach or harm is reported. The discussion centers on potential risks and architectural improvements to prevent misuse, fitting the definition of an AI Hazard. It is not Complementary Information because the article is not updating or responding to a past incident but rather analyzing current risks and proposing security integration. It is not Unrelated because it clearly involves an AI system and its security implications.

SecureClaw: Dual stack open-source security plugin and skill for OpenClaw

2026-02-18
IT Security News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI agents) and addresses security concerns related to their use. However, the article discusses a security tool designed to prevent or detect risky behavior rather than reporting an actual incident or harm caused by AI. There is no mention of realized harm or direct threats, only the potential for risk that the tool aims to manage. Therefore, this is best classified as Complementary Information, as it provides context and a governance/technical response to AI-related security risks without describing a specific AI Incident or Hazard.

Critical "Log Poisoning" Vulnerability in OpenClaw AI Agent Allows Malicious Content Injection

2026-02-17
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI assistant) and a security flaw in how it processes logs that can be poisoned with malicious content. This flaw could plausibly lead to harm by manipulating the AI's reasoning and decisions, which fits the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article focuses on the vulnerability and mitigation advice rather than reporting a realized harm or incident, so it is not Complementary Information. It is clearly related to AI systems and their security, so it is not Unrelated.

Meta and Other Tech Companies Ban OpenClaw Over Cybersecurity Concerns

2026-02-17
DNYUZ
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of autonomous operation on user devices, performing complex tasks with limited direction. The article highlights multiple companies banning or restricting its use due to concerns about unpredictable behavior and potential security breaches, including access to sensitive data and cloud services. While no actual harm has been reported, the credible risk of privacy breaches and cybersecurity incidents is clear. The event involves the use of an AI system and the plausible future harm it could cause, fitting the definition of an AI Hazard. It is not an AI Incident because no realized harm has occurred yet, and it is not Complementary Information or Unrelated because the focus is on the risk posed by the AI system's use and the resulting corporate responses.

How to Use the Agent-to-Human Communication (A2H) Protocol with OpenClaw

2026-02-20
Twilio
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose autonomous use has directly led to user harm, or the credible risk of it, such as unwanted automated message replies and security risks that users say could 'ruin your life.' The lack of safeguards, combined with hallucinations, indicates malfunction and misuse risks. Although no specific physical injury or legal violation is detailed, the described harms to users' communication privacy, security, and control qualify as harm to persons or communities. Therefore, this constitutes an AI Incident due to realized harms from the AI system's use and malfunction.

OpenClaw AI is going viral. Don't install it

2026-02-20
PCWorld
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with autonomous capabilities and system-level permissions, fitting the definition of an AI system. The article details how its use and vulnerabilities have already caused or could cause harm, such as data deletion and security breaches, which qualify as harm to property and environment. The risks are not hypothetical but are presented as real and severe, indicating that harm has occurred or is highly likely. Hence, this qualifies as an AI Incident rather than a mere hazard or complementary information.

This viral AI tool is the future. Don't install it yet

2026-02-19
PCWorld
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous decision-making capabilities and extensive system access. The article does not report any realized harm or incident caused by OpenClaw but emphasizes the significant potential for harm, including data destruction and security breaches, due to its capabilities and vulnerabilities. Therefore, the event describes a credible risk of harm that could plausibly lead to an AI Incident if the system is misused or compromised. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Researchers Reveal Six New OpenClaw Vulnerabilities

2026-02-19
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw, an agentic AI assistant) whose development and use have led to security vulnerabilities that could be exploited to cause harm, such as unauthorized access to corporate systems and data theft. The presence of public exploit code and active targeting by threat actors indicates realized or ongoing harm or risk of harm. Therefore, this qualifies as an AI Incident because the AI system's malfunction and security flaws have directly or indirectly led to harm or significant risk to property and corporate environments.

OpenClaw AI creates shadow IT risks for banks

2026-02-19
American Banker
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an autonomous AI system operating with the same privileges as its human user, performing tasks that involve sensitive corporate data. The article details critical vulnerabilities and malicious plugins that have allowed attackers to take over user machines and potentially exfiltrate sensitive data. The unauthorized use of OpenClaw by employees in banks and other sectors has already led to exposure of sensitive data and increased insider threat risks. These harms are direct consequences of the AI system's use and vulnerabilities, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The article also discusses mitigation efforts but these do not eliminate the existing harms.

Netzilo AI Edge Delivers Enterprise-Grade Visibility, Sandboxing, and Governance for OpenClaw Agents

2026-02-20
AiThority
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (OpenClaw autonomous AI agents) and their security management, but it does not describe any realized harm or a specific incident caused by AI malfunction or misuse. Nor does it describe a credible imminent risk or hazard event. The article is primarily about a new product offering that enhances security and governance for AI agents, which is a complementary development in the AI ecosystem. Therefore, it fits the definition of Complementary Information, as it provides context and response measures related to AI risks without reporting an incident or hazard.

Netzilo AI Edge Delivers Enterprise-Grade Visibility, Sandboxing, and Governance for OpenClaw Agents

2026-02-19
Weekly Voice
Why's our monitor labelling this an incident or hazard?
The article primarily discusses a new AI security product and its capabilities to manage risks related to autonomous AI agents. It does not report any actual harm, malfunction, or misuse of AI systems. The content is about a governance and security solution to prevent or manage potential risks, which fits the definition of Complementary Information as it provides context and response to AI ecosystem challenges without describing a specific AI Incident or AI Hazard.

OneClaw: Discovery and Observability for the Agentic Era

2026-02-18
SentinelOne
Why's our monitor labelling this an incident or hazard?
The article centers on a security tool that enhances observability and governance of autonomous AI agents to prevent or manage risks. It does not describe any realized harm, malfunction, or misuse of AI systems leading to injury, rights violations, or other harms. Nor does it describe a specific event where harm was narrowly avoided. Instead, it discusses the potential risks inherent in widespread autonomous agent deployment and the need for oversight. This fits the definition of Complementary Information, as it provides context and a governance response to emerging AI risks without reporting a new AI Incident or AI Hazard.

Netzilo AI Edge Delivers Enterprise-Grade Visibility, Sandboxing, and Governance for OpenClaw Agents

2026-02-19
CNHI News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (OpenClaw autonomous AI agents) and their security risks, but no realized harm or incident is described. The article highlights potential vulnerabilities and the introduction of a security product to manage these risks. Therefore, it fits the definition of Complementary Information as it provides context and governance responses to AI hazards rather than reporting an AI Incident or AI Hazard itself.

Meta and Others Restrict OpenClaw While Some Startups Embrace the Controversial AI Tool

2026-02-20
Trending Topics
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an AI agent) whose use is restricted by companies due to security and data protection concerns, indicating a plausible risk of harm to critical infrastructure or data. The bans and cautious approaches reflect a recognition of potential hazards. Since no actual harm or incident is reported, but the risk is credible and recognized, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. The article is not merely general AI news or a product launch, but focuses on the risk and mitigation stance regarding OpenClaw's use.

Meta and AI companies ban employees from using OpenClaw: fears over data security

2026-02-20
Multiplayer.it
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of interacting with user files and performing complex tasks, indicating AI system involvement. The event concerns the use of this AI system and the potential for harm through privacy violations and unauthorized data access. While no direct harm has yet occurred, the companies' prohibitions and expert warnings indicate a credible risk that the AI system's use could plausibly lead to incidents involving data breaches or privacy harm. Therefore, this situation fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

OpenClaw scares companies: why they are starting to ban it, and what the "lethal triad" is

2026-02-20
Hardware Upgrade
Why's our monitor labelling this an incident or hazard?
OpenClaw is an autonomous AI system capable of operating within sensitive environments, accessing files, emails, and executing commands. The disclosure of critical vulnerabilities (e.g., CVE-2026-25253) enabling remote code execution and token exfiltration has directly led to security breaches and exposure of thousands of instances. The presence of malware targeting OpenClaw data and corporate bans due to privacy and security risks confirm realized harm to property and corporate environments. The AI system's malfunction and insecure design are pivotal in causing these harms, meeting the criteria for an AI Incident.

OpenClaw: why companies are banning it while OpenAI hires its creator

2026-02-20
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system acting autonomously with access to sensitive data and services. The article details actual security breaches caused by prompt injection attacks and malicious plugins that have led to remote code execution and data theft, which are direct harms to property and enterprise security. The involvement of the AI system in these harms is explicit and central. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenClaw scares Meta and others: customer information at risk, including credit card data

2026-02-20
virgilio.it
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with autonomous control capabilities over operating systems. The concerns raised by companies about privacy violations and access to sensitive customer data, including credit card information, indicate a plausible risk of harm. The article does not report any realized harm but focuses on the potential dangers and preventive measures taken by companies. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to incidents involving privacy breaches and data security violations.

Why OpenClaw is a very dangerous AI for ordinary people, and why it is being blocked by the ICT companies themselves

2026-02-22
Business online
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with agentic autonomy, performing complex tasks that affect users' digital and physical environments. The article mentions actual incidents and potential abuses linked to OpenClaw, indicating realized harms related to security and privacy. The blocking by major ICT companies is a direct consequence of these harms. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to significant harms, including threats to user security and privacy, which are harms to persons and communities. The discussion of regulatory responses and cybersecurity challenges further supports the classification as an incident rather than a mere hazard or complementary information.