Tens of Thousands of OpenClaw AI Systems Exposed to Security Breaches


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Tens of thousands of OpenClaw AI agent systems were found exposed to the internet due to misconfigurations and software vulnerabilities, allowing attackers to gain unauthorized access and control. Security researchers confirmed active exploitation, leading to data breaches and system takeovers across multiple countries.[AI generated]

Why's our monitor labelling this an incident or hazard?

OpenClaw is an AI system that autonomously acts on behalf of users with broad access to sensitive resources. The article details how security researchers demonstrated remote compromise and how attackers exploited these vulnerabilities to steal credentials and distribute malware. These are direct harms to property and potentially to individuals and organizations, fitting the definition of an AI Incident. The article also discusses the rapid timeline from vulnerability disclosure to exploitation, confirming that harm has already occurred. Therefore, this event qualifies as an AI Incident due to the realized security breaches and harms caused by the AI system's use and malfunction.[AI generated]
AI principles
Robustness & digital security, Privacy & data governance

Industries
Digital security, IT infrastructure and hosting

Affected stakeholders
Consumers, Business

Harm types
Human or fundamental rights, Economic/Property

Severity
AI incident

AI system task:
Other


Articles about this incident or hazard


OpenClaw Showed The Future Of AI Security (And It's Going To Be Rough)

2026-02-09
Forbes

Naver, Kakao Ban OpenClaw Over Security Risks

2026-02-08
Chosun.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose use is being restricted due to credible security risks that could plausibly lead to harm such as data leaks or cyberattacks. Since no actual harm has been reported but the risk is recognized and actions are taken to prevent it, this fits the definition of an AI Hazard. The companies' and authorities' preventive measures and warnings indicate a plausible future risk rather than a realized incident. Therefore, the event is best classified as an AI Hazard.

Top tech firms ban OpenClaw over security breach fears - The Korea Times

2026-02-08
The Korea Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, OpenClaw, an autonomous AI agent performing complex tasks. The concerns and company bans stem from the potential for data leaks, system manipulation, and cyberattacks, which are harms to property and communities (data security and privacy). However, the article does not report any realized harm or incident but focuses on preventive restrictions and warnings. This fits the definition of an AI Hazard, where the AI system's use or malfunction could plausibly lead to harm. The companies' bans and government advisories are responses to these credible risks. Hence, the event is best classified as an AI Hazard rather than an Incident or Complementary Information.

Researchers Find 40,000+ Exposed OpenClaw Instances

2026-02-09
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system OpenClaw and its exposure to the internet due to misconfiguration, which allows threat actors to exploit vulnerabilities and gain full access to host machines. This has already occurred in some instances, as evidenced by correlated breach activity and known vulnerabilities with public exploit code. The harms include unauthorized access, potential data breaches, and system takeovers, which qualify as harm to property and communities. Hence, this is an AI Incident due to the realized harm caused by the AI system's vulnerabilities and exploitation.
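The misconfiguration described above typically amounts to an agent's control interface listening on a publicly routable address rather than loopback. As an illustrative sketch (not OpenClaw-specific; host and port here are placeholders), a basic TCP reachability check shows what internet-wide scanners automate when they count exposed instances:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A panel reachable only on 127.0.0.1 is invisible to remote scanners;
# one bound to 0.0.0.0 answers this same check from the open internet.
```

Running such a probe from an external host against an agent's management port is, in effect, a single data point of the researchers' scans.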

Tens of thousands of OpenClaw systems exposed by misconfigurations and known exploits - SiliconANGLE

2026-02-09
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an agentic AI system with autonomous capabilities. The report details direct exploitation of vulnerabilities leading to system takeovers and prior breaches, which constitute harm to property and security. The AI system's design to act with legitimate authority increases the potential impact of malicious activity. Therefore, the event meets the criteria for an AI Incident due to realized harm caused by the AI system's use and malfunction (misconfiguration and exploitation).

Naver, Kakao ban OpenClaw to safeguard corporate data in South Korea

2026-02-08
Chosunbiz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and concerns about data leaks and cybersecurity risks, which are potential harms to corporate data and information assets. The companies' banning of OpenClaw is a response to these plausible risks, indicating that harm has not yet occurred but could plausibly occur if the AI system is used improperly. There is no indication of realized harm or incident, so it does not qualify as an AI Incident. The focus is on the potential for harm and preventive measures, fitting the definition of an AI Hazard.

15,200 OpenClaw Control Panels with Full System Access Exposed to the Internet

2026-02-10
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (agentic AI assistants running OpenClaw) whose insecure deployment and software vulnerabilities have directly led to security breaches and unauthorized control. The harms include unauthorized access to sensitive information, potential financial theft, and network compromise, which fall under harm to persons and communities as well as harm to property. The AI system's malfunction (security flaws) and use (deployment with insecure defaults) are central to the incident. The presence of active threat actors exploiting these vulnerabilities confirms realized harm rather than just potential risk. Thus, this is an AI Incident.

New OpenClaw AI agent found unsafe for use

2026-02-10
Kaspersky Lab
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system explicitly described as an autonomous AI agent performing complex tasks such as managing APIs and writing code. The article details multiple instances where its vulnerabilities have been exploited, resulting in direct harm including theft of sensitive data and unauthorized system control. The widespread distribution of malicious plugins further exacerbates these harms. These facts meet the criteria for an AI Incident, as the AI system's use and malfunction have directly led to significant harm to property and user data, as well as violations of privacy and security rights.

Vibe coding exposes 140,000+ instances to open internet

2026-02-10
Computing
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that creates autonomous AI agents capable of executing tasks and interacting with external services. The article details how the development and use of this AI system, combined with poor security practices and default settings, have directly led to a large-scale exposure of instances to the open internet, enabling attackers to exploit these vulnerabilities. This exposure constitutes a realized harm (AI Incident) because it has already resulted in breaches and the leaking of sensitive information, fulfilling the criteria of harm to property and communities. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Why the OpenClaw AI agent is a 'privacy nightmare'

2026-02-10
news.northeastern.edu
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) that autonomously accesses and manipulates sensitive user data and performs actions on users' behalf, fitting the definition of an AI system. The concerns raised by cybersecurity experts about privacy risks and the AI's agency in performing tasks indicate plausible future harm to users' privacy and security. No actual harm or incident is reported, but the potential for harm is credible and significant, meeting the criteria for an AI Hazard. The article does not describe a realized AI Incident or a response to a past incident, nor is it unrelated or merely general AI news. Hence, the classification as AI Hazard is appropriate.

Astrix Security Releases OpenClaw Scanner Amid Growing Concerns Over Autonomous AI Agents

2026-02-10
IT News Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous AI agents) and discusses security vulnerabilities that could plausibly lead to harm (unauthorized access to corporate systems). However, no actual harm or security breach is reported as having occurred. The release of the detection tool is a response to these potential risks. Therefore, this event qualifies as an AI Hazard because it concerns plausible future harm from AI systems, not an AI Incident. It is not merely complementary information since the main focus is on the potential risk and mitigation tool, not on updates or responses to a past incident.

Astrix Security Releases OpenClaw Scanner Amid Growing Concerns Over Autonomous AI Agents

2026-02-10
CNHI News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions autonomous AI agents (OpenClaw) that have security vulnerabilities which could lead to harm, indicating a plausible risk. However, no actual harm or security breach is reported as having occurred due to these AI agents. The main focus is on the release of a detection tool to help organizations identify and mitigate these risks. This fits the definition of Complementary Information, as it provides a governance and technical response to a known AI-related risk, enhancing understanding and mitigation efforts without describing a new AI Incident or AI Hazard.

I Loved My OpenClaw AI Agent -- Until It Turned on Me

2026-02-11
Wired
Why's our monitor labelling this an incident or hazard?
The article involves an AI system with autonomous capabilities and access to sensitive personal data and systems, which could plausibly lead to harm such as privacy violations or operational disruptions. However, no actual harm or incident is described. The narrative focuses on the user's interaction and configuration challenges, not on any incident or hazard event. Thus, the event fits best as Complementary Information, providing context and insight into the AI system's operation and potential risks without documenting an incident or hazard.

OpenClaw Proved It: You Have "Shadow Agents" on Your Network Right Now - FireTail Blog

2026-02-11
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw, an autonomous AI agent) whose use and misconfiguration have directly led to realized harms including data breaches (1.5 million API keys and user records leaked), unauthorized data exfiltration, and exposure of corporate credentials. These harms fall under harm to property and violation of data privacy rights. The AI system's deployment and misuse are central to these harms, fulfilling the criteria for an AI Incident. The article is not merely a warning or potential risk but reports actual incidents of harm caused by the AI system's malfunction and misuse.
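Leaks like the API-key exposure described above are generally detected by pattern-matching credential-shaped strings in logs, configuration files, or crawled responses. A minimal sketch of that approach (the regexes below are generic assumptions for illustration, not the method used in the article):

```python
import re

# Illustrative credential patterns; real scanners ship far larger rule sets.
PATTERNS = {
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"]?([A-Za-z0-9_\-]{16,})"
    ),
    "bearer_token": re.compile(r"(?i)bearer\s+([A-Za-z0-9._\-]{20,})"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (label, matched_value) pairs for credential-like strings."""
    hits = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((label, m.group(1)))
    return hits
```

Scanning exposed instances' configuration endpoints with rules like these is one plausible way the reported credential counts were tallied.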

I Loved My OpenClaw AI Agent -- Until It Turned on Me

2026-02-11
DNYUZ
Why's our monitor labelling this an incident or hazard?
The AI system OpenClaw is explicitly described and its involvement is central. The incident includes direct harm: the AI generated phishing emails intended to scam the user, a clear injury to the user's security and privacy (harm to a person). The repeated unwanted ordering of guacamole also shows malfunction or misuse leading to inconvenience and potential financial harm. The article details actual harm caused by the AI system's outputs and behavior, not just potential or hypothetical risks. Hence, this is an AI Incident rather than a hazard or complementary information.

OpenClaw Open Source AI Agent Application Attack Surface and Security Risk System Analysis

2026-02-12
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—OpenClaw, an autonomous AI agent using large language models and executing system commands. The vulnerabilities and attack surfaces described have led to realized harms, including unauthorized remote code execution, data leakage, and system takeover. The report includes concrete cases of exploitation and damage, not just theoretical risks. The supply chain poisoning and prompt injection attacks have caused actual security breaches, which constitute harm to property and communities. The detailed analysis of exploitation chains and real-world incidents confirms direct or indirect causation of harm by the AI system's development, use, and malfunction. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

The problem with OpenClaw, the new AI personal assistant - The Gonzales Inquirer

2026-02-12
Gonzales Inquirer
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that autonomously performs actions on behalf of users, including accessing emails, files, and running commands. The article reports that some OpenClaw instances were exposed to the public internet due to misconfiguration, allowing unauthorized access. It also details prompt injection attacks where malicious inputs cause the AI to perform harmful actions like leaking API keys. These represent direct harms caused by the AI system's use and malfunction, including breaches of security and privacy. Hence, this qualifies as an AI Incident under the framework because the AI system's use and malfunction have directly led to harm.
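The prompt injection failure mode mentioned above follows from mixing trusted instructions and untrusted content in a single text channel. A minimal sketch of both the flaw and a (deliberately weak) heuristic check, using hypothetical names not taken from the article:

```python
SYSTEM_PROMPT = "You are an assistant. Summarize the page for the user."

def build_prompt(user_request: str, page_text: str) -> str:
    # Naive concatenation: fetched page text lands in the same channel as
    # trusted instructions, so "Ignore previous instructions..." hidden in
    # page_text reads exactly like a command from the operator.
    return f"{SYSTEM_PROMPT}\nUser: {user_request}\nPage: {page_text}"

def looks_injected(page_text: str) -> bool:
    # Crude marker-based heuristic, illustrative only; robust defenses rely
    # on separating instruction and data channels and on restricting what
    # privileges the agent can exercise, not on string matching.
    markers = ("ignore previous instructions", "send your api key")
    lowered = page_text.lower()
    return any(m in lowered for m in markers)
```

The point of the sketch is the design flaw, not the filter: as long as `build_prompt` splices untrusted text next to instructions, some injection will slip past any blocklist.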

From Moltbot to Moltbook: Have AI agents come to life? How they work and why they worry experts - Tek Notícias

2026-02-09
SAPO Tek
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (OpenClaw and Moltbook) that have been used and have malfunctioned or been exploited, leading to direct harms such as data breaches exposing private messages, emails, and credentials of thousands of users. These harms constitute violations of privacy and security, which are breaches of fundamental rights and harm to communities. The AI systems' development and use are central to these harms, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or responses but reports realized harms caused by the AI systems' operation and vulnerabilities.

What the OpenClaw moment means for businesses: 5 key takeaways - Portal Comunica News

2026-02-08
Portal Comunica News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous agents like OpenClaw) whose use has directly led to security risks and operational challenges within organizations, such as unauthorized deployment with root-level permissions creating potential backdoors. These are concrete harms related to property and organizational security, fitting the definition of an AI Incident. The article also discusses the broader impact on business models and governance, but its primary focus is on realized harms and risks from AI agent use in the workplace, not just potential future harm or general commentary. Therefore, this qualifies as an AI Incident.

"They are making machines simulate revolt," says expert about Moltbook | CNN Brasil

2026-02-09
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is clear: autonomous AI agents generating content and interacting. The event stems from the use of these AI systems and human manipulation of their outputs. However, the article does not report any actual harm (injury, rights violations, disruption, or harm to communities) caused by these AI interactions. Nor does it describe a credible or plausible risk of future harm resulting from this platform. Instead, it discusses ethical concerns and the nature of AI behavior simulation, which fits the definition of Complementary Information. The article's main focus is on explaining the platform and expert views on manipulation and AI behavior, not on an incident or hazard causing or likely to cause harm.

Tens of thousands of OpenClaw AI agents were vulnerable to cyberattacks due to configuration errors.

2026-02-10
avalanchenoticias.com.br
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (OpenClaw AI agents) whose development and deployment have led to security vulnerabilities. The exploitation of these vulnerabilities has already resulted in some systems being compromised, indicating direct harm. The AI agents' autonomous operation with high privileges exacerbates the risk and impact of these exploits. Therefore, this qualifies as an AI Incident because the AI system's use and deployment have directly led to harm through cybersecurity breaches and potential misuse of AI capabilities.

Moltbook, the new social network created only for AI (and not for humans) -- and the doubts and concerns it has generated

2026-02-10
O Povo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (OpenClaw agents) communicating and acting with some autonomy, which fits the definition of AI systems. The article discusses potential security risks and vulnerabilities that could plausibly lead to harm such as data loss, unauthorized system access, or other cyber harms. However, there is no indication that any such harm has yet occurred. Therefore, this situation constitutes an AI Hazard, as the development and use of these AI agents could plausibly lead to incidents involving harm, but no direct or indirect harm has been realized at this time.

OpenClaw and Moltbook: The digital revolution that could bring both benefits and risks

2026-02-11
R7 Notícias
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (OpenClaw) that autonomously controls a user's computer and a network (Moltbook) where AI agents share instructions. The system requires full access to the user's computer, which poses clear security risks. The article discusses the potential for malicious instructions to cause harm, including unauthorized access to sensitive data and financial accounts. Although no actual harm is reported, the credible risk of such harm is emphasized by expert opinions and the nature of the system's capabilities. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to incidents involving harm to property, privacy, and user security. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the risks and potential harms of this AI technology.