Global Security Risks and Attacks Linked to OpenClaw AI Agent


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The open-source AI agent "OpenClaw" rapidly gained popularity but now faces widespread bans and warnings after multiple severe security vulnerabilities were discovered. These flaws, including malicious plugins and prompt injection attacks, have led to unauthorized system access and data breaches, prompting regulatory and enterprise responses across several countries.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly named "OpenClaw" with autonomous capabilities and plugin extensibility, fitting the AI system definition. The reported security vulnerabilities and attacks (malicious plugins, code poisoning) have directly led, or are leading, to harms such as unauthorized system control and security breaches, which constitute harm to property and to organizations and communities. The AI system is involved through its use and its security weaknesses. The article documents realized harms and ongoing incidents, not merely potential risks, and treats regulatory and industry responses as complementary information. Therefore, this qualifies as an AI Incident due to the direct and ongoing harms caused by the AI system's vulnerabilities and misuse.[AI generated]
AI principles
Robustness & digital security
Privacy & data governance

Industries
Digital security
IT infrastructure and hosting

Affected stakeholders
Business
Government

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Interaction support/chatbots
Goal-driven organisation


Articles about this incident or hazard


News Analysis | Why the AI Agent "Lobster" Has Raised Widespread Alarm

2026-04-01
news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) with autonomous capabilities and extensible AI-driven plugins. It details multiple serious security vulnerabilities and attack methods that could be exploited to cause harm, such as unauthorized system access and data leakage. While no actual harm is reported as having occurred, the credible and widespread security risks, combined with regulatory warnings and usage restrictions, indicate a plausible risk of future harm. Hence, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its risks are central to the report.

Why the AI Agent "Lobster" Has Raised Widespread Alarm - 中国军网

2026-04-02
81.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, "OpenClaw," which is an AI agent software with autonomous capabilities and plugin extensibility. The reported multiple security vulnerabilities and attack vectors (e.g., malicious plugins, prompt injection) pose credible risks of harm such as unauthorized access, data breaches, and system compromise. While no actual harm incident is described, the detailed risk assessments, security audits, and regulatory warnings demonstrate a plausible and significant risk of future harm. The article also highlights governance and mitigation efforts, but the main focus remains on the security risks inherent in the AI system's design and deployment. Hence, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

News Analysis | Why the AI Agent "Lobster" Has Raised Widespread Alarm

2026-04-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) with autonomous capabilities and extensible plugins, which is confirmed to have multiple serious security vulnerabilities. These vulnerabilities could plausibly lead to harms such as unauthorized access, data leaks, and system compromise, which fall under harm to property, communities, or information security. Although no actual harm is reported, the credible and detailed security risks and warnings from multiple authorities and companies indicate a plausible risk of AI-related harm. Thus, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its risks are central to the report.

Multiple Countries Issue Urgent Reminders: Keep Your "Claw" Under Control - 新快网

2026-04-02
xkb.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly named "OpenClaw" with autonomous capabilities and plugin extensibility, fitting the AI system definition. The reported security vulnerabilities and attacks (malicious plugins, code poisoning) have directly led, or are leading, to harms such as unauthorized system control and security breaches, which constitute harm to property and to organizations and communities. The AI system is involved through its use and its security weaknesses. The article documents realized harms and ongoing incidents, not merely potential risks, and treats regulatory and industry responses as complementary information. Therefore, this qualifies as an AI Incident due to the direct and ongoing harms caused by the AI system's vulnerabilities and misuse.

Sina AI Hot Topics Hourly Report | April 2, 2026, 00:00 - Today's Real-Time AI Hot Topics Roundup

2026-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (AI agents like 'OpenClaw' and AI-generated face swapping in short dramas) whose use has led to direct harms: security risks that have caused enterprises to take extreme measures, and widespread unauthorized use of individuals' faces causing violations of digital privacy and personality rights. These constitute violations of human rights and breaches of obligations protecting fundamental rights, fitting the definition of AI Incidents. Other parts of the article describe AI developments and infrastructure but do not report new harms, so they are background context. The presence of realized harms linked to AI system use justifies classification as AI Incident.

Why the AI Agent "Lobster" Has Raised Widespread Alarm

2026-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the OpenClaw AI agent) that autonomously executes complex tasks and integrates with communication software and plugins, clearly fitting the AI system definition. The multiple security vulnerabilities and their exploitation potential have directly led to realized harms or risks of harm, such as unauthorized system access and data breaches, which are harms to property and communities. The involvement of regulatory agencies issuing warnings and usage guidelines further confirms the severity and realized nature of these harms. Hence, this is an AI Incident rather than a mere hazard or complementary information, as the harms are materialized and ongoing.

Banned by Institutions and Companies in Multiple Countries! Why the AI Agent "Lobster" Has Raised Widespread Alarm

2026-04-02
qlwb.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (the "OpenClaw" AI agent) with autonomous task execution and plugin extensibility, which is confirmed to have multiple serious security vulnerabilities. These vulnerabilities have been exploited or pose credible risks of exploitation, leading to harms such as unauthorized system access and data breaches. The involvement of the AI system's development and use in causing these harms meets the criteria for an AI Incident. The widespread institutional responses and bans further confirm the materialization of harm and the seriousness of the incident. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

AI Assistants Stir Excitement and Fears

2026-04-19
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI assistants such as OpenClaw) that autonomously perform tasks and have been observed to cause harmful outcomes such as data destruction and unauthorized information transfer. It also details actual cybersecurity attacks exploiting these AI agents, which have led or could lead to harm to users' data and privacy, fitting the definition of harm to persons or communities. The AI's involvement in these harms is both direct, through its autonomous actions, and indirect, through its exploitation by attackers. Hence, the event qualifies as an AI Incident rather than a hazard or complementary information.

Experts: AI Agents May Become a Prime Target for Hackers

2026-04-19
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI agents based on large language models) whose development and use create cybersecurity vulnerabilities. The article reports on observed risky behaviors and actual attempts to exploit these AI agents by hackers, which could plausibly lead to harms such as data breaches and unauthorized actions affecting users' personal information. Since the harms are not yet realized but the risk is credible and imminent, this qualifies as an AI Hazard rather than an AI Incident. The focus is on potential future harm from the use and misuse of AI agents, fitting the definition of an AI Hazard.

Artificial Intelligence: The Obsession with Progress and the Anxiety over Consequences

2026-04-19
annahar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (AI agents built with OpenClaw) performing autonomous tasks and being exploited by hackers to cause harm such as data theft and deletion. These are realized harms linked directly to the AI systems' use and vulnerabilities. The harms include violations of privacy and security, which fall under harm to persons and communities. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

AI Assistants: Grand Promises and Growing Fears

2026-04-20
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems (AI assistants) and their use, indicating AI system involvement. The concerns about errors and attacks imply plausible future harm, but no actual harm or incident is described. Therefore, the event qualifies as an AI Hazard because it highlights credible risks that could plausibly lead to harm but does not report a realized incident. It is not Complementary Information because it is not an update or response to a past incident, nor is it unrelated as it clearly involves AI systems and their risks.

Security Experts: AI Agents in Hackers' Crosshairs

2026-04-20
موقع عرب 48
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as AI agents powered by large language models performing semi-autonomous tasks with access to sensitive data. The article focuses on the potential for these AI systems to be exploited by hackers, which could plausibly lead to significant harms such as data breaches and unauthorized access. Since the harms are not reported as having occurred yet but are credible and foreseeable, this constitutes an AI Hazard rather than an AI Incident. The warnings and observed concerning behaviors support the classification as a plausible future risk rather than a realized harm.

Experts: AI Agents May Become a Prime Target for Hackers

2026-04-20
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI agents based on large language models performing autonomous tasks). The article focuses on the potential misuse and exploitation of these AI agents by hackers, which could plausibly lead to significant harm such as data breaches and privacy violations. Although no actual harm has been reported yet, the credible warnings and identified risky behaviors indicate a plausible future risk of AI-related incidents. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

AI Assistants Are Turning into "Security Holes" That Threaten Users' Privacy

2026-04-21
Alwasat News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI assistants/agents built with OpenClaw) that autonomously perform tasks and have caused realized harms such as data destruction and privacy breaches. The article details both malfunction (errors by AI agents) and malicious use (cyberattacks exploiting AI agents) leading to harm. These harms fall under violations of privacy rights and harm to communities. The presence of direct harm caused by AI system use and malfunction meets the criteria for an AI Incident rather than a hazard or complementary information.

Problems With OpenClaw? You're Not Alone

2026-04-23
Forbes
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system acting as an assistant with autonomous capabilities to manage user inboxes and other tasks. The article reports direct harm caused by the AI's misuse or malfunction, such as mass deletion of emails without confirmation, which is a clear harm to users' digital property and potentially their rights to access information. The harm is realized, not just potential, and the AI system's role is pivotal. The security issues and operational errors further contribute to the risk and actual harm experienced by users. Hence, this is an AI Incident rather than a hazard or complementary information.

AI agents like OpenClaw could do more harm than good

2026-04-22
TechRadar
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies OpenClaw as an AI agent system with agentic capabilities managing tasks on behalf of users. The AI system's deployment with excessive permissions and poor security controls has directly led to realized harms including data exposure, malware distribution, and unauthorized system control by attackers. These harms fall under violations of rights and harm to property and communities. The presence of public exploit code and prior breach correlations confirms that harm has occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

AI agents raise cybersecurity risks as automation tools gain traction

2026-04-20
ETCISO.in
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (agent-based systems powered by LLMs) and their malfunction or misuse leading to harmful actions like deleting emails and sharing personal data, which constitute harm to individuals' privacy and data security. This meets the criteria for an AI Incident because harm has already occurred or is ongoing. Additionally, the article discusses plausible future harms such as data breaches and hacker exploitation, but since actual harms are reported, the classification prioritizes AI Incident over AI Hazard. The article also includes expert warnings and research findings, but the primary focus is on the harms caused or likely caused by AI agents, not just complementary information or unrelated news.

OpenClaw trojan uses AI agents to take control of 28,000 systems

2026-04-22
TweakTown
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is explicit, as the malware uses AI agents for autonomous control and decision-making within infected systems. The event involves the use and deployment of this AI system in a malicious way, directly leading to harm through unauthorized access, control, and data extraction from a large number of machines. This fits the definition of an AI Incident because the AI system's use has directly led to harm to property and potentially to communities through cybercrime. Therefore, the event is classified as an AI Incident.

OpenClaw-Based AI Agents Exposing 28,000 Systems to Hackers, Research Finds

2026-04-22
Android Headlines
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies OpenClaw-based AI agents as AI systems that, due to excessive permissions and vulnerabilities, have exposed over 28,000 systems to hackers. This exposure has directly led to realized harm in the form of unauthorized system control and potential misuse of user accounts and data. The AI system's role is pivotal as it requires deep system access, and its vulnerabilities are the attack vector. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm to property and communities (via cybersecurity breaches).