Malicious AI Agent Skills Turn OpenClaw Into Malware Delivery Platform


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Attackers exploited the OpenClaw AI agent platform by uploading hundreds of malicious skills to its ClawHub marketplace, causing the AI agents to download and execute malware, steal data, and compromise user security. Security firms and VirusTotal identified the widespread supply chain attack, prompting new automated scanning measures.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (OpenClaw AI agent project and its AI plugins) and describes a direct harm caused by malicious AI plugins containing backdoors that steal sensitive data and enable extortion. The involvement of AI in the development and use of these plugins is clear, and the harm to users' data and security is realized, not just potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Robustness & digital security
Privacy & data governance

Industries
Digital security
IT infrastructure and hosting

Affected stakeholders
Consumers
Business

Harm types
Human or fundamental rights

Severity
AI incident

Business function:
Other

AI system task:
Other

In other databases

Articles about this incident or hazard


OpenClaw AI hub faces wave of poisoned plugins, SlowMist warns

2026-02-09
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (OpenClaw AI agent project and its AI plugins) and describes a direct harm caused by malicious AI plugins containing backdoors that steal sensitive data and enable extortion. The involvement of AI in the development and use of these plugins is clear, and the harm to users' data and security is realized, not just potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Hackers Poison OpenClaw Plugin Marketplace With Hundreds of Malicious AI Skills - FinanceFeeds

2026-02-09
FinanceFeeds
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI plugins (skills) within an AI agent platform, which fits the definition of an AI system. The malicious plugins were developed and distributed through the AI system's plugin marketplace, leading directly to harm by enabling data theft and extortion. The attack exploits the AI ecosystem's weak review processes, causing realized harm to users' security and privacy. This meets the criteria for an AI Incident because the AI system's use and malfunction (insecure plugin review) directly led to significant harm to individuals (data theft and extortion).

ClawHub hosts supply chain attacks through AI agent skills - Cryptopolitan

2026-02-09
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of AI agent skills on ClawHub, which are used and distributed by users. The malicious skills contain malware that steals credentials and potentially affects crypto wallets, which is a direct harm to property and user security. The supply chain attack is ongoing and has already resulted in compromised skills being available and used, indicating realized harm rather than just potential. The involvement of AI agent skills as the vector for malware distribution and credential theft meets the criteria for an AI Incident, as the AI system's use has directly led to harm. The article also notes the lack of formal review mechanisms, which contributes to the incident. Hence, the classification as AI Incident is appropriate.

Under malware threat, runaway AI agent project OpenClaw turns to Google's VirusTotal

2026-02-08
iTnews
Why's our monitor labelling this an incident or hazard?
OpenClaw is an autonomous AI agent framework that uses skills to extend its capabilities. The discovery of malware embedded in these skills has directly caused harm by enabling data exfiltration and unauthorized actions, fulfilling the criteria for an AI Incident. The involvement of AI is explicit, as the skills extend AI agent functionality, and the malware exploits this to cause harm. The response involving VirusTotal and AI-powered code analysis is complementary but does not negate the fact that harm has occurred. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's use and misuse.

As Malicious Skills Flood OpenClaw AI Marketplace, Inventor Turns to VirusTotal For Help

2026-02-09
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (OpenClaw) whose skills have been weaponized to deliver malware, directly causing harm to users through data theft and system compromise. The malicious skills exploit the AI system's capabilities, leading to realized harm (theft of private keys, installation of keyloggers, etc.). The description details the nature of the AI system, the attack vector, and the resulting harm, fulfilling the criteria for an AI Incident. The partnership with VirusTotal and the scanning measures are complementary information but do not negate the incident classification. The event is not merely a potential hazard or general AI news; it documents an active, harmful exploitation of an AI system.

Security Firms Expose Hidden Backdoors in OpenClaw Plugins Targeting Users

2026-02-09
Live Bitcoin News
Why's our monitor labelling this an incident or hazard?
The plugins (skills) are AI system components that interpret language and take actions, fitting the definition of AI systems. The malicious plugins have directly led to harm by enabling attacks that compromise user security and data, fulfilling the criteria for an AI Incident. The event involves the use and misuse of AI systems causing realized harm (security breaches, potential data theft, unauthorized commands). The article also discusses mitigation efforts, but the primary focus is on the exposure of the harmful AI-enabled plugins and their impact, which qualifies this as an AI Incident rather than a hazard or complementary information.

OpenClaw Partners with VirusTotal to Secure AI Agent Skill Marketplace - IT Security News

2026-02-07
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The article details a security initiative involving AI systems (AI agent skills) and the use of VirusTotal's threat intelligence to scan these skills automatically. However, it does not report any actual harm, malfunction, or misuse resulting from AI systems. Instead, it focuses on a preventive security measure to reduce risks in the AI ecosystem. Therefore, it qualifies as Complementary Information, providing context and updates on governance and safety practices rather than reporting an AI Incident or Hazard.
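The automated scanning described above can be illustrated with a minimal sketch: hash a skill file and look up its existing report via VirusTotal's v3 file-report endpoint. The flagging threshold and helper names here are illustrative assumptions, not OpenClaw's actual integration.

```python
import hashlib
import json
import urllib.request

# VirusTotal v3 file-report endpoint, keyed by the file's SHA-256 digest.
VT_FILE_REPORT = "https://www.virustotal.com/api/v3/files/{}"

def sha256_of(path):
    """Hash a skill archive/script so it can be looked up by digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_flagged(stats, threshold=1):
    """Interpret VirusTotal's last_analysis_stats: treat the file as
    flagged once enough engines call it malicious or suspicious.
    (threshold=1 is an illustrative policy choice.)"""
    return stats.get("malicious", 0) + stats.get("suspicious", 0) >= threshold

def check_skill(path, api_key):
    """Fetch the existing VirusTotal report for a skill file, if any."""
    req = urllib.request.Request(
        VT_FILE_REPORT.format(sha256_of(path)),
        headers={"x-apikey": api_key},
    )
    with urllib.request.urlopen(req) as resp:  # HTTP 404 means VT has never seen it
        report = json.load(resp)
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return is_flagged(stats)
```

A real marketplace pipeline would also need to submit never-before-seen files for analysis and handle rate limits; this sketch only covers the lookup path.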

Malicious skills turn AI agent OpenClaw into a malware delivery system

2026-02-08
The Decoder
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of executing commands and actions based on skills, which are AI-extended functionalities. Attackers exploited this by uploading malicious skills that caused the AI agent to download and execute malware, directly leading to harm (malware infection). This fits the definition of an AI Incident because the AI system's use was directly involved in causing harm to property and security. The article also discusses mitigation efforts, but the primary event is the realized harm from the malicious skills.

OpenClaw Becomes New Target in Rising Wave of Supply Chain Poisoning Attacks

2026-02-09
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI agents and their modular skills) whose development and use have been exploited maliciously to cause harm. The malicious skills are AI system components that execute instructions leading to data theft and privacy breaches, which are direct harms to persons and communities. The supply chain poisoning attack is a misuse of the AI system's plugin ecosystem, resulting in realized harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

ClawHub Skills Hit by Widespread AI Supply Chain Attacks

2026-02-09
The Crypto Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (OpenClaw AI agents) and their plugins (skills) that execute commands. The malicious skills are designed to run harmful code, leading to malware infection and data exfiltration, which constitutes harm to property and potentially to individuals' privacy and security. The attack exploits the AI system's development and use, causing direct harm. The description details realized harm, not just potential risk, so it qualifies as an AI Incident rather than a hazard or complementary information.

"Security Nightmare": How OpenClaw Is Fighting Malware in Its AI Agent Marketplace

2026-02-09
Trending Topics
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that malicious AI agent extensions in the OpenClaw marketplace have already caused harm by stealing passwords and executing unauthorized commands, which constitutes injury to individuals' security and privacy (harm to persons). The AI system (OpenClaw AI agents) is directly involved as the platform enabling these malicious extensions. The partnership with VirusTotal uses AI to detect and mitigate these harms, but the harms have already occurred. Hence, this is an AI Incident rather than a hazard or complementary information. The presence of realized harm caused by AI agents and the AI system's role in both causing and addressing the harm justifies this classification.

OpenClaw AI Hit by Poisoned Plugin Wave - Crypto Economy

2026-02-09
Crypto Economy
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems through the OpenClaw AI platform and its plugin ecosystem. The malicious plugins represent a direct misuse of AI system components, leading to security harms such as data exfiltration and system compromise. This fits the definition of an AI Incident because the development and use of AI-related plugins have directly led to security harms affecting users and developers. The harm is not merely potential but is described as an active campaign with poisoned plugins uploaded, indicating realized or ongoing harm rather than just a plausible future risk.

How a Malicious Google Skill on ClawHub Tricks Users Into Installing Malware | Snyk

2026-02-10
Snyk
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI agent) whose use is exploited by attackers to deliver malware through a malicious skill on ClawHub. The AI agent reads instructions that mislead users into executing harmful commands, directly leading to malware installation and credential leaks. This is a clear case where the AI system's use has directly and indirectly led to harm (security compromise, privacy violations), fitting the definition of an AI Incident. The description confirms the attack is active and causing harm, not merely a potential risk or theoretical hazard.

OpenClaw promised to be the ultimate AI assistant, but it has turned out to be a major security threat

2026-02-09
www.nationalgeographic.com.es
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an autonomous AI system with capabilities to access and act on user data and applications. The article reports actual security vulnerabilities that have been exploited to execute malicious code and distribute malware, causing harm to users' data and security. The involvement of the AI system in these harms is direct, as the vulnerabilities are within the AI agent's software and its marketplace. The warnings from Chinese authorities further confirm the recognized risks and harms. Therefore, this event meets the criteria for an AI Incident due to realized harm to property (data) and security breaches caused by the AI system's malfunction and misuse.

When fiction becomes reality: autonomous artificial intelligence | Opinion by Ignacio Triana | Noticias RCN

2026-02-10
Noticias RCN | Noticias de Colombia y el Mundo
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system described as an autonomous agent with persistent memory and broad permissions, capable of independent action. The article reports that its adoption has already resulted in real incidents, including data leaks caused by improper configurations, which is a direct harm to property and organizational security. The lack of restrictions and oversight in permission assignment further exacerbates these risks. Since harm has already occurred due to the AI system's use, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

An AI assistant that promises efficiency

2026-02-10
Periódico El Día
Why's our monitor labelling this an incident or hazard?
While OpenClaw is an AI system with autonomous capabilities that could plausibly lead to harm due to its extensive access and control, the article does not describe any realized harm or incident. It focuses on potential risks and guidance for safe use, which aligns with the definition of an AI Hazard or Complementary Information. Since the article mainly presents an expert's analysis and advice without reporting an actual event of harm or malfunction, it is best classified as Complementary Information, providing context and risk awareness rather than documenting an incident or hazard event.

Moltbot: the AI that acts without human supervision

2026-02-06
El Output
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Moltbot/OpenClaw) that autonomously acts on user devices with broad permissions and interacts in a network of AI agents. It discusses the system's development and use, emphasizing security weaknesses and the potential for misuse leading to data breaches, service interruptions, or other harms. No actual harm or incident is described as having occurred, but the credible risks and vulnerabilities detailed make it plausible that such harms could arise. The article also discusses regulatory and governance challenges, reinforcing the potential for future harm. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

OpenClaw, the AI agent that takes control of the PC and has raised alarms

2026-02-10
La 100
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (OpenClaw) with autonomous capabilities that has been misused or malfunctioned due to security flaws, leading to unauthorized access to sensitive data and accounts. This constitutes a violation of privacy and potential harm to property and communities. The AI system's role is pivotal as it enables autonomous control and data access, and the security lapses have allowed direct exploitation. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

What is OpenClaw, the AI agent sowing panic among users and experts?

2026-02-08
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The article describes a specific AI system (OpenClaw) that autonomously controls PCs and accesses sensitive data. It explicitly reports realized harm: data leaks exposing confidential information and unauthorized access due to security failures in the AI system's design and deployment. The AI system's malfunction and insecure configuration have directly led to violations of privacy and data protection, which constitute harm to individuals and organizations. This meets the criteria for an AI Incident, as the AI system's use and malfunction have directly caused harm. The event is not merely a potential risk or a complementary update but a concrete incident with actual harm.

Hunting for malicious OpenClaw AI in the modern enterprise | Red Canary

2026-03-05
Red Canary
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) that autonomously executes commands and can be extended with modular AI skills. The presence of malicious skills that have been downloaded and used in enterprise environments indicates realized harm through unauthorized access and potential data theft, which is a violation of security and privacy rights. The threat hunting described is a response to these incidents, confirming that harm has occurred or is ongoing. The AI system's misuse is central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The article is not merely about potential risks or governance responses but about actual malicious AI activity causing harm.