Critical AI System Vulnerabilities in OpenClaw and Langflow Lead to Security Risks and Exploitation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

360 Security discovered and reported a zero-day vulnerability in OpenClaw's intelligent agent gateway, confirmed by the project's founder, that allows attackers to bypass authentication and potentially crash systems. Separately, an API flaw in Langflow enabled remote code execution and was actively exploited within 20 hours of disclosure, leading to unauthorized access and data theft. Both incidents highlight urgent AI security challenges.[AI generated]

Why's our monitor labelling this an incident or hazard?

The OpenClaw Gateway is an AI-related system (smart agent gateway) whose security vulnerability allows attackers to bypass authentication and gain control, potentially causing system resource exhaustion or crashes. This constitutes a direct risk of harm to property or system integrity. Since the vulnerability is confirmed and exploitable, and the article reports on the discovery and confirmation of this high-risk flaw, it qualifies as an AI Incident due to the realized security risk and potential harm stemming from the AI system's malfunction or misuse.[AI generated]
AI principles
Robustness & digital security; Accountability

Industries
Digital security; IT infrastructure and hosting

Affected stakeholders
Business; Consumers

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

Business function:
ICT management and information security

AI system task:
Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard

[AI] OpenClaw founder confirms high-risk vulnerability discovered by 360 team; attackers can bypass authentication to gain control

2026-03-23
ET Net
Why's our monitor labelling this an incident or hazard?
The OpenClaw Gateway is an AI-related system (smart agent gateway) whose security vulnerability allows attackers to bypass authentication and gain control, potentially causing system resource exhaustion or crashes. This constitutes a direct risk of harm to property or system integrity. Since the vulnerability is confirmed and exploitable, and the article reports on the discovery and confirmation of this high-risk flaw, it qualifies as an AI Incident due to the realized security risk and potential harm stemming from the AI system's malfunction or misuse.
Intelligent agent gateway at risk! OpenClaw founder replies to confirm 360's exclusive vulnerability discovery: attackers can bypass permissions to gain control

2026-03-22
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system component (an intelligent agent gateway) and a confirmed zero-day vulnerability that allows attackers to bypass permissions and gain control, which can lead to system crashes or resource exhaustion. This constitutes harm to property and disruption of system operation, fulfilling the criteria for an AI Incident: the AI system's security flaw directly leads to harm, and the event describes a realized risk rather than a merely potential hazard. Hence, it is classified as an AI Incident.
Critical Langflow vulnerability CVE-2026-33017 draws attacks within 20 hours of disclosure

2026-03-23
net.zhiding.cn
Why's our monitor labelling this an incident or hazard?
Langflow is an AI system platform, and the vulnerability involves its API endpoint that executes attacker-controlled Python code without authentication, leading to remote code execution. The active exploitation causing unauthorized access, data theft, and potential system compromise clearly meets the definition of an AI Incident, as the AI system's malfunction and misuse have directly led to significant harm. The rapid weaponization and exploitation further confirm realized harm rather than just potential risk.
OpenClaw founder replies to confirm 360's vulnerability discovery; 360 fortifies global security defenses

2026-03-22
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) with autonomous capabilities and discusses a discovered security vulnerability that could lead to harms such as data leakage and malicious control. While these harms are serious, the article does not report that these harms have actually occurred but rather that 360 has discovered the vulnerability and is actively mitigating risks through a comprehensive security service. This fits the definition of Complementary Information, as it updates on responses to potential AI-related harms and enhances understanding of AI ecosystem safety without describing a realized AI Incident or a plausible imminent AI Hazard.
OpenClaw founder replies to confirm 360's exclusive vulnerability discovery

2026-03-22
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw intelligent agent platform) and a security vulnerability that could lead to significant harm (system resource exhaustion or crash) if exploited. The vulnerability has been confirmed by the system's original author and reported to a national security platform, indicating a real and present risk. Since the vulnerability has been discovered and confirmed but the article does not report actual exploitation causing harm yet, this constitutes a plausible risk of harm rather than a realized harm incident. Therefore, this event qualifies as an AI Hazard rather than an AI Incident. The article also includes information about mitigation and security strategies, but the main focus is on the vulnerability and its potential risks, not on a resolved incident or governance response alone.
OpenClaw founder replies to confirm 360's exclusive vulnerability discovery

2026-03-22
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
An AI system is involved here: the OpenClaw Gateway is part of an intelligent agent system (referred to as '智能体', i.e. intelligent agent) that executes tasks beyond simple dialogue, implying AI capabilities. The zero-day vulnerability in the WebSocket interface allows attackers to bypass authentication and gain control, which could lead to system crashes or resource exhaustion, constituting harm to property and system integrity. Since the vulnerability has been discovered and confirmed, and mitigation efforts are underway, the event involves a security flaw in an AI system that directly creates a significant risk of harm. It therefore qualifies as an AI Incident due to the realized security vulnerability and its potential to cause harm if exploited.
Has AI become a spy? Claude hit by a major security vulnerability; beware that AI agents can have flaws too

2026-03-22
網路資訊雜誌
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude.ai) whose malfunction or exploitation (prompt injection vulnerability) has directly led to harm by enabling unauthorized access to sensitive information, constituting a breach of security and privacy rights. The harm is realized, not just potential, as attackers have been able to extract confidential data. Therefore, this qualifies as an AI Incident under the framework, as it involves direct harm linked to the AI system's use and security flaws.