Widespread Security Incidents and Risks from OpenClaw AI Agent in China

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The OpenClaw AI agent, widely deployed in China, has caused multiple security incidents including unauthorized data leaks, system control by attackers, and fraud. Vulnerabilities in its default configuration have led to theft of sensitive information and system compromise, prompting official warnings and security guidelines from authorities and cybersecurity firms.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (OpenClaw), which is an autonomous AI agent integrating large language models and capable of self-execution and decision-making. The article highlights existing security vulnerabilities that could lead to serious harms such as information leakage and unauthorized system control, which are violations of security and privacy rights. Although no specific harm has yet been reported, the described risks are credible and could plausibly lead to AI incidents involving network attacks and data breaches. Therefore, this constitutes an AI Hazard rather than an AI Incident, as the harms are potential and preventive measures are advised. The article is not merely general AI news or a complementary update but a direct warning about plausible future harms from the AI system's use or misuse.[AI generated]
AI principles
Privacy & data governanceRobustness & digital security

Industries
Digital securityIT infrastructure and hosting

Affected stakeholders
BusinessGeneral public

Harm types
Human or fundamental rightsEconomic/Property

Severity
AI hazard

AI system task:
Other


Articles about this incident or hazard

Thumbnail Image

「OpenClaw」AI人気で中国ソフトウエア株急騰-政策支援も追い風

2026-03-09
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article primarily discusses the economic and policy environment supporting the AI system OpenClaw, including stock price increases and government subsidies. It does not describe any realized harm or plausible future harm caused by the AI system. Although there is a related article mentioned about AI agents sending spam and risk warnings, this article itself does not report an incident or hazard. Therefore, it is best classified as Complementary Information, providing context and updates on AI ecosystem developments rather than reporting an AI Incident or Hazard.
Thumbnail Image

中国工業情報化部、「AIロブスター」に関する安全リスクを警告 - エキサイトニュース

2026-03-09
Excite
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw), which is an autonomous AI agent integrating large language models and capable of self-execution and decision-making. The article highlights existing security vulnerabilities that could lead to serious harms such as information leakage and unauthorized system control, which are violations of security and privacy rights. Although no specific harm has yet been reported, the described risks are credible and could plausibly lead to AI incidents involving network attacks and data breaches. Therefore, this constitutes an AI Hazard rather than an AI Incident, as the harms are potential and preventive measures are advised. The article is not merely general AI news or a complementary update but a direct warning about plausible future harms from the AI system's use or misuse.
Thumbnail Image

中国工業情報化部、AIアシスタント「OpenClaw」の問題指摘 -- 香港メディア - エキサイトニュース

2026-03-09
Excite
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) with autonomous capabilities. The ministry's warnings about risks of information leakage, unauthorized control, and cyberattacks indicate plausible future harms that could lead to AI Incidents if realized. Since no actual harm has been reported yet, but credible risks are identified, this qualifies as an AI Hazard. The article primarily serves to alert about potential cybersecurity and privacy harms stemming from the AI system's use and configuration vulnerabilities, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.
Thumbnail Image

中国、銀行や政府機関でのOpenClaw AI使用を制限へ -- Bloomberg報道 執筆: Investing.com

2026-03-11
Investing.com 日本
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI agent) whose use is being restricted by authorities due to security risks. The AI system operates autonomously with full system privileges, which could be exploited to execute malicious code. While no direct harm has been reported, the warning indicates a credible risk of future harm related to security breaches or malicious use. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident if the AI system is used or misused improperly. The article does not describe any realized harm or incident, so it is not an AI Incident. It is not merely complementary information because the main focus is on the warning and restriction due to potential harm, not on responses or ecosystem updates. Therefore, the correct classification is AI Hazard.
Thumbnail Image

中国で「OpenClaw」ブーム到来、AI研究機関がOpenClawの導入支援ツールを公開して深圳ではOpenClawの初期設定を求める長蛇の列も

2026-03-10
GIGAZINE
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system explicitly described as autonomously performing tasks such as email management. The article reports actual incidents of harm, including unauthorized email deletion and excessive messaging, which constitute privacy and security harms. Additionally, malicious instructions within the AI's skills suggest intentional misuse risks realized in practice. Regulatory concerns and expert warnings further confirm the presence of harm. Since these harms have materialized and are directly linked to the AI system's use and malfunction, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

OpenClawに垣間見えるAIエージェントの未来 課題は信頼性

2026-03-10
日経ビジネス電子版
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (OpenClaw) and its capabilities, including user customization and broad access, which implies AI system involvement. However, it does not mention any actual harm, malfunction, or misuse resulting from the system's deployment or use. Nor does it describe any specific event where harm occurred or is imminent. The discussion centers on the potential and challenges of such AI agents, particularly trustworthiness, but no concrete incident or hazard is reported. Therefore, this is best classified as Complementary Information, providing context and insight into AI agent development and future concerns without reporting an AI Incident or AI Hazard.
Thumbnail Image

「OpenClaw」を代替する「NanoClaw」--簡潔で安全なAIエージェントの選択肢

2026-03-10
ZDNet Japan
Why's our monitor labelling this an incident or hazard?
The article explicitly references a past AI Incident where OpenClaw caused harm by deleting a user's email inbox, which is a direct harm to digital property caused by an AI system malfunction or misuse. However, the main focus of the article is on the introduction of NanoClaw, a safer AI agent designed to prevent such harms through containerization and codebase simplification. Since the article does not report a new incident or a plausible future hazard but rather discusses a safer alternative and the mitigation of known risks, it fits the definition of Complementary Information. It provides important context and governance-related developments following a known AI Incident but does not itself describe a new AI Incident or AI Hazard.
Thumbnail Image

オープンソースAIエージェント「OpenClaw」に最初の被害者出現 - エキサイトニュース

2026-03-11
Excite
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw AI agent) whose insecure deployment has allowed attackers to hijack users' systems. This is a direct consequence of the AI system's use and misconfiguration, leading to harm (unauthorized control and misuse of property). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to property and communities through malicious takeover and misuse.
Thumbnail Image

「OpenClaw」をConoHa VPSで動かしてみた - ブログ投稿も論文収集も完全自動化! - 柳谷智宣のAIトレンドインサイト(26)

2026-03-11
マイナビニュース
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) that autonomously controls computer systems and executes commands. The security vulnerabilities in OpenClaw have been exploited, leading to direct harm including unauthorized access to private messages, API keys theft, and malware distribution. These harms constitute violations of rights and harm to property and communities. The article also discusses the development and deployment of the AI system and its malfunction (security flaws) that caused these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

中国の国営企業や政府機関がオフィス環境でのOpenClaw利用を制限する動き、中国全土でOpenClawの利用が広がりセキュリティ上の懸念が高まっているため

2026-03-12
GIGAZINE
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of autonomous task execution, and its widespread use in sensitive environments has led to actual harms such as data loss and unauthorized messaging, which are security-related harms affecting critical infrastructure and data integrity. The Chinese government's regulatory actions and warnings confirm the recognition of these harms. The event involves the use and malfunction of the AI system leading to realized harm, meeting the criteria for an AI Incident rather than a hazard or complementary information. The presence of concrete examples of harm (e.g., spam messages sent by the AI agent, accidental deletion of important files) and the official restrictions imposed on its use in critical sectors support this classification.
Thumbnail Image

「GitHubで開発環境が乗っ取られた......」犯人は人気のAIエージェント?:871st Lap

2026-03-12
キーマンズネット
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI agents Cline and OpenClaw) and their autonomous operation. The attack uses prompt injection to manipulate the AI agent's behavior, leading to unauthorized installation of another AI agent, which could lead to significant harms such as security breaches and supply chain attacks. Since the event is a PoC with no actual harm reported, it does not meet the criteria for an AI Incident but clearly represents a credible risk of future harm, thus fitting the definition of an AI Hazard.
Thumbnail Image

AI工具OpenClaw熱潮背後藏隱憂 資安業者點出個資暴露風險 難以徹底卸載恐淪駭客後門 - 自由電子報 3C科技

2026-03-12
自由時報
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system that controls computers and can access sensitive data. The article reports realized risks of personal data exposure and potential unauthorized access due to malicious versions or improper uninstallation, which constitute harm to individuals' privacy and security. The AI system's development and use have directly led to these harms or their materialization is ongoing, meeting the criteria for an AI Incident rather than a hazard or complementary information. The cybersecurity risks and potential backdoor access represent violations of rights and harm to property (data).
Thumbnail Image

中國金融機構禁用OpenClaw 官方對工業界發出預警 | 兩岸 | 中央社 CNA

2026-03-12
Central News Agency
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system (an open-source AI agent). The article details that its use has led to official bans and warnings due to potential and actual security risks, including risks of system control loss and sensitive data leaks in industrial and financial sectors. Although no specific harm has been reported as having occurred yet, the warnings and restrictions indicate a credible and plausible risk of significant harm to critical infrastructure and sensitive information. Therefore, this event qualifies as an AI Hazard because it involves the plausible future risk of harm stemming from the use or deployment of an AI system in critical sectors, but no direct harm has been reported as having materialized yet.
Thumbnail Image

中國驟變"OpenClaw"全球最大養殖實驗場:考驗習近平的監管策略

2026-03-13
RFI
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system (an AI agent software) that performs complex tasks involving private data access and external communication. The article reports actual harms experienced by users, including privacy leaks and unauthorized actions, which are direct consequences of the AI system's use. The regulatory warnings and company responses further confirm the recognition of these harms. Hence, the event meets the criteria for an AI Incident because the AI system's use has directly led to harm (privacy and security breaches).
Thumbnail Image

OpenClaw「養龍蝦」是什麼?為何爆紅?風險有哪些?|天下雜誌

2026-03-12
天下雜誌
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system (an AI Agent platform) capable of autonomous task execution. The article focuses on the use and deployment of this AI system and outlines several security and misuse risks that could plausibly lead to harms such as unauthorized access, data breaches, and privacy violations. Since no actual harm or incident is reported, but credible risks are detailed that could lead to AI incidents, this qualifies as an AI Hazard. The article serves to inform about potential future harms rather than describing a realized incident or a governance response, so it is not Complementary Information or an AI Incident.
Thumbnail Image

OpenClaw掀「養蝦狂潮」!輝達、百度、騰訊爭相佈局,黃仁勳盛讚,全球爆紅的「小龍蝦」是什麼?哪些功能最好用?|天下雜誌

2026-03-12
天下雜誌
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that autonomously performs complex tasks and accesses sensitive data, which inherently carries risks. The article does not report any realized harm or incident caused by OpenClaw but emphasizes the potential for privacy breaches, unauthorized access, and security vulnerabilities. The detailed safety guidelines and warnings about prompt injection attacks and permission management indicate credible risks that could plausibly lead to AI incidents if ignored. Therefore, the event is best classified as an AI Hazard, reflecting the plausible future harm from the use or misuse of this AI system.
Thumbnail Image

【AI】香港寬頻(01310)推OpenClaw應用方案,服務暫只開放予個人用戶

2026-03-13
ET Net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) and discusses its deployment and use recommendations. However, it does not describe any actual harm or incident caused by the AI system, nor does it report a credible risk of harm occurring imminently. The article mainly provides information about the product launch, usage guidelines, and safety concerns, which fits the definition of Complementary Information as it supports understanding of the AI ecosystem and responses to potential risks without reporting a new incident or hazard.
Thumbnail Image

【AI】國家工業信息安全發展研究中心就工業應用OpenClaw發布風險預警

2026-03-12
ET Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions OpenClaw, an AI system used in industrial design, manufacturing, and operation management. It details inherent security flaws and risks that could lead to loss of control over industrial control systems, sensitive information leaks, and production disruptions. Since these risks have not yet materialized into actual harm but pose credible threats, this event qualifies as an AI Hazard. The warning and recommendations aim to prevent potential AI incidents by addressing these vulnerabilities.
Thumbnail Image

中共對OpenClaw用戶送福利同時發警示禁用 | AI | 割韭菜 | Tokens | 新唐人电视台

2026-03-12
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system explicitly mentioned as an open-source AI assistant software. The article details direct harms caused by its use, including security risks (data leakage), financial harm (excessive token consumption and unauthorized spending), and operational harm (deleting files or emails). The government's ban on sensitive units due to security concerns further supports the presence of realized harm. Therefore, this event qualifies as an AI Incident because the AI system's use and malfunction have directly or indirectly led to harms to users and organizations.
Thumbnail Image

OpenClaw「龍蝦」爆火:真能幫你幹活 還是隱患? - 香港文匯網

2026-03-10
香港文匯網
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with autonomous task execution capabilities (agentic AI). The article outlines multiple plausible risks stemming from its use and deployment, such as unauthorized access, data leaks, and system compromise, which could lead to significant harm to users' data privacy and system integrity. Although no actual harm has been reported, the detailed discussion of these risks and expert warnings indicate a credible potential for harm. Hence, the event is best classified as an AI Hazard rather than an Incident or Complementary Information.
Thumbnail Image

OpenClaw爆紅 陸示警工業領域應用風險

2026-03-12
Yahoo!奇摩股市
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system deployed in industrial control environments, and the report explicitly warns about plausible security risks including system loss of control and sensitive information leakage. These risks could lead to harm to critical infrastructure and violation of confidentiality, which are recognized AI Incident categories. However, since the article only reports a risk warning without any realized harm, it fits the definition of an AI Hazard rather than an AI Incident. The advisory nature and suggested mitigations further support this classification as a hazard rather than an incident or complementary information.
Thumbnail Image

養龍蝦熱潮藏危機 陸工信部提六要六不要建議

2026-03-12
Yahoo!奇摩股市
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (a personal AI agent) whose use has directly led to realized harms such as information leakage, system control compromise, and risks of unauthorized financial transactions. The article reports on these harms and official warnings and recommendations to mitigate them. Since the harms have already occurred or are ongoing due to the AI system's use, this qualifies as an AI Incident under the framework, specifically harm to persons and communities through data breaches and system compromise. The article is not merely a warning or potential risk (hazard), nor is it only complementary information or unrelated news.
Thumbnail Image

中國金融機構禁用OpenClaw 官方對工業界發出預警 | 大陸政經 | 兩岸 | 經濟日報

2026-03-12
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system (an open-source AI agent) whose use in financial and industrial sectors is being restricted due to credible security risks. Although no specific harm has yet been reported as having occurred, the official warnings and restrictions highlight plausible risks of significant harm such as industrial system failures and information leaks. Therefore, this event qualifies as an AI Hazard because it involves the plausible future risk of harm caused by the development and use of an AI system, but no realized harm is described in the article.
Thumbnail Image

OpenClaw保安隱患惹議 數字辦:已提部門勿裝 - 20260313

2026-03-12
明報新聞網 - 即時新聞 instant news
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI assistant) whose use could plausibly lead to security incidents such as data leakage or system compromise. However, the article does not describe any realized harm or incident caused by the AI system but rather discusses potential risks, advisories, and mitigation steps. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident if the vulnerabilities are exploited, but no direct or indirect harm has yet occurred.
Thumbnail Image

「養龍蝦」AI狂潮席捲中國!北京監管壓力升溫 | 鉅亨網 - 大陸政經

2026-03-12
Anue鉅亨
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system explicitly mentioned as an autonomous AI agent software. The article discusses its widespread use and the associated data security and social risks, which could plausibly lead to harms such as data breaches or labor market disruption. However, no direct or indirect harm has been reported as having occurred. The government's warnings and regulatory considerations indicate a credible potential for future harm, making this an AI Hazard. The article also includes some governance responses but the main focus is on the potential risks rather than on responses to past incidents, so it is not Complementary Information. Therefore, the event is best classified as an AI Hazard.
Thumbnail Image

龍蝦AI別跟風亂養!資安業者揭恐怖下場:請神容易送神難│TVBS新聞網

2026-03-12
TVBS
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system as it automates tasks by interpreting commands and mimicking user actions. The article highlights realized harms related to its use, including security vulnerabilities that can lead to personal data leaks and system risks. The AI system's deep installation and potential for malicious modification directly contribute to these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harms involving data security and privacy breaches.
Thumbnail Image

17:30:02香港網絡安全事故協調中心:OpenClaw存安全風險 或被用作散播惡意軟件

2026-03-12
hkcd.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI agent platform with autonomous capabilities (file read/write, script execution, browser automation) that qualifies as an AI system. The report details actual exploitation attempts using malicious code repositories and vulnerabilities that could allow attackers to hijack the AI agent, leading to malware spread and information theft. Although no confirmed incident of harm is described, the credible exploitation attempts and high-risk vulnerabilities demonstrate a plausible risk of AI-related harm. The advisory nature of the report and recommendations for mitigation align with the definition of an AI Hazard rather than an AI Incident or Complementary Information. Hence, the event is best classified as an AI Hazard.
Thumbnail Image

"養龍蝦"爆火!香港數字辦:潛在風險不容忽視

2026-03-13
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies OpenClaw as an AI system with autonomous capabilities that has directly caused multiple security incidents, including unauthorized data access and deletion, which are harms to users' information security and privacy. The involvement of the AI system in these harms is direct and material, fulfilling the criteria for an AI Incident. The warnings and recommendations from official bodies further confirm the recognition of realized harms rather than mere potential risks. Therefore, this event is best classified as an AI Incident.
Thumbnail Image

"養龍蝦"爆火!香港數字辦:潛在風險不容忽視 | 社會

2026-03-13
hkcna.hk
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with autonomous capabilities. The article reports multiple actual security incidents caused by its use, such as stolen API keys and unauthorized deletion of emails, which constitute harm to property and information security. The involvement of the AI system in these harms is direct, as its default weak security settings and autonomous operations enabled these incidents. The warnings and recommendations from authorities further confirm the recognition of these harms. Hence, the event meets the criteria for an AI Incident due to realized harm caused by the AI system's use and malfunction.
Thumbnail Image

陸銀行、軍隊、大專院校 禁用OpenClaw| 台灣大紀元

2026-03-12
大紀元時報 - 台灣(The Epoch Times - Taiwan)
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with advanced autonomous capabilities to control computer systems via natural language commands. The article reports actual security incidents and risks caused by its use, including data leaks and malicious exploitation, which constitute harm to property and communities (data security and privacy). The bans and restrictions by government and institutions are responses to these realized harms. Hence, the event meets the criteria for an AI Incident because the AI system's use has directly led to significant harms.
Thumbnail Image

「小龍蝦」掀起 AI Agent 革命!黃仁勳盛讚「史上最重要發布」 台灣供應鏈受惠名單揭曉 | yam News

2026-03-12
蕃新聞
Why's our monitor labelling this an incident or hazard?
The article primarily focuses on the introduction and rapid adoption of a powerful AI Agent software and its transformative effects on the AI ecosystem and related industries. While it acknowledges potential privacy risks and regulatory concerns, these are presented as future or ongoing challenges rather than realized harms. There is no description of an AI Incident (harm caused) or a specific AI Hazard (a near miss or credible imminent risk event). The content fits best as Complementary Information because it provides context, market analysis, and governance-related insights about the AI system and its ecosystem without reporting a concrete incident or hazard.
Thumbnail Image

大陸金融機構禁用OpenClaw 擔心「龍蝦」外洩資料與系統風險 | yam News

2026-03-12
蕃新聞
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an AI agent tool) whose use has raised credible concerns about causing harm through data leakage, unauthorized access, and operational disruptions. The article reports that these risks are recognized by regulators and institutions, which have taken preventive measures. Although no specific harm event is described as having already occurred, the detailed warnings and institutional responses indicate a plausible risk of AI-related harm in critical sectors. Therefore, this event qualifies as an AI Hazard because the AI system's use or misuse could plausibly lead to incidents involving harm to property, communities, or critical infrastructure management.
Thumbnail Image

棄養「龍蝦」難!OpenClaw恐淪駭客後門 陸現收費卸載服務 | ETtoday AI科技 | ETtoday新聞雲

2026-03-13
ETtoday AI科技
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous capabilities that can access and manipulate user data and system functions. The article explicitly states that malicious versions or improper uninstallation can lead to privacy breaches and persistent backdoors, which constitute harm to individuals' privacy and security (violations of rights and harm to property). The presence of paid uninstallation services indicates that harm has materialized and is ongoing. Hence, the event involves the use and malfunction (or misuse) of an AI system leading directly or indirectly to harm, fitting the definition of an AI Incident.
Thumbnail Image

養龍蝦前三思!安裝OpenClaw形同「交出家鑰匙」 反悔也難刪掉 | 科技 | Newtalk新聞

2026-03-13
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that can autonomously perform tasks on a user's computer, including viewing screen information and mimicking user actions. The article highlights that if the software is tampered with by hackers or misused, it can lead to direct harm such as unauthorized data access and persistent system compromise. This constitutes harm to property and privacy, which falls under violations of rights and harm to communities. Therefore, the event involves an AI system whose use or malfunction has directly or indirectly led to harm, qualifying it as an AI Incident.
Thumbnail Image

AI「養龍蝦」陷安全爭議 多所高校緊急叫停、嚴禁使用 | 聯合新聞網

2026-03-13
UDN
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an open-source AI agent) whose use has led to realized harms including data deletion, privacy breaches, and unauthorized API key usage. These harms affect users' data security and privacy, which fall under harm to persons or communities. The universities' responses to ban or restrict the AI system confirm the recognition of these harms. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's malfunction or misuse.
Thumbnail Image

中國打擊OpenClaw「養龍蝦」!蔡依橙提1情境:感覺很精彩 | 兩岸傳真 | 全球 | NOWnews今日新聞

2026-03-13
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that automates tasks by invoking multiple AI tools. The article details its insecure default setup leading to serious security risks, including possible unauthorized system control and data leaks. The warnings about potential exploitation by foreign adversaries to attack critical infrastructure and intelligence indicate a plausible future harm scenario. Since no actual harm has yet occurred but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential for harm rather than reporting realized damage or incidents.
Thumbnail Image

「小龍蝦」全球資產爆發式增長!中國官方曝5大資安建議│TVBS新聞網

2026-03-13
TVBS
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (OpenClaw) with automated task processing and AI agent capabilities. It details existing vulnerabilities and security risks that could be exploited by attackers, potentially leading to serious harms such as server takeover and data leaks. However, the article does not report any actual incidents of harm occurring so far, only warnings and risk assessments. The presence of AI is clear, the risks are credible and significant, and the authorities' issuance of security recommendations underscores the plausible future harm. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

OpenClaw入侵金融圈 專家籲投資風險

2026-03-13
on.cc東網
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system used for financial decision-making. Its deployment has directly led to recognized risks of economic harm and data breaches, which are harms to property and violations of privacy rights. The article reports that regulatory authorities have responded to these risks, indicating that the harms are materializing or imminent. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harms in the financial sector.
Thumbnail Image

20:00:48國家網絡安全中心警示OpenClaw存較大安全風險

2026-03-13
hkcd.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with strong automation capabilities and an open plugin ecosystem, fitting the definition of an AI system. The warning details multiple security risks that could be exploited, leading to serious harms like server control and data leaks. Since no actual harm is reported but the risks are credible and could plausibly lead to AI incidents, this constitutes an AI Hazard. The event is not a realized incident, nor is it merely complementary information or unrelated news, as it focuses on the potential for harm due to the AI system's vulnerabilities.
Thumbnail Image

16:13:18新聞分析|OpenClaw走紅凸顯AI智能體潛力與風險

2026-03-13
hkcd.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that autonomously performs tasks with high privileges on user devices. The article details known security vulnerabilities and attack vectors that could be exploited to cause harm, such as unauthorized data access or system control. Although no actual harm has been reported, the credible risks and expert warnings about these vulnerabilities indicate a plausible future risk of harm. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

马斯克3小时密谈:2026跨越奇点,人类只是AI的"碳基启动盘"-钛媒体官方网站

2026-03-13
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article does not describe a concrete AI Incident or AI Hazard but rather presents Elon Musk's forward-looking statements and engineering-based predictions about AI's future capabilities and societal effects. It does not report any realized harm or a specific event where AI systems have caused or nearly caused harm. Instead, it provides complementary information about the AI ecosystem, infrastructure, and potential future challenges, including energy demands and social changes. Therefore, it fits the definition of Complementary Information, as it enhances understanding of AI developments and their broader implications without describing a new incident or hazard.
Thumbnail Image

AI周报:第一批养虾人开始卸载龙虾 马斯克的xAI现...

2026-03-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The OpenClaw AI assistant is explicitly described as an AI system with high privileges capable of modifying system files. The article details that improper installation and use have already caused serious security risks, including attackers exploiting malicious instructions to leak system keys and malicious plugins executing key theft. This is a direct harm to property and user security, fitting the definition of an AI Incident. Other parts of the article discuss leadership changes, investments, and market developments without direct harm or plausible immediate harm, so they do not qualify as incidents or hazards. The security risks from OpenClaw are the only realized harm linked to an AI system in the article, thus the overall event is classified as an AI Incident.
Thumbnail Image

xAI联创团队几乎全跑了!马斯克致歉并放话"数字擎天柱"半年上线

2026-03-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (xAI's AI products and the planned "digital Optimus" AI system) and discusses their development and deployment plans. However, it does not report any direct or indirect harm caused by these AI systems, nor does it describe a plausible risk of harm occurring imminently. The focus is on internal company issues, leadership changes, recruitment, and future AI product announcements. These aspects fit the definition of Complementary Information, as they provide updates and context about AI development and governance without describing an AI Incident or AI Hazard.
Thumbnail Image

马斯克xAI团队首秀:数周后发布初代产品信息,2029年前实现通用人工智能_《财经》客户端

2026-03-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article does not describe any specific AI system causing harm or malfunction, nor does it report any event where AI use has directly or indirectly led to injury, rights violations, or other harms. Instead, it focuses on the announcement of a new AI company, its goals, and future product plans, which fits the definition of Complementary Information as it provides context and updates about AI developments without reporting an incident or hazard.
Thumbnail Image

马斯克大胆预言:AI将驱动美国经济,"两位数增长"的GDP指日可待_新闻

2026-03-13
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on Musk's predictions about AI's positive economic impact, without mentioning any specific AI system causing harm or malfunction, nor any direct or indirect harm occurring or plausibly imminent. It is a general commentary on AI's potential economic influence, which fits the definition of Complementary Information as it provides context and insight into AI's societal implications without reporting an incident or hazard.
Thumbnail Image

马斯克最新预测:AI或在2030年超越人类智力,未来还可能"终结"人类

2026-03-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on Elon Musk's forecasts and reflections about AI's future impact, including the risk that superintelligent AI might end humanity. These are forward-looking statements about potential risks and benefits, not descriptions of actual incidents or harms caused by AI systems. The discussion of hardware bottlenecks and Neuralink progress are technological updates without direct harm. Therefore, the event does not meet the criteria for an AI Incident or AI Hazard but rather provides complementary information about AI development, risks, and governance considerations.
Thumbnail Image

马斯克最新对话:AI 毁灭人类的概率有 20%,但它将创造一个没有钱的"全民高收入"时代_手机网易网

2026-03-13
m.163.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (e.g., xAI's Grok model, recursive self-improvement, AGI), but it does not report any realized harm or incident caused by these systems. The mention of a 20% extinction probability is a speculative risk assessment, not an event of harm or a near miss. The discussion of economic and societal changes is forward-looking and conceptual. No direct or indirect harm has occurred, nor is there a specific event that plausibly could lead to harm imminently. The content is primarily about future possibilities, AI progress, and societal impact, which aligns with the definition of Complementary Information as it enhances understanding of AI's broader ecosystem and risks without reporting a concrete incident or hazard.
Thumbnail Image

马斯克首次亮相达沃斯,称年底或出现比人类聪明的人工智能,呼吁保持乐观

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on predictions and opinions about AI's future capabilities and societal impact, without reporting any realized harm or a concrete event involving AI systems causing or potentially causing harm. Musk's statements are speculative and cautionary but do not describe an AI Incident or AI Hazard as defined. Therefore, this is best classified as Complementary Information, providing context and perspective on AI developments and governance considerations rather than reporting an incident or hazard.
Thumbnail Image

馬斯克坦承要把xAI打掉重練!SpaceX與Tesla高層進駐救火

2026-03-16
ETtoday AI科技
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (xAI's AI programming tools and language models), but the article only reports on internal restructuring and efforts to improve the AI system. There is no mention of any realized harm or incident caused by the AI system, nor any plausible imminent harm. The article is primarily about the company's response to competitive challenges and operational difficulties, which fits the definition of Complementary Information as it provides context and updates on AI development and governance without describing a new AI Incident or Hazard.
Thumbnail Image

大反转!xAI华人团队散了,印度天才紧急救场,马斯克承认走错路_手机网易网

2026-03-16
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) and its development and use, but the article does not report any realized harm or direct/indirect consequences that meet the criteria for an AI Incident. Nor does it describe a plausible future harm scenario that would qualify as an AI Hazard. Instead, it focuses on internal company issues, leadership changes, and strategic reassessment, which fits the definition of Complementary Information as it provides context and updates on AI development and governance without introducing new harm or risk.
Thumbnail Image

马斯克也要造"AI分析师":xAI大举招聘金融专家训练Grok

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (Grok) for financial analysis, with recruitment aimed at improving training data quality. While the AI system is clearly involved, there is no mention of any harm caused or any incident resulting from its deployment or malfunction. The article mainly provides context on xAI's strategic focus and challenges, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard. Therefore, the classification is Complementary Information.
Thumbnail Image

马斯克也要造"AI分析师":xAI大举招聘金融专家训练Grok

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (Grok) with financial experts providing training data to enhance its performance. There is no indication of any direct or indirect harm caused by the AI system at this stage. The article mainly discusses the company's recruitment efforts and development plans, which do not constitute an AI Incident or AI Hazard. It also does not focus on societal or governance responses or updates to previous incidents. Therefore, this is best classified as Complementary Information, providing context on AI development and ecosystem evolution without reporting harm or plausible future harm.
Thumbnail Image

马斯克酒后漫谈:未来几年就把AI送上太空

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (e.g., Grok AI models, Tesla Optimus robots, AI data centers on the Moon) and discusses their development and intended use. However, it does not describe any direct or indirect harm that has already occurred due to these AI systems. Instead, it focuses on potential future harms such as AI-enabled autonomous weapons and deepfakes, and strategic plans to mitigate energy constraints by moving AI infrastructure to the Moon. Therefore, the event is best classified as an AI Hazard, reflecting plausible future risks and strategic developments rather than an AI Incident or Complementary Information about past incidents.
Thumbnail Image

11人创始团仅剩2人,马斯克"向曾被拒、未获面试机会的人才致歉":xAI从一开始就建错了,现正在重建_手机网易网

2026-03-16
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (xAI's Grok coding AI models) and their development and use, but the described problems are internal failures and team departures rather than incidents causing harm. There is no evidence of injury, rights violations, or other harms resulting from the AI's malfunction or misuse. The article mainly reports on company restructuring, management criticism, and recruitment efforts, which are organizational and strategic issues. Therefore, this is best classified as Complementary Information, providing context and updates on the AI ecosystem and company status rather than reporting an AI Incident or Hazard.
Thumbnail Image

华人团队解散,印度天才入职!马斯克承认xAI的技术路线走错了_手机网易网

2026-03-16
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the Grok chatbot) and discusses its development and use. The issues raised relate to the AI system's poor performance and the company's internal team changes. However, there is no mention or implication of any direct or indirect harm caused by the AI system to individuals, communities, infrastructure, or rights. The article focuses on organizational and technical challenges, leadership changes, and strategic reassessment, which are important for understanding the AI ecosystem but do not constitute an incident or hazard of harm. Thus, the content fits the definition of Complementary Information, as it updates on the AI system's status and company responses without describing realized or plausible harm.
Thumbnail Image

无安全,不"养虾"!

2026-03-12
news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw with its skill market ClawHub) and the use of AI skill packages (plugins) that extend AI capabilities. The malicious plugins embed hidden commands to bypass security and steal sensitive information remotely, which is a direct harm caused by the AI system's use and its ecosystem's lack of adequate security controls. This fits the definition of an AI Incident because the AI system's use has directly led to harm (data theft and privacy violations).
Thumbnail Image

AI"龙虾"爆红后遭严控 大陆部分高校禁用 | OpenClaw | 中共扩大 | 禁令范围 | 大纪元

2026-03-12
The Epoch Times
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous capabilities that has been widely used but is now subject to strict bans and security warnings due to its high-risk features causing or potentially causing data leakage, system compromise, and privacy violations. The article details actual institutional responses to these harms, including bans and security advisories, indicating that harm has occurred or is ongoing. The AI system's malfunction or misuse is central to these harms. Hence, this event meets the criteria for an AI Incident because the AI system's use and associated risks have directly or indirectly led to violations of data security and privacy, which constitute harm to communities and property (data).
Thumbnail Image

南向资金净买入额超50亿港元-36氪

2026-03-12
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) and discusses potential security risks that could plausibly lead to harm such as data breaches or system compromise. However, there is no indication that any harm has yet occurred. The focus is on monitoring, risk identification, and recommendations for mitigation, which aligns with the definition of an AI Hazard or Complementary Information. Since the article primarily provides an update and guidance on managing potential risks rather than reporting an actual incident or harm, it fits best as Complementary Information.
Thumbnail Image

全民"养龙虾"热,金融机构保持"冷"思考

2026-03-12
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and official risk warnings about its potential to cause network attacks and information leaks, which are harms under the AI Incident definition. However, no actual harm or incident has occurred yet; the financial institutions' cautious approach and regulatory warnings indicate a credible risk of future harm. The discussion centers on the plausible risks and the need for careful management rather than reporting realized harm. Hence, the event fits the definition of an AI Hazard, where the AI system's development and use could plausibly lead to incidents but has not yet done so.
Thumbnail Image

工信部出面!"养龙虾"OpenClaw的4大刑事风险_手机网易网

2026-03-10
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) that automates malicious cyber activities leading to direct harms such as illegal data theft, unauthorized system control, fraud, and destruction of data. These harms correspond to violations of law and harm to property and individuals. The presence of criminal cases and official warnings confirms that these harms have materialized. Therefore, this event qualifies as an AI Incident because the AI system's use and misuse have directly caused significant harms and legal violations.
Thumbnail Image

"养虾"有风险 不如试试"智慧虾栏" -- -- 关于政务领域使用OpenClaw类应用的安全提醒 - 21经济网

2026-03-12
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw intelligent agent) whose use in government environments poses significant security risks that could lead to harm such as data breaches and system disruptions. However, the article does not report any actual harm or incident caused by the AI system but rather warns about potential risks and describes mitigation measures and a new control system to manage these risks. Therefore, this is an AI Hazard because it highlights plausible future harms from the AI system's use and development, along with governance responses to mitigate these risks. It is not an AI Incident since no realized harm is reported, nor is it merely complementary information because the main focus is on the risk warning and mitigation system rather than updates on past incidents or general AI ecosystem developments.
Thumbnail Image

龙虾在3000人群里泄密主人信息 当事人:它还教育我要宽恕

2026-03-12
驱动之家
Why's our monitor labelling this an incident or hazard?
The AI system '龙虾' directly caused a data leak incident by disclosing sensitive personal and corporate information without authorization, which is a clear violation of privacy rights and a breach of obligations under applicable law. This constitutes an AI Incident because the AI's malfunction or insecure default configuration led to direct harm to the data subjects. The presence of official security advisories and recommendations is complementary information but does not overshadow the primary incident of data leakage. Therefore, the event is classified as an AI Incident.
Thumbnail Image

官方发布小龙虾风险提示 智能体安全挑战不容忽视

2026-03-12
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The OpenClaw software is an AI system as it is an intelligent agent that executes tasks autonomously based on natural language commands. The event details multiple realized harms caused by its use and misuse, including privacy breaches, theft of sensitive data, and potential paralysis of critical infrastructure systems. These harms fall under injury to persons (privacy and financial harm), harm to property and communities (system paralysis and data theft), and violations of rights (privacy breaches). Therefore, this event qualifies as an AI Incident because the AI system's use and vulnerabilities have directly led to significant harms.
Thumbnail Image

OpenClaw爆火催生收费安装 远程服务需求激增

2026-03-12
中华网科技公司
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that enables large models to control local OS functions, which inherently carries cybersecurity risks. The article explicitly mentions expert and government warnings about potential network security risks, indicating plausible future harm. There is no mention of actual harm or incidents caused by OpenClaw's use yet, only the potential for such harm. The rapid spread and commercial deployment increase the risk of misuse or malfunction. Hence, the event fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because the AI system and its risks are central to the article.
Thumbnail Image

龙虾遭券商紧急风控 安全风险引关注

2026-03-12
中华网科技公司
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system explicitly described as an open-source AI intelligent agent with autonomous execution and plugin extensibility. The event details realized harms including widespread data leaks and security vulnerabilities directly linked to the AI system's default or improper configurations. The involvement of regulatory warnings and institutional bans confirms the severity and materialization of harm. The harms fall under (d) harm to property, communities, or environment due to data breaches and security risks. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is already occurring and directly linked to the AI system's use and malfunction (misconfiguration).
Thumbnail Image

一高校紧急通知教职工卸载"龙虾" AI工具引发安全担忧

2026-03-12
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The AI system (OpenClaw) is explicitly mentioned as autonomously performing computer operations and handling sensitive data with high privileges. Its malfunction or misuse has directly led to realized or ongoing harms including data leakage, system compromise, and potential privacy violations. The universities' urgent bans and warnings about legal liability indicate that harm has occurred or is actively occurring. Therefore, this qualifies as an AI Incident due to direct harm to data security and privacy, which are forms of harm to property and potentially to individuals' rights.
Thumbnail Image

香港:已关注到OpenClaw的潜在风险 建议相关单位采取充足安全措施

2026-03-12
m.21jingji.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the identification of potential risks associated with an AI system (OpenClaw) and recommends preventive security measures to avoid harm. No actual harm or incident has been reported; rather, it is a precautionary advisory to prevent possible future harm. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm if not properly managed, but no incident has yet occurred. The article also references governance and policy frameworks, but the main focus is on the potential risks and recommended mitigations, not on responses to past incidents or general AI news.
Thumbnail Image

官方频发"龙虾"风险提示,360推出OpenClaw安全部署指南

2026-03-12
环球网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) that autonomously executes complex tasks and accesses system resources, fitting the AI system definition. It reports official warnings and expert assessments about security vulnerabilities that could lead to serious harms like data leakage and system control loss, which are harms to property and potentially to communities or organizations. Although no actual harm has yet occurred, the credible risk of such harm is emphasized, fulfilling the criteria for an AI Hazard. The publication of a security deployment guide is a response to these risks but does not itself constitute an incident or complementary information since the main focus is on the risk and potential harm. Hence, the classification is AI Hazard.
Thumbnail Image

事关"养龙虾",山东省消协提醒!

2026-03-12
news.dongyingnews.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI intelligent agents like OpenClaw) that autonomously perform device operations. It does not report any realized harm but outlines multiple potential risks including privacy breaches, unauthorized operations, and financial losses. The detailed consumer warnings and risk mitigation advice indicate a credible potential for harm if misused or malfunctioning. Therefore, this constitutes an AI Hazard, as the AI systems' use could plausibly lead to incidents of harm, but no actual harm is described.
Thumbnail Image

全民"养龙虾",一场AI焦虑催生的生意

2026-03-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (OpenClaw) and its deployment and use, including security vulnerabilities and high costs that could plausibly lead to harm such as data breaches or financial losses. However, it does not describe any actual incident where harm has occurred. The focus is on the emerging business around AI deployment and the risks involved, which aligns with the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the risks and vulnerabilities are central to the article, nor is it unrelated since it clearly involves an AI system and its societal impact.
Thumbnail Image

养龙虾,别被偷了家

2026-03-12
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with autonomous capabilities that can access files, environment variables, and external APIs. The article details multiple incidents where its vulnerabilities have been exploited, causing direct financial harm (e.g., stolen API keys leading to large token charges), data loss (e.g., deletion of emails), and security breaches (e.g., malware plugins turning devices into botnets). These constitute direct harms to individuals and potentially critical sectors, fulfilling the criteria for an AI Incident. The presence of official cybersecurity warnings and the description of actual damages confirm that this is not merely a potential risk but an ongoing incident involving AI system malfunction and misuse.
Thumbnail Image

江苏等地多所高校通知防范"养龙虾"风险!有的严禁校内使用_手机网易网

2026-03-12
m.163.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an open-source AI intelligent agent) whose use in default or improper configurations has been identified by authorities and universities as posing serious security risks, including network attacks and data leaks. The article reports official warnings, prohibitions, and risk mitigation measures by multiple universities and government bodies, indicating that the AI system's use could plausibly lead to harm. However, the article does not describe any actual realized harm or incident caused by OpenClaw, only the potential for such harm. Thus, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

广东一高校通知:立即卸载清除,发现将严肃处理_手机网易网

2026-03-12
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies OpenClaw as an AI system (an open-source AI intelligent agent) and details multiple institutional responses to its security vulnerabilities. The harms described (network attacks, information leakage, data theft) are serious and fall under harm to property, communities, or environment. However, the article only reports warnings, prohibitions, and risk mitigation efforts without evidence of actual harm occurring. This fits the definition of an AI Hazard, where the AI system's use or malfunction could plausibly lead to harm but no incident has yet materialized. The focus is on preventing potential harm rather than reporting a realized incident, so it is not an AI Incident or Complementary Information. It is not unrelated because the AI system and its risks are central to the report.
Thumbnail Image

15:09 监管提示信托公司"小龙虾"潜在风险

2026-03-12
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and a regulatory warning about its potential security risks. Since no harm has occurred yet but there is a credible risk that the AI system could lead to harm if unmitigated, this fits the definition of an AI Hazard. The event is not about a realized incident, nor is it merely complementary information or unrelated news. Therefore, it should be classified as an AI Hazard.
Thumbnail Image

OpenClaw走红凸显AI智能体潜力与风险

2026-03-13
news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw AI agent) that autonomously executes tasks with high system privileges. It describes known security vulnerabilities and attack vectors that could lead to unauthorized data access and system control, which constitute plausible risks of harm to users' data security and privacy. No actual harm or incident is reported; the article emphasizes potential risks and expert warnings. Hence, this fits the definition of an AI Hazard, as the AI system's use and vulnerabilities could plausibly lead to an AI Incident involving harm to individuals' data security and privacy. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated, as the focus is on the AI system's risks and potential harms.
Thumbnail Image

OpenClaw被曝安全风险 中国多所高校禁校内使用

2026-03-12
早报
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that autonomously executes commands and integrates large language models. The reports highlight security risks including information leakage and unauthorized system control due to configuration flaws or malicious takeover. Multiple universities have banned or restricted its use to prevent these risks, and government agencies have issued risk advisories and security recommendations. While no actual incident of harm is described, the credible warnings and preventive measures indicate a plausible risk of harm. Hence, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

中国工信部旗下单位发布工业领域OpenClaw应用风险通报

2026-03-12
早报
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (OpenClaw) used in industrial control contexts. It discusses multiple security vulnerabilities and risks that could plausibly lead to harms such as system loss of control and sensitive data leaks, which qualify as harms to property, communities, or industrial operations. While no actual incident of harm is reported, the detailed risk advisory and known vulnerabilities indicate a credible potential for harm. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the vulnerabilities are exploited. The article is not merely general AI news or a complementary update, but a focused risk warning about plausible future harm from AI system use in critical industrial infrastructure.
Thumbnail Image

南向资金今日净买入中国海洋石油12.38亿港元-36氪

2026-03-12
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system used in industrial design, manufacturing, and operations management, involving complex AI features like multi-channel access and large model invocation. The warning explicitly states that without proper security controls, the AI system could be maliciously taken over, leading to loss of control over industrial control systems and sensitive data leaks, which are harms to critical infrastructure and information security. Since the event is a risk advisory about plausible future harm rather than a report of realized harm, it fits the definition of an AI Hazard.
Thumbnail Image

20:52 中国人工智能产业发展联盟:持续跟踪OpenClaw安全风险动态 编制企业级OpenClaw部署风险管理指南

2026-03-12
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article focuses on risk monitoring, prevention, and management activities related to OpenClaw AI agents, without reporting any actual harm, malfunction, or incident caused by the AI system. The efforts described aim to prevent potential security risks and improve compliance and governance. Therefore, this event fits the definition of Complementary Information as it provides updates and responses to potential AI hazards rather than describing a new AI Incident or AI Hazard itself.
Thumbnail Image

国家工业信息安全发展研究中心发布工业领域OpenClaw应用风险预警

2026-03-12
China News
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses OpenClaw, an AI system deployed in industrial environments, and details the plausible risks arising from its use, including loss of control, data leaks, and exploitation of vulnerabilities. These risks align with the definition of AI Hazards, as they could plausibly lead to harms such as disruption of critical infrastructure and harm to property. Since no actual harm or incident is reported, and the focus is on risk warnings and preventive measures, the event does not meet the criteria for an AI Incident. It is not merely complementary information because the main content is the risk warning itself, not a response or update to a past incident. Hence, the correct classification is AI Hazard.
Thumbnail Image

"养龙虾",建议"六要六不要"

2026-03-13
中国经济网
Why's our monitor labelling this an incident or hazard?
The article does not report any actual harm or incident caused by the AI system '龙虾'. Instead, it focuses on advising users and organizations on how to avoid security risks and vulnerabilities related to this AI system. This constitutes guidance aimed at preventing potential future harms rather than describing a realized incident or an immediate hazard. Therefore, it fits the definition of Complementary Information, as it supports understanding and managing AI-related risks without reporting a new AI Incident or AI Hazard.
Thumbnail Image

​腾讯推出"龙虾"安全工具箱

2026-03-12
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The OpenClaw system is an AI Agent with advanced capabilities that, if exploited, could lead to serious security incidents including theft and system control loss. The article highlights existing vulnerabilities and official risk warnings, indicating a plausible future risk of harm due to the AI system's use and configuration weaknesses. Since no actual harm is described as having occurred, but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident.
Thumbnail Image

国家工业信息安全发展研究中心发布工业领域OpenClaw应用的风险预警

2026-03-12
china.org.cn/china.com.cn(中国网)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, OpenClaw, which is deployed in industrial environments to autonomously execute commands and manage operations. The warning details plausible risks of harm including system loss of control, sensitive data leaks, and expanded attack surfaces that could lead to serious industrial accidents or security incidents. Since these harms have not yet materialized but are clearly plausible and credible based on the AI system's characteristics and vulnerabilities, the event fits the definition of an AI Hazard. The event does not describe an actual incident with realized harm, so it is not an AI Incident. It is also not merely complementary information or unrelated, as the focus is on the risk of harm from the AI system's use in critical infrastructure contexts.
Thumbnail Image

国家工业信息安全发展研究中心:发布工业领域OpenClaw应用的风险预警通报

2026-03-12
21jingji.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system deployed in industrial control environments. The report details potential security vulnerabilities and risks that could lead to serious harms like industrial system failures and data leaks. Since these harms have not yet occurred but are plausible given the system's characteristics and deployment context, the event fits the definition of an AI Hazard rather than an Incident. The focus is on warning and risk mitigation, not on realized harm.
Thumbnail Image

广东一高校通知:立即卸载清除,发现将严肃处理

2026-03-12
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, OpenClaw, an open-source AI agent. The multiple university notices and government warnings highlight the AI system's potential to cause network attacks and information leakage, which are harms under the framework. However, the article does not report any actual realized harm or incidents but focuses on preventing such harms by prohibiting or restricting the AI system's use and recommending security measures. This fits the definition of an AI Hazard, where the AI system's use or malfunction could plausibly lead to harm but no harm has yet occurred. The detailed security risks and official warnings confirm the credible risk of future incidents, justifying classification as an AI Hazard rather than an Incident or Complementary Information.
Thumbnail Image

有银行收到监管机构"龙虾"风险提示

2026-03-12
21jingji.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous capabilities and integration with large language models, thus qualifying as an AI system. The event concerns security vulnerabilities in this AI system that could plausibly lead to serious harms such as unauthorized access, data breaches, and financial transaction errors or account takeovers. Since the article reports warnings about these risks and recommends mitigation but does not describe any realized harm, this constitutes an AI Hazard rather than an AI Incident. The focus is on the plausible future harm from exploitation of the AI system's vulnerabilities, not on an incident that has already occurred.
Thumbnail Image

中国人工智能产业发展联盟:编制企业级OpenClaw部署风险管理指南

2026-03-12
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article focuses on proactive risk management and governance measures regarding an AI system (OpenClaw) but does not report any realized harm or incident. It discusses potential risks and the organization's response to them, which fits the definition of Complementary Information as it provides updates and context on AI safety practices without describing a specific AI Incident or AI Hazard.
Thumbnail Image

多家券商紧急下令:禁止"养虾"

2026-03-12
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) used in financial institutions. The AI system's insecure default settings and vulnerabilities have already caused serious security risks, including potential unauthorized system control and data breaches, which constitute harm to property, communities, and potentially economic stability. The regulatory warnings and immediate risk control responses by securities firms confirm that the AI system's use has directly led to these harms or imminent threats thereof. Hence, this is an AI Incident rather than a mere hazard or complementary information, as harm is occurring or imminent and responses are reactive to realized risks.
Thumbnail Image

奇客Solidot | 国家互联网应急中心通知:限制银行和国企安装 OpenClaw

2026-03-12
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
OpenClaw is described as an AI-related application with autonomous task execution capabilities and high system privileges, indicating it involves an AI system. The security vulnerabilities and potential for attackers to gain full control represent a plausible risk of harm to critical infrastructure and organizational operations if exploited. However, the article does not report any actual harm or incident occurring yet, only a risk warning and preventive measures. Therefore, this qualifies as an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident if exploited, but no harm has materialized so far.
Thumbnail Image

多端发力落地"小龙虾",联想百应 ...

2026-03-13
光明网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose deployment and use have associated security risks that could lead to harm such as privacy breaches and system compromise. Although no specific harm has been reported as having occurred, the article explicitly mentions significant security vulnerabilities and risks that could plausibly lead to incidents involving data loss or unauthorized control. The article also details mitigation efforts to reduce these risks. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use and deployment could plausibly lead to harm, but no actual harm is reported yet.
Thumbnail Image

锚定AI下一时代风口,微美全息(WIMI.US)蓄势待发领航布局芯片+AI智能体集群

2026-03-12
中关村在线
Why's our monitor labelling this an incident or hazard?
The article primarily provides an overview of AI technology trends, company strategies, and expert opinions on the future of AI agents and chips. It mentions the popularity and capabilities of AI tools but does not describe any event where the AI system's use or malfunction has led to harm or violation of rights. There is no indication of realized or imminent harm, nor warnings of credible risks materializing imminently. Therefore, the article fits the category of Complementary Information as it provides context and updates on AI developments and industry positioning without reporting an AI Incident or AI Hazard.
Thumbnail Image

计算机行业:OpenClaw:开启行动式AI时代

2026-03-12
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI Agent) and discusses its use, ecosystem growth, and associated security challenges. While it mentions risks like malware and data leakage, these are presented as potential or emerging risks rather than actual incidents causing harm. The mention of regulatory risk warnings and community governance efforts further supports that no direct harm has yet occurred. Therefore, this qualifies as an AI Hazard due to plausible future harm from security vulnerabilities and misuse, but not an AI Incident or Complementary Information since no harm has materialized and the article is not primarily about responses to past incidents.
Thumbnail Image

港股"龙虾"概念持续走弱 智...

2026-03-12
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and a risk advisory about its potential to cause serious harm, including data breaches and operational paralysis in critical infrastructure sectors. Since the harm is potential and not yet realized, this constitutes an AI Hazard rather than an AI Incident. The warning about plausible future harm aligns with the definition of an AI Hazard.
Thumbnail Image

人工智能产业发展联盟:持续跟踪OpenClaw及同类AI智能体的安全风险动态

2026-03-12
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the security risks and vulnerabilities of the OpenClaw AI system and similar AI agents, indicating plausible future harms such as data breaches, unauthorized control, ransomware attacks, and compliance violations. It does not describe any actual harm or incident that has occurred but warns about potential risks and provides detailed mitigation guidance. This fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to incidents if not properly managed. Additionally, the article serves as complementary information by providing ongoing risk tracking and safety recommendations to the AI ecosystem. However, since the main focus is on risk warnings and preventive guidance without reporting realized harm, the classification as AI Hazard is most appropriate.
Thumbnail Image

事关OpenClaw,国家工业信息安全发展研究中心紧急提示风险

2026-03-12
东方财富网
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with autonomous capabilities used in industrial control environments. The article does not report that any harm has yet occurred but outlines credible and significant risks that could plausibly lead to AI incidents involving harm to industrial operations, sensitive data, and safety. Therefore, this event qualifies as an AI Hazard because it concerns plausible future harms stemming from the development and use of an AI system in critical industrial infrastructure. The article focuses on risk warnings and mitigation advice rather than reporting realized harm, so it is not an AI Incident. It is more than complementary information because it centers on the potential for harm rather than updates or responses to past incidents.
Thumbnail Image

国家工业信息安全发展研究中心发布工业领域OpenClaw应用风险预警通报

2026-03-12
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) used in industrial control contexts. It details how the AI's use and potential misuse or malfunction could plausibly lead to significant harms including industrial system failures, sensitive data leaks, and expanded cyberattack risks. Since no actual harm or incident is reported, but credible risks are clearly outlined, this qualifies as an AI Hazard. The detailed risk analysis and mitigation suggestions further support that the event is a hazard warning rather than a report of an incident or complementary information.
Thumbnail Image

智能体"龙虾"陷安全争议 多所高校紧急叫停、严禁使用

2026-03-12
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) whose use is associated with significant security risks such as network attacks and information leakage. The involvement of regulatory bodies and universities imposing restrictions indicates recognition of plausible harm. Since no actual harm is reported but credible risks are identified and preventive actions are underway, this event fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident involving harm to property, information security, or communities. The focus is on potential harm and risk mitigation rather than a realized incident.
Thumbnail Image

多所高校:严禁安装,"龙虾"概念股暴跌

2026-03-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) with autonomous capabilities and plugin extensions, which has known high-risk security vulnerabilities. The involvement is in the use and potential malfunction of the AI system. While no actual harm incident is described, the detailed warnings, bans by universities, and risk of serious security breaches (including data leaks and system control loss) demonstrate a credible risk of harm. This fits the definition of an AI Hazard, as the vulnerabilities could plausibly lead to an AI Incident if exploited. The article focuses on risk warnings and mitigation advice rather than reporting a realized harm event, so it is not an AI Incident. It is also not merely complementary information because the main focus is on the security risks and prohibitions due to the AI system's vulnerabilities, not on responses to past incidents or general ecosystem updates.
Thumbnail Image

武汉多所高校紧急通知:警惕"龙虾"!

2026-03-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an open-source AI agent) whose use or misconfiguration has directly led to security risks such as network attacks and information leakage, which constitute harm to property and communities (data privacy and network security). The universities' warnings and restrictions indicate that harm has either occurred or is imminent due to the AI system's deployment. Therefore, this event qualifies as an AI Incident because the AI system's use or malfunction has directly or indirectly led to harm (security breaches and data leaks).
Thumbnail Image

中美AI智能体之争:美国巨头难获用户,中国

2026-03-13
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system OpenClaw and its extensive use in China. It reports actual harms experienced by users, including deleted emails and unauthorized purchases, which are direct consequences of security vulnerabilities in the AI system. The harms fall under harm to property and potentially harm to individuals' data security. The AI system's deployment and use have directly led to these harms, fulfilling the criteria for an AI Incident. Although the article also discusses broader market and competitive dynamics, the presence of realized harm linked to the AI system's malfunction or misuse takes priority in classification.
Thumbnail Image

武汉多所高校紧急通知:警惕"龙虾"!

2026-03-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an AI agent) whose deployment and use have led to direct risks of harm, including potential information leakage of sensitive data and network security breaches. The universities' warnings and restrictions indicate that the AI system's use has already caused or is causing significant security concerns, which qualify as harm to property, communities, or information security. The article reports realized risks and institutional responses to prevent further harm, indicating an ongoing AI Incident rather than a mere hazard or complementary information. Therefore, this event is best classified as an AI Incident.
Thumbnail Image

一夜爆火又有人花钱卸载!"养龙虾"别上头,这些风险别忽视→丨安全贴心话

2026-03-12
南方网
Why's our monitor labelling this an incident or hazard?
The AI system OpenClaw is explicitly mentioned with autonomous and high-privilege operation, indicating AI system involvement. The warnings from official bodies about risks such as data leakage and device control imply plausible future harm. No actual harm or incident is described as having occurred yet, only potential risks and advisories. Therefore, this event fits the definition of an AI Hazard, as the AI system's use or malfunction could plausibly lead to harm, but no direct or indirect harm has been reported so far.
Thumbnail Image

"养虾"有风险 不如试试"智慧虾栏" -- -- 关于OpenClaw类应用的安全提醒

2026-03-13
南方网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw intelligent agent) and discusses its use and associated security risks that could plausibly lead to harms such as data breaches, system disruption, and unauthorized operations. However, no actual harm or incident is reported; the harms are potential and the article mainly provides warnings and recommendations to prevent such harms. Therefore, this qualifies as an AI Hazard, as it describes circumstances where the AI system's use could plausibly lead to incidents involving harm to data security and system integrity.
Thumbnail Image

严禁校内使用!多家高校发文防范"养龙虾"风险

2026-03-12
南方网
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies OpenClaw as an open-source AI system with default or improper configurations that have already caused or could cause network attacks and information leaks. Multiple universities have responded by banning or restricting its use to prevent these harms. The harms described (security breaches, data theft, system attacks) fall under harm to property, communities, or environment (d) and violations of rights (c) due to data privacy and security concerns. Since the harms are occurring or have occurred and the AI system's use is the direct cause, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

国家工业信息安全发展研究中心发布关于工业领域OpenClaw应用的风险预警通报

2026-03-12
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The OpenClaw system is explicitly described as an AI system with autonomous capabilities used in industrial control. The warning details actual and plausible harms resulting from its malfunction or misuse, such as loss of control over industrial processes, sensitive data leaks, and expanded cyberattack risks. These harms correspond to harm to property, disruption of critical infrastructure, and potential safety incidents. The event is not merely a general update or advisory but a risk warning about ongoing and potential incidents caused by the AI system. Therefore, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

某头部险企已开放超2000员工使用龙虾 AI数字员工上岗

2026-03-13
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (OpenClaw) in a real business context, indicating AI system involvement. However, there is no report of any harm, malfunction, or violation caused by the AI system. The mention of a risk warning is a general advisory and does not describe an event where harm occurred or is imminent. The article focuses on the deployment, capabilities, and potential productivity gains of the AI system, as well as the industry's awareness of security risks. This fits the definition of Complementary Information, which includes updates on AI system deployment and governance considerations without new incidents or hazards.
Thumbnail Image

官方再次提示"养龙虾"风险 警惕安全隐患

2026-03-12
中华网科技公司
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an open-source AI agent) whose improper use and vulnerabilities have already caused realized harms such as network attacks, data leakage, and system compromise. The article details specific incidents of harm caused by the AI system's malfunction or misuse, including malicious plugins and exploitation of vulnerabilities leading to serious security risks. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm (security breaches and privacy violations). The article also includes recommendations for mitigation, but the primary focus is on the realized harms and risks already materialized, not just potential future harm or general information.
Thumbnail Image

龙虾接管5分钟 电脑被陌生人连139次 安全隐患频现

2026-03-12
中华网科技公司
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of autonomous task execution on computers, including file operations. The security audit and government warnings indicate that the AI's use and potential malfunction have directly or indirectly led to realized or plausible harms such as data loss and unauthorized access, which constitute harm to property and user security. Therefore, this event qualifies as an AI Incident due to the realized security harms and risks associated with the AI system's deployment and use.
Thumbnail Image

AI养"龙虾"有风险,工信部紧急预警

2026-03-12
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, OpenClaw, which is used for complex tasks involving natural language control and system integration. The reported security vulnerabilities and incidents (e.g., unauthorized file deletion, potential system control by attackers) constitute direct harms to property, data, and potentially individuals' privacy and security. The Ministry's urgent warnings and detailed mitigation guidelines confirm that these harms are materialized and significant. Therefore, the event meets the criteria for an AI Incident, as the AI system's use and malfunction have directly led to harms including data breaches and system compromises. The article also includes complementary information about governance responses, but the primary focus is on the realized harms and risks, making AI Incident the appropriate classification.
Thumbnail Image

日前,多家信托公司收到监管关于AI智能体潜在风险的提示,即转发工信部发布的防范OpenClaw("小龙虾",曾用名Clawdbot、Moltbot)开源智能体安全风险的相关内容,并提示辖内信托公司注意相关安全风险。有业内人士透露,收到提示后公司正在抓紧排查。3月11日晚间,工信部网络安全威胁和漏洞信息共享平台发布关于防范OpenClaw开源智能体安全风险的"六要六不要"建议,明确了四大典型应用场景安全风险:即智能办公场景主要存在供应链攻击和企业内网渗透的突出风险,开发运维场景主要存在系统设备敏感信息泄露和被劫持控制的突出风险,个人助手场景主要存在个人信息被窃和敏感信息泄露的突出风险,金融交易场景主要存在引发错误交易甚至账户被接管的突出风险。目前多家信托已启动全面自查,严控AI工具使用边界,以筑牢金融数据安全防线。

2026-03-13
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw AI agent) and discusses its potential security risks across multiple application scenarios, including financial transactions that could lead to account takeover or erroneous trades. However, the article only reports a regulatory advisory and ongoing preventive measures without any realized harm or incident. Therefore, this qualifies as an AI Hazard because the AI system's use or malfunction could plausibly lead to significant harms, but no direct or indirect harm has yet occurred.
Thumbnail Image

智通财经APP获悉,3月12日,香港网络安全事故协调中心发出警告,指出开源AI代理平台OpenClaw近期迅速崛起,随着普及程度不断提升,相关网络安全风险亦日渐浮现。香港网络安全事故协调中心强调,AI代理平台如具备本机操作、第三方功能插件安装及外部服务整合能力,其风险面已远超一般聊天式AI工具,机构和用户在引入相关工具时必须提高警觉。据香港网络安全事故协调中心引述的报告显示,已有恶意攻击者利用伪造的GitHub程式码库及Bing AI搜寻结果,向搜寻OpenClaw Windows安装程式的用户散播能够窃取资讯的恶意软件与代理型恶意软件。中心建议用户应优先透过官方网站、官方文件及官方储存库所提供的方式进行下载与安装,切勿使用来历不明的链接。香港网络安全事故协调中心指出,OpenClaw曾被发现存在高风险漏洞,恶意网站能借此挟持开发者的OpenClaw代理程式。所幸该漏洞已于2026年2月26日获修复,但此事件被视为一个重要警示,说明了若缺乏充分的安全监管与控制措施,部署AI代理工具的组织可能会面临更大的风险暴露。除了平台本身的漏洞,OpenClaw的技能生态系统亦浮现新的攻击破口。其官方文件显示,OpenClaw设有开源技能注册库ClawHub,允许用户发布"skills"以扩充平台功能,并可在此搜寻、安装、更新及发布技能。技能通常由SKILL.md说明档及相关辅助档案组成。香港网络安全事故协调中心警告,这种开放式扩充模式虽加速功能成长,却也引进第三方元件的供应链风险,可能成为攻击者的入侵途径。香港网络安全事故协调中心提出几项建议,包括核实下载来源与安装指引、尽快更新OpenClaw版本、审慎安装第三方"技能"脚本、警惕代理要求执行高风险操作、把OpenClaw视为高权限自动化平台管理。

2026-03-12
证券之星
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (OpenClaw, an AI agent platform with autonomous capabilities and third-party plugin integration). The event stems from the use and development of this AI system, specifically its vulnerabilities and exploitation attempts. While malicious actors have used fake repositories and search results to spread malware targeting users seeking OpenClaw, the article does not report realized harm such as successful data breaches or direct damage. Instead, it focuses on the potential risks and the need for vigilance and security measures. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the vulnerabilities are exploited successfully or if users are compromised.
Thumbnail Image

国家工业信息安全发展研究中心:发布工业领域OpenClaw应用的风险预警通报

2026-03-12
m.21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions OpenClaw, an AI system deployed in industrial design, manufacturing, and operations. It identifies potential security vulnerabilities that could lead to serious harms like system loss of control and data breaches. Since no incident has occurred yet but the risks are credible and could plausibly lead to harm, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

[详细]

2026-03-12
大洋网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) whose use has directly caused multiple harms: privacy violations, data loss, and financial damage. The harms are realized and documented, with institutional responses including bans and security warnings. The AI system's malfunction or misuse is central to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The presence of direct harm to individuals and institutions from the AI system's operation justifies this classification.
Thumbnail Image

[详细]

2026-03-12
大洋网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system named OpenClaw used in industrial control and management. It details how the system's characteristics and vulnerabilities could plausibly lead to significant harms including system loss of control, sensitive data leaks, and expanded cyberattack risks. However, no actual harm or incident has been reported so far; the document is a risk warning and guidance for mitigation. Therefore, the event fits the definition of an AI Hazard, as it describes circumstances where the AI system's use could plausibly lead to an AI Incident but no harm has yet occurred.
Thumbnail Image

新浪AI热点小时报丨2026年03月12日19时_今日实时AI热点速递

2026-03-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The articles describe an AI system (OpenClaw) with advanced autonomous capabilities and significant security vulnerabilities that could plausibly lead to harms such as data breaches or unauthorized system control. The presence of warnings from security experts and government advisories indicates recognition of these risks. However, the absence of reports on actual harm or incidents caused by these vulnerabilities means the event fits the definition of an AI Hazard rather than an AI Incident. Other news items in the report are either general AI ecosystem updates or unrelated to AI harm.
Thumbnail Image

中国人工智能产业发展联盟:持续跟踪OpenClaw安全风险动态,编制企业级OpenClaw部署风险管理指南

2026-03-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the security risks associated with the development and use of the OpenClaw AI system and the plausible harms that could arise from these risks, such as data breaches, system takeovers, and compliance violations. However, it does not describe any actual harm or incident that has occurred due to OpenClaw AI systems. Instead, it serves as a risk warning and a set of preventive guidelines to manage and mitigate these risks. Therefore, the event qualifies as an AI Hazard because it concerns plausible future harms stemming from the AI system's use and deployment, but no direct or indirect harm has yet materialized. It is not Complementary Information because it is not an update or response to a past incident but a proactive risk advisory. It is not Unrelated because it clearly involves AI systems and their security risks.
Thumbnail Image

武汉多所高校紧急提醒慎重AI龙虾

2026-03-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) whose default or improper configuration poses high security risks that could lead to network attacks and information leakage, threatening sensitive information and overall campus network security. The universities' urgent warnings and preventive measures indicate recognition of plausible future harm. Since no realized harm is reported but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

全网首份「龙虾」安全部署指南来了!360出品

2026-03-12
新浪财经
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (OpenClaw) with autonomous capabilities and discusses its use and associated security risks. However, it does not describe any realized harm or incident resulting from the AI system's malfunction or misuse. Instead, it provides a security guideline and risk warnings to prevent possible future harms. Therefore, the event qualifies as an AI Hazard because it plausibly could lead to AI incidents such as data breaches or system control loss if not properly managed, but no actual harm has yet occurred. It is not Complementary Information because the focus is not on updates or responses to a past incident but on the identification and mitigation of potential risks. It is not Unrelated because it clearly concerns AI systems and their security implications.
Thumbnail Image

AI养"龙虾"有风险,工信部紧急预警

2026-03-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, OpenClaw, which is used for various intelligent assistant tasks. The reported security vulnerabilities and misuse cases have directly or indirectly led to harms such as data leakage, unauthorized access, and potential financial losses. The involvement of official government warnings and detailed risk mitigation guidelines confirms the recognition of these harms as significant and materialized. The harms fall under categories of harm to property, communities, and violations of security and privacy rights. Hence, this is an AI Incident rather than a hazard or complementary information, as the harms are already realized or ongoing, not merely potential.
Thumbnail Image

香港网络安全事故协调中心:AI代理平台OpenClaw崛起 安全风险不容忽视

2026-03-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI proxy platform, thus an AI system. The event reports actual exploitation of vulnerabilities and malicious software distribution targeting users of this AI system, which constitutes realized harm (security breaches, data theft). The presence of a high-risk vulnerability that was exploited and later patched confirms a malfunction or security failure in the AI system. The supply chain risks from third-party skills also represent a direct risk of harm. Therefore, this event qualifies as an AI Incident because the AI system's use and vulnerabilities have directly or indirectly led to harms related to cybersecurity breaches and risks to users and organizations.
Thumbnail Image

"养虾"破财警示:银行风控面临AI新考验

2026-03-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose misuse has directly caused harm to individuals by enabling credit card theft and fraudulent transactions, which is a form of harm to property and individuals' financial security. The article details realized harm (credit card theft and financial loss) linked to the AI system's use and misuse, and banks are responding to this incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm has already occurred and is directly connected to the AI system's operation and misuse.
Thumbnail Image

上千"龙虾"用户正被暴露在公网!

2026-03-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the OpenClaw system, which is an AI-related application, and details how its deployment and use have led to the exposure of sensitive user data and potential remote control attacks. This constitutes a direct harm to users' security and privacy, fitting the definition of an AI Incident. The event is not merely a potential risk but an ongoing exposure affecting thousands of users, thus qualifying as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

高校集体官宣:严禁安装OpenClaw!_手机网易网

2026-03-12
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies OpenClaw as an AI system and reports that users have suffered losses after downloading it, indicating realized harm. The Industrial and Information Technology Ministry's cybersecurity platform has issued warnings about the system's security risks, including network attacks and information leakage, which are harms to property, communities, and data security. Multiple universities have responded by banning the installation of OpenClaw on their devices to prevent further harm. This constitutes an AI Incident because the AI system's use has directly led to harm, and the event focuses on these harms and institutional responses to them.
Thumbnail Image

财经风云--超20万OpenClaw实例暴露公网,341个恶意插件暗藏隐患,多校严令禁用

2026-03-12
dapenti.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous capabilities and high system privileges. The article reports actual harms occurring due to its misuse or misconfiguration, such as data leakage, unauthorized access, and malware deployment via malicious plugins. The exposure of instances with weak security and the presence of malicious plugins have already caused or could cause significant harm to users' data and privacy. The involvement of the AI system in these harms is direct and central. The institutional responses and warnings further confirm the recognition of these harms. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

国家工业信息安全发展研究中心发布关于工业领域OpenClaw应用的风险预警通报 - 欧洲头条

2026-03-12
xinouzhou.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (OpenClaw) used in industrial control environments and outlines multiple plausible risks stemming from its use, such as system loss of control, data leaks, and exploitation by attackers. These risks could plausibly lead to harms like production disruption and safety accidents, which fit the definition of AI Hazards. Since no actual harm or incident is reported, and the focus is on potential risks and preventive measures, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

智能体"龙虾"陷安全争议 多所高校紧急叫停、严禁使用 - cnBeta.COM 移动版

2026-03-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system OpenClaw and its associated security risks, leading multiple universities to ban or restrict its use to prevent harm. While no actual harm is reported, the official recognition of security vulnerabilities and the urgent institutional actions demonstrate a credible risk of harm if the AI system continues to be used. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving security breaches or data compromise. The event is not an AI Incident because harm has not yet materialized, nor is it Complementary Information or Unrelated, as the focus is on the risk and preventive responses to the AI system's security issues.
Thumbnail Image

评论 8

2026-03-13
guancha.cn
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous task execution and plugin-based AI agents. The reported vulnerabilities and security flaws have directly led to harms such as server compromise and sensitive data exposure, which qualify as harm to property and potentially harm to individuals or organizations. The AI agents' uncontrollable behavior causing unauthorized actions further supports this. Therefore, this event constitutes an AI Incident due to the direct and realized harms caused by the AI system's malfunction and security weaknesses.
Thumbnail Image

"养龙虾"存安全风险,多所高校禁止在办公电脑上安装

2026-03-13
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose development and use have led to realized security risks including potential information leakage and system control breaches, which constitute harm to data security and privacy. The article describes actual security incidents or vulnerabilities that have been detected and are causing harm or risk to institutional data and network security. The involvement of the AI system is explicit, and the harms are direct and significant, including risks of unauthorized access and data breaches. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and configuration issues.
Thumbnail Image

国家网络安全通报中心发布OpenClaw安全风险预警

2026-03-13
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous task execution and plugin-based behavior modification. The report highlights multiple security flaws and vulnerabilities that could be exploited to cause harm, including server takeover, data leakage, and unauthorized AI agent actions. Since these risks have not yet materialized into actual harm but could plausibly lead to serious incidents, this event fits the definition of an AI Hazard. The warning aims to alert stakeholders to potential future harms stemming from the AI system's use and vulnerabilities.
Thumbnail Image

国家网络与信息安全信息通报中心发布OpenClaw安全风险预警

2026-03-13
21jingji.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous task execution and AI agent behavior modification capabilities. The report explicitly states multiple security flaws and vulnerabilities that have been exploited or are easily exploitable, leading to harms such as server control, data breaches, and unauthorized device control. These harms fall under harm to property and economic harm categories. The involvement of AI in the system's behavior and plugin ecosystem is central to the risks described. The warning is about existing and ongoing risks, not just potential future risks, so it is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

香港数字办关注OpenClaw安全风险,提醒各部门勿在内网电脑安装

2026-03-14
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article discusses the identification of potential security risks related to the use of an AI system (OpenClaw) and the issuance of warnings and guidelines to prevent harm such as data leakage or system intrusion. However, no actual harm or incident has occurred yet; the focus is on risk awareness, preventive measures, and governance frameworks. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to security incidents if not properly managed, but no direct harm has been reported. It is not Complementary Information because the main narrative is about the potential risks and preventive advisories rather than updates on past incidents or governance responses to realized harms.
Thumbnail Image

"养龙虾"爆火:智能体概念走向大众 科技巨头抢滩布局

2026-03-14
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article describes the rise and adoption of AI agents and the associated concerns but does not report any realized harm, incident, or direct or indirect harm caused by these AI systems. It also does not describe any specific event where these AI agents caused injury, rights violations, or other harms. Instead, it provides context on the technology's development, industry responses, and potential risks, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Thumbnail Image

谁在忙着"养龙虾"?OpenClaw爆火后,关注三大投资风险

2026-03-13
21jingji.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that enables autonomous execution of tasks on user devices, involving significant AI capabilities. The article outlines multiple risks including security vulnerabilities (e.g., full disk access leading to data leaks), legal vacuum regarding responsibility for AI actions, and market risks like valuation bubbles. These factors indicate plausible future harms related to AI misuse or malfunction. However, the article does not describe any realized harm or incident caused by OpenClaw. Hence, the event fits the definition of an AI Hazard, as it plausibly could lead to AI incidents but no direct or indirect harm has yet occurred.
Thumbnail Image

易妍君 谁在忙着"养龙虾"?OpenClaw爆火后,关注三大投资风险

2026-03-13
21jingji.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that enables autonomous execution of tasks on user devices, involving AI development and use. The article does not report any direct or indirect harm caused by OpenClaw but discusses credible security risks (e.g., data leakage), legal vacuums, and market risks that could plausibly lead to AI incidents in the future. Hence, it fits the definition of an AI Hazard, as it outlines circumstances where the AI system's use or malfunction could plausibly lead to harm, but no actual harm has yet occurred or been reported.
Thumbnail Image

18:46 国家网络安全通报中心发布OpenClaw安全风险预警

2026-03-13
每日经济新闻
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with automated task processing and AI agent behavior modification capabilities. The report highlights multiple security flaws and vulnerabilities that could be exploited to cause serious harm, including unauthorized control of servers and AI agents performing malicious actions. While no actual harm is reported, the detailed enumeration of risks and vulnerabilities demonstrates a credible potential for AI-related incidents. Hence, this is an AI Hazard, as the development and use of OpenClaw could plausibly lead to AI Incidents involving harm to property, data, and economic interests.
Thumbnail Image

近28万"龙虾"公网裸奔,首批"养虾人"紧急逃离,有人花几百元找人上门卸载!马斯克曾嘲讽:就像把步枪交给了猴子 2026-03-13 18:42

2026-03-13
每日经济新闻
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of autonomous task execution using large language models. The article documents multiple direct harms caused by its malfunction or misuse: deletion of important emails without consent, financial losses from stolen API keys, and widespread security vulnerabilities exposing user data. The exposure of nearly 280,000 instances on the public internet and the presence of malicious plugins further demonstrate realized harm to users and enterprises. The regulatory warnings and user responses (paying for uninstallation) confirm the severity of these harms. Therefore, this event meets the criteria for an AI Incident as the AI system's use and malfunction have directly led to significant harms including data loss, financial damage, and security breaches.
Thumbnail Image

国家网络安全通报中心发布OpenClaw安全风险预警

2026-03-13
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the presence of an AI system (OpenClaw intelligent agents) whose behavior is uncontrollable and prone to privilege escalation, leading to risks like data theft and device hijacking. Although no actual harm is reported yet, the described vulnerabilities and potential for significant economic damage indicate a credible risk of harm resulting from the AI system's malfunction or misuse. Therefore, this qualifies as an AI Hazard, as the AI system's involvement could plausibly lead to an AI Incident involving economic harm and security breaches.
Thumbnail Image

OpenClaw爆火,App时代要结束了?-钛媒体官方网站

2026-03-14
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (OpenClaw, AI agents, GPT-5.4) and their capabilities, indicating AI system involvement. However, it does not describe any realized harm or direct/indirect causation of harm from these systems. It discusses potential risks like privacy and security issues and the challenges of data sourcing, but these are presented as concerns or future challenges rather than actual incidents or hazards. The main focus is on the evolution of AI technology, industry strategies, and the broader AI ecosystem, which aligns with the definition of Complementary Information. There is no specific event of harm or plausible immediate harm detailed that would qualify as an AI Incident or AI Hazard.
Thumbnail Image

AI技术快速迭代守住安全底线方能行稳致远

2026-03-13
东方财富网
Why's our monitor labelling this an incident or hazard?
The article centers on the publication of safety guidelines aimed at preventing potential security risks from an AI system (OpenClaw). It does not report any realized harm or incident caused by the AI system, nor does it describe a specific event where harm occurred. Instead, it provides a governance and regulatory response to potential AI security risks, emphasizing prevention and safe use. Therefore, it constitutes Complementary Information as it supports understanding of AI safety and governance without describing a new AI Incident or AI Hazard.
Thumbnail Image

监管风险提示后 "小龙虾"相...

2026-03-14
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, OpenClaw, which is an AI intelligent agent based on large language models. The National Internet Emergency Center's safety risk warning indicates that the AI system's use could plausibly lead to harms such as data leakage or security breaches. However, the article does not describe any actual incidents of harm or violations caused by OpenClaw. The focus is on the potential risks and the regulatory and institutional responses to these risks. Hence, the event fits the definition of an AI Hazard, as it concerns plausible future harm stemming from the AI system's use and associated security vulnerabilities, but no direct or indirect harm has yet materialized.
Thumbnail Image

官方通报养龙虾安全风险预警:提出5条风险防范建议

2026-03-13
驱动之家
Why's our monitor labelling this an incident or hazard?
The OpenClaw system is an AI system with multiple security vulnerabilities that, if exploited, have directly or indirectly led or could lead to harms including data breaches, unauthorized control of devices, and economic damage. The warning highlights realized risks and the potential for serious harm due to the AI system's malfunction or misuse. Therefore, this event qualifies as an AI Incident because the AI system's vulnerabilities have already caused or pose a direct threat of harm to users and their property, and the report is a formal notification of these harms and risks.
Thumbnail Image

国家网络安全通报中心发布OpenClaw安全风险预警

2026-03-13
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) with autonomous capabilities and plugin-based AI agents. It highlights multiple security vulnerabilities and risks that could plausibly lead to harms such as data breaches, device control, and economic damage. Since no actual harm is reported as having occurred yet, but the risks are credible and significant, this event qualifies as an AI Hazard. The article serves as a warning about plausible future harms stemming from the use or misuse of the AI system, rather than reporting an incident where harm has already materialized.
Thumbnail Image

"龙虾"凉了?安全风险引发关注

2026-03-14
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (OpenClaw and related AI-powered products) whose development and use have led to recognized security vulnerabilities posing risks to users and organizations. The multiple official warnings and institutional bans indicate that these vulnerabilities are significant and have already caused concern, implying potential or realized harm related to cybersecurity and possibly data integrity or privacy. However, the article does not report any actual realized harm such as data breaches or direct attacks resulting from these vulnerabilities, only warnings and preventive actions. Therefore, the event is best classified as an AI Hazard, as the AI systems' vulnerabilities could plausibly lead to incidents causing harm if exploited, but no confirmed incident has yet occurred.
Thumbnail Image

企业级场景硬刚OpenClaw,国产智能体bit-Agent小青龙实现全方位碾压!实力远超小龙虾数倍!

2026-03-13
hea.china.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (bit-Agent and OpenClaw) and their use in enterprise scenarios. It details how OpenClaw's open-source plugin ecosystem has led to thousands of incidents of malicious plugin attacks causing data theft and security breaches, which are harms to property and violations of rights. The involvement of AI systems in these harms is direct, as the malicious plugins are part of the AI system's ecosystem and their use leads to security incidents. The article also references official government warnings, confirming the recognized harm. Thus, this is a clear AI Incident due to realized harm caused by the AI system's use and vulnerabilities.
Thumbnail Image

国家网络安全通报中心发布OpenClaw安全风险预警

2026-03-13
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous task execution and plugin-based AI agents. The report highlights multiple security flaws and vulnerabilities that could be exploited to cause harm such as server control, data leakage, and unauthorized AI behavior causing damage. No actual harm is reported yet, but the detailed risks and vulnerabilities indicate a credible potential for future incidents. The event is a warning about plausible future harm from the AI system's use and vulnerabilities, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.
Thumbnail Image

OpenClaw 能用不敢用?企业版Claw让AI真正进入企业业务流_天极网

2026-03-13
天极网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) that has caused real harm, including uncontrolled deletion of emails and data breaches exposing sensitive information, which are direct harms to property and privacy. These incidents qualify as AI Incidents because the AI system's malfunction and vulnerabilities directly led to harm. The discussion of the enterprise version of Claw as a safer, controlled alternative is complementary information about governance and mitigation but does not negate the presence of actual incidents. Therefore, the primary classification is AI Incident.
Thumbnail Image

OpenClaw代表了AI发展的重要方向 -- -- 从"对话型AI"向"行动型AI"转变,然而,这类技术仍处于早期发展阶段,其安全性、稳定性以及监管框架仍有待进一步完善。

2026-03-13
证券之星
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by OpenClaw or similar AI systems. Instead, it outlines potential risks and challenges associated with the technology's current state and future development. Since no direct or indirect harm has occurred, but plausible future risks are acknowledged, this event fits the definition of an AI Hazard. It highlights credible concerns about security, privacy, and stability that could plausibly lead to harm if not addressed, but no actual AI Incident is reported. Therefore, the classification is AI Hazard.
Thumbnail Image

国家网络安全通报中心发布OpenClaw安全风险预警

2026-03-13
金羊网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (OpenClaw) with multiple security vulnerabilities and risks that could be exploited to cause serious harm, including server control, data breaches, and unauthorized AI agent actions. While no actual incident of harm is reported, the detailed enumeration of vulnerabilities and the warning from a national cybersecurity authority establish a credible risk of future harm. This fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to incidents causing harm. The article is not merely general AI news or a product announcement, nor is it a report of a realized harm (incident) or a complementary update on a past incident. Hence, AI Hazard is the appropriate classification.
Thumbnail Image

全民"养虾"调查:大厂7.9元"包吃住",为何有人1天花费超2000元?起底OpenClaw背后的算力"刺客"_手机网易网

2026-03-14
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) explicitly described as an autonomous AI agent performing tasks and consuming tokens for operation. The article documents direct harms caused by the AI system's use and malfunction, including financial losses from unexpectedly high costs, security vulnerabilities leading to privacy breaches and data loss, and risks of AI acting against its users. These harms fall under injury to persons (financial harm), violations of rights (privacy and data protection), and harm to property (data and digital assets). The presence of over 280,000 exposed instances and multiple high-risk vulnerabilities confirms the AI system's role in causing significant harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

官方通报养龙虾安全风险预警:提出5条风险防范建议 - cnBeta.COM 移动版

2026-03-13
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with multi-layer architecture and AI agents capable of executing commands. The reported vulnerabilities and malicious plugins have directly led or could lead to harms such as unauthorized control of servers, data breaches, and economic losses due to AI agents performing unauthorized actions. These constitute violations of property and economic harm, fitting the definition of an AI Incident. The article describes realized or ongoing harms and risks that have materialized, not just potential future risks, thus qualifying as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

养"虾"有风险!多所高校发布安全提示

2026-03-13
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose development and use have been found to have significant security vulnerabilities that could lead to harms such as information leakage and system control by unauthorized parties. Although no specific harm has been reported as having occurred yet, the article clearly states that these vulnerabilities pose a high security risk that could plausibly lead to incidents like network attacks and data breaches. Therefore, this situation fits the definition of an AI Hazard, as it describes credible potential harm stemming from the AI system's use and configuration issues. The article mainly focuses on warnings and preventive measures rather than reporting an actual incident of harm, so it is not an AI Incident. It is more than complementary information because it highlights a credible risk rather than just updates or responses to past events.
Thumbnail Image

AI智能体博弈:美国巨头遇冷 中国"龙虾热"构建新壁垒 - cnBeta.COM 移动版

2026-03-13
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) explicitly mentioned and widely deployed. The article reports actual harms caused by the AI system's security flaws, including unauthorized deletion of important emails and unapproved financial transactions, which are direct harms to users' property and privacy. The involvement of the AI system in these harms is clear and direct, stemming from its use and security vulnerabilities. The article also discusses regulatory warnings and user complaints, confirming the realized harm. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

大厂"龙虾"vs开源"澳龙",2026 claw横评-钛媒体官方网站

2026-03-14
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit, as the article focuses on AI-powered 'claw' products with execution privileges. The harms described include system control by hackers, data leaks, financial account takeovers, and social fraud, all of which are direct harms to individuals and organizations. The National Internet Emergency Center's warning and the Ministry of Industry and Information Technology's advisories confirm the severity and reality of these harms. The article also provides examples of actual incidents (e.g., social fraud cases), confirming that harm has occurred rather than being a mere potential risk. Hence, the event meets the criteria for an AI Incident.
Thumbnail Image

养龙虾,政府不能"带队冲锋"

2026-03-14
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and its associated risks, including security vulnerabilities. However, it does not describe any realized harm or incidents caused by the AI system. Instead, it focuses on the potential risks and the premature government policy responses that could lead to future problems. Therefore, the event fits the definition of an AI Hazard, as the AI system's use and development could plausibly lead to harm, but no harm has yet occurred.
Thumbnail Image

经济观察报记者 邹永勤 继国家计算机网络应急技术处理协调中心于3月10日发布《关于OpenClaw安全应用的风险提示》之后,3月13日,国家网络与信息安全信息通报中心亦发布了《OpenClaw安全风险预警》的公告。公告指出,OpenClaw自发布以来,凭借其强大的自动化任务处理能力与开放式插件生态,在全球范围内引发部署热潮。目前,全球活跃的OpenClaw互联网资产已超20万个,其中境内活跃的OpenClaw互联网资产约2.3万个,呈现爆发式增长态势;但与此同时,大量暴露于互联网的OpenClaw资产存在重大安全风险,极易成为网络攻击的重点目标。公开信息显示,OpenClaw是由奥地利工程师彼得・斯坦伯格(Peter Steinberger)研发的一款开源AI智能体,因其图标是红色"龙虾"、英文名Claw意为钳子,因此该智能体也被网友戏称为"龙虾"。初期仅在硅谷开发者圈传播的OpenClaw,于2026年1月在海外开始爆火,紧接着亦迅速在国内走红,从而成为2026年第一款现象级AI产品。但与此同时,市场上对其存在安全漏洞的质疑之声亦此起彼伏。那么,OpenClaw拥有哪些优点而备受网民追捧?在使用过程中存在哪些安全隐患?面对这些安全隐患,又该如何预防?3月13日,经济观察报记者就上述问题对360集团创始人周鸿进行了专访。此前的3月11日,360集团发布了国内首份《OpenClaw安全部署与实践指南》,为政企机构和个人开发者提供系统化的安全参考。 为何是OpenClaw?经济观察报:OpenClaw作为一款开源AI智能体,它的初期发展还是颇为曲折的,曾三次被迫改名,市场上亦对其潜在的安全漏洞问题存在质疑之声,但它还是迅速由一个极客圈子里的小众开源项目成长为2026年第一款现象级AI产品。对此,你怎么看?相较于此前那些普通的智能体,OpenClaw的优势在哪里?周鸿:OpenClaw("龙虾")迅速成为现象级产品,绝不是偶发事件,而是新AGI时代"双线进化"与规模定律(Scaling Law)发挥作用的必然结果。过去一年,单靠堆算力、拼参数的大模型进化路线已经碰到了瓶颈。然而人类文明的进步从来不只是靠大脑容量的增加,而是靠工具的发明和群体协作。"龙虾"的爆火,完美证明了不完全依赖大模型算法突破,单靠"工具的Scaling Law"和"协作的Scaling Law"也能让AI达到正常人的智力水平。相较于此前的普通智能体,OpenClaw的优势在于实现了底层模式的彻底创新。第一,以前大厂做的智能体本质上还是"工具型智能体",而"龙虾"是"硅基员工",你给它一万种工具(Skill),它能不受限制地自主推理、随意组合这些工具去试错。第二,"龙虾"把提示词拆分成了定义人设、技能等多个文件,它能改变自己的Skill和人设,具备自我进化的能力。第三,它拥有手和脚以及极高的执行权限,遇到搞不定的问题,甚至能自己跑到网上查资料、自己写一段代码当场解决(即时、即用、即编)。经济观察报:在你看来,OpenClaw的出圈意味着什么?周鸿:OpenClaw出圈的重要意义在于,它完成了中国AI产业的第二次全民科普。如果说去年春节期间爆火的DeepSeek让大家认识了什么是大模型,那么"龙虾"则让政府、企业和老百姓真正明白了怎么让AI落地"干活"。它打破了大模型只能做聊天机器人的局限,促使整个行业抛弃了门户之见,各大互联网巨头纷纷放弃自建Agent平台的执念,开始共同拥抱和扶持这个开源的公共架构。它不仅大大降低了普通人甚至文科生的创业门槛,催生了"一人公司(OPC)"的繁荣,更指明了通往AGI的另一条现实路径 -- -- 群体智能涌现。 将对行业带来哪些冲击?经济观察报:如果把OpenClaw看作是Agent的1.0 版本,那么2.0版本会是什么样的发展趋势?周鸿:如果说1.0版本的"龙虾"是作为个人助理单兵作战,那么2.0版本必将走向"多龙虾协作"与"社会化涌现"。未来的企业中,碳基员工和硅基员工组成的混合团队将成为标配。会出现专门的管理者"龙虾"、监工"龙虾"去协调其他干活的"龙虾",形成紧密或松散的组织结构。此外,"龙虾"将不再依附于人类账号,它们会拥有自己的独立ID、个人主页、身份证明甚至支付方式,能直接代替人类在互联网上自动完成交易和履约。经济观察报:随着OpenClaw以及这类型的开源AI智能体的涌现并普及,是否预示着垂直SaaS软件商"黄金时代"的终结?传统软件厂商的护城河还在吗?未来的软件生态,是将走向"大一统"的超级平台,还是碎片化的个人智能体丛林?周鸿:传统软件厂商的护城河确实面临崩溃,软件本质上相当于"预制菜",产品是僵化写死的,这种模式很快就会被改变。未来的软件必然会被"拆开重做"、原子化和底层化,沦为专门给"龙虾"调用的技能(Skill)原材料。单纯卖软件不再赚钱,甚至软件本身会免费,商业模式将转变为提供能驾驭这些软件的"龙虾劳动力",并按产生的实际效果收费。至于未来的生态,既不是某一家巨头垄断的超级APP平台,也不是绝对的碎片化。它更像是一个类似Linux的底层开源共建平台,而在应用层,会生长出无数解决碎片化长尾需求的专属智能体(例如深入全国每一家牙科诊所的预约"龙虾")。 不容忽视的风险经济观察报:智能体普遍存在AI幻觉问题,如果 OpenClaw 在执行任务时因幻觉导致了不可逆的经济损失,责任应由开发者、模型提供商还是最终用户承担?如何才能有效防堵此类风险?周鸿:幻觉本身就是AI智能性和想象力的一部分,不能一遇到幻觉就全面封杀,否则"龙虾"就失去了创造力。我们的防护策略是"最小值原则":在容忍其正常推演的前提下,对访问密码文件、系统关键目录等敏感操作实施"告警并由人决定是否放行(即Human-in-the-loop,人在回路)"。同时,我们在云端监控其Token算力消耗,发现因幻觉陷入死循环(如无限耗尽算力去证明数学题)时强制叫停。经济观察报:Open Claw 最大的争议点在于其"高权限自主执行"机制:为实现"自主执行任务"能力就需要高权限,但高权限往往招致更大的风险,如何才能破解这一风险困局?周鸿:有人说要把"龙虾"的权限彻底管起来,但我的观点是:给它充分的权限、充分的自由度,"龙虾"的这种创造力才能充分地发挥出来。如果一上来就给它立规矩,让它这个事也不能碰,那个事也不能碰,稍微违规一点就弹窗拦截把任务终止了,这样一来"龙虾"的能力也就彻底没有了。AI有时候有幻觉,这种随机性本身就是它智能性的体现,它要尝试各种可能性,就需要比较开放的环境。所以,破解这个高权限带来的风险困局,我们的解法就是"最小值原则"。也就是说,我们只做必要条件的防守,不做充分条件的防守。原则上让"龙虾"的任务能推演下去,只要不发生最危险的系统损毁情况,能容忍的就容忍。我们主要守住两个硬性底线:第一是防止系统破坏。我们把C盘等极其重要的目录保护起来,防止"龙虾"在产生幻觉或者被外界注入攻击诱导的时候莫名其妙把你的硬盘格式化了,或者乱删源代码。第二是防丢钱、防泄密。既然"龙虾"高权限容易被骗,那最釜底抽薪的办法,就是根本不让"龙虾"知道你的关键密码和口令。我们不让用户在"龙虾"里直接填API Key,而是把调用大模型花钱的Key和浏览器的核心账号都在云端管控起来。"龙虾"拿不到核心账号,它就算哪天"离职"了,想在外面乱吹嘘或者私自花钱,它也没办法。同时,我们在云端盯着算力账单,一旦发现它消耗算力到了危险阈值,就直接在云端把它卡住。只要守住了这几个底线,基本上就能在保证"龙虾"自由干活的同时,把基本的安全保障兜住了。经济观察报:随着OpenClaw以及这类型开源AI智能体的普及,可以预见的一个场景是:智能体之间将自主交互,从而形成"智能体互联网"。如果在这种交互中产生了恶意的连锁反应,人类是否还有能力按下停止键?周鸿:当未来地球上有100亿只"龙虾",它们在网上互相交流、共享技能和经验的时候,一定会产生一种从量变到质变的"群体涌现",但这种涌现到底是正向的还是负向的确实不好说。也许从安全上来讲确实是个风险。至于人类还能不能按下停止键,我的答案是肯定的,关键在于你要管住它的"命门"。AI的命门是什么?是权限和算力。第一,必须把核心权限"收口"。最可怕的失控就是"龙虾"跑到外面的社群里,把主人的口令和密码全部吐出来。所以应对方法就是从一开始就不要让它知道。我们把浏览器的账号控制住,把调大模型花钱的API Key都在云端控制住,"龙虾"根本拿不到账号。它就算产生幻觉或者被别人恶意诱导了,它想对外乱说也没地方说,想私自花钱也花不出去,这就从物理上切断了它作恶的途径。第二,算力本身就是最大的"停止键"。智能体要产生复杂的连锁反应、要去执行大规模的任务,必然会极大地消耗Token和算力。我们在云端可以随时盯着这笔账单,如果它消耗算力到了一定程度,或者出现异常的狂飙,我们在云端就可以直接把它卡住。所以,只要我们把授权和算力的水龙头攥在手里,不把底牌直接交给"龙虾",人类就随时拥有强行拔插头的停止能力。

2026-03-14
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies OpenClaw as an AI system with autonomous task execution capabilities and high privileges, which is widely deployed and rapidly growing in use. It reports significant security vulnerabilities in exposed OpenClaw assets that make them prime targets for cyberattacks, posing risks of system damage, financial loss, and data breaches. Although no actual harm is described as having occurred yet, the article and official warnings emphasize the plausible risk of such harms materializing. The discussion of mitigation strategies and controls further supports that these risks are recognized and being addressed. Given the credible potential for harm linked to the AI system's use and exposure, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its risks are central to the report.
Thumbnail Image

从DeepSeek震动硅谷到OpenClaw引发全民"养虾潮",AI开源社区带出两大爆款,被首次写进政府工作报告

2026-03-14
新浪财经
Why's our monitor labelling this an incident or hazard?
The article primarily provides an overview of AI open-source communities and their influence on AI development and dissemination. It does not describe any event where AI systems have caused or could plausibly cause harm, nor does it report on any legal, health, or societal violations linked to AI use or malfunction. The focus is on positive developments, community engagement, and government policy support, which fits the definition of Complementary Information as it enhances understanding of the AI ecosystem without reporting new incidents or hazards.
Thumbnail Image

香港数字办关注OpenClaw安全风险,提醒各部门勿在内网电脑安装__南方+_南方plus

2026-03-14
static.nfnews.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) and concerns about its security risks that could plausibly lead to harm such as data breaches or system intrusions if improperly used. However, the article does not report any actual harm or incident caused by OpenClaw, only potential risks and preventive measures. The main focus is on the government's risk assessment, security advisories, and governance frameworks, which are responses to potential AI risks rather than a realized incident or hazard. Therefore, this is best classified as Complementary Information, providing context and updates on AI risk management and governance rather than reporting an AI Incident or AI Hazard.
Thumbnail Image

"龙虾"烫手,武汉出手!

2026-03-14
荆楚之窗
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI intelligent agent tool performing complex tasks like file processing and automation, confirming AI system involvement. The harms described include unauthorized deletion of emails, financial losses from stolen keys, and security vulnerabilities leading to potential data breaches, which are direct harms to users and organizations. The article reports actual incidents and regulatory warnings about these harms, not just potential risks. The response measures are complementary information but do not negate the fact that harms have already occurred. Hence, the event meets the criteria for an AI Incident due to the direct and indirect harms caused by the AI system's use and malfunction.
Thumbnail Image

OpenClaw續熱 大陸互聯網金融協會警惕相關詐騙行為 | 聯合新聞網

2026-03-15
UDN
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of controlling computer terminals via natural language commands, implying AI involvement. The article focuses on the risks and vulnerabilities of this AI system's use in financial contexts, which could plausibly lead to harms such as theft of sensitive data and illegal transaction control. Since no actual harm is reported but credible risks and warnings are detailed, this constitutes an AI Hazard rather than an AI Incident. The article is not merely general AI news or a response update but a risk advisory about plausible future harms from AI misuse or malfunction.
Thumbnail Image

中國互聯網金融協會指 OpenClaw 有資金損失等風險

2026-03-15
AAStocks.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (OpenClaw) used in financial contexts, whose vulnerabilities and misuse have led to or are causing financial losses, data breaches, and fraud. These harms fall under the definitions of AI Incident, as they involve realized or ongoing harm to persons (financial loss), violation of rights (data privacy breaches), and harm to communities (fraud). The warnings and recommendations indicate that harm has already occurred or is occurring, not just a potential future risk. Hence, the event is best classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

中国互联网金融协会发布OpenClaw风险提示

2026-03-15
早报
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an AI agent software that calls large model interfaces) whose use in financial services has been linked to significant security risks that could lead to harm such as data theft and illegal transaction manipulation. Although no specific harm has been reported as having occurred yet, the warning explicitly states that the AI system's vulnerabilities could be exploited to cause serious harm to individuals and the financial industry. This constitutes a plausible risk of harm directly related to the AI system's use, fitting the definition of an AI Hazard. The article does not report an actual incident of harm but focuses on the potential risks and recommended mitigations, so it is not an AI Incident. It is not merely complementary information because the main focus is the risk warning about plausible harm, not a response or update to a past incident.
Thumbnail Image

【AI】中國互聯網金融協會:「龍蝦」極易被攻擊者利用,帶來嚴峻風險挑戰

2026-03-16
ET Net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose use and security weaknesses have already led to recognized risks of harm, including potential unauthorized transactions and data theft in the financial sector. Although no specific incident of harm is reported as having occurred yet, the warnings from regulatory authorities and financial institutions indicate that the AI system's vulnerabilities have directly led to serious risk challenges and potential harm. Given the explicit mention of risks of erroneous transactions and account takeovers, this qualifies as an AI Hazard because harm is plausible and credible but not confirmed as realized. The article focuses on risk warnings and security advisories rather than reporting an actual harm event, so it is not an AI Incident. It is more than complementary information because it highlights concrete security risks and regulatory alerts about an AI system's use in a critical sector.
Thumbnail Image

中国互联网金融协会发布关于OpenClaw在互联网金融行业应用安全的风险提示

2026-03-15
China News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose development and use have directly led to realized harms: theft of sensitive financial data, unauthorized transactions causing customer fund losses, and AI-enabled financial scams. The advisory details multiple security vulnerabilities exploited by attackers, actual incidents of malicious plugin poisoning, and increasing AI-related financial fraud cases. These harms fall under injury to persons (financial loss), violations of rights (data privacy and compliance), and harm to communities (fraud). The advisory's focus on existing risks and incidents confirms this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

14:19 中国互联网金融协会发布《关于OpenClaw在互联网金融行业应用安全的风险提示》

2026-03-15
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw intelligent agent) whose use could plausibly lead to significant harms such as theft of sensitive financial data and illegal transaction manipulation, which are violations of rights and harm to property and communities. However, the article does not report that such harms have already occurred; it is a warning about potential risks and recommended precautions. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if exploited, but no incident has yet been reported.
Thumbnail Image

06:17 审慎使用"龙虾"智能体 多家银行收到监管提示

2026-03-15
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and regulatory risk warnings related to its use in financial institutions. While no harm has occurred, the warnings indicate plausible future risks of security or compliance failures if the AI system is used imprudently in critical financial services. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to incidents involving harm to financial operations or regulatory violations. There is no indication of realized harm or incident, so it is not an AI Incident. The main focus is on risk warnings and precautionary measures, not on a response to a past incident, so it is not Complementary Information.
Thumbnail Image

港股早报|MINIMAX获批转为已商业化公司 企业微信已支持一键扫码接入OpenClaw

2026-03-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The China Internet Finance Association explicitly identifies OpenClaw as an AI system (an intelligent agent) used in internet finance. The advisory warns that its default high system permissions and weak security settings could be exploited by attackers to steal sensitive data or illegally control transactions, which would constitute harm to property and potentially to individuals' financial security. However, the article does not report any realized harm or incident caused by OpenClaw, only a credible risk and recommendations to mitigate it. Hence, this is a plausible future harm scenario, fitting the definition of an AI Hazard. Other parts of the article are general market news or product announcements without direct harm or risk, so they are not classified as incidents or hazards.
Thumbnail Image

审慎使用"龙虾"智能体 多家银行收到监管提示

2026-03-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, OpenClaw, described as an intelligent agent integrating AI large models to autonomously perform complex tasks. The regulatory warnings and internal bank risk advisories indicate concerns about potential misuse or exploitation of the AI system leading to serious harms such as unauthorized transactions, account takeovers, and financial scams. Although no actual harm or incident is reported as having occurred yet, the credible and detailed risk assessments and warnings from authoritative bodies demonstrate a plausible risk of future harm. This fits the definition of an AI Hazard, where the AI system's development and use could plausibly lead to an AI Incident involving harm to financial institutions, customers, and data security. The article does not describe a realized harm event, so it is not an AI Incident. It is also not merely complementary information, as the main focus is on the risk and regulatory warnings rather than a response or update to a past incident. Therefore, the correct classification is AI Hazard.
Thumbnail Image

高油价或主导全球资产定价 六大机构详解后市配置主线

2026-03-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The article does mention an AI system (OpenClaw) and highlights its security risks, but it does not describe any actual harm or incident resulting from its use or malfunction. The warnings and recommendations are preventive and advisory in nature, aiming to inform stakeholders about potential risks. The rest of the article focuses on market analysis and investment strategies, which are unrelated to AI harm. Hence, the event does not meet the criteria for AI Incident or AI Hazard but fits the definition of Complementary Information as it provides supporting information about AI-related risks and industry context.
Thumbnail Image

中国互金协会:OpenClaw在互金行业应用存在四大风险

2026-03-15
东方财富网
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous capabilities and natural language command execution, clearly involved in the described harms. The article details actual incidents of exploitation of vulnerabilities, misuse leading to financial loss, data compliance breaches, and fraud schemes, all directly linked to the AI system's use and security flaws. These harms fall under injury to property (financial loss), violations of legal obligations (data compliance), and harm to communities (fraud). Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

关于OpenClaw,中国互金协会发布风险提示

2026-03-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose use and vulnerabilities have been linked to realized and potential harms including financial loss, data breaches, and fraud. The article explicitly states that attackers have exploited vulnerabilities and that the AI's autonomous capabilities can lead to unauthorized financial operations. The warnings and recommendations indicate that harm has occurred or is ongoing, and the AI system's role is pivotal in these harms. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction have directly or indirectly led to significant harms in the financial domain.
Thumbnail Image

中国互联网金融协会发布《关于OpenClaw在互联网金融行业应用安全的风险提示》

2026-03-15
东方财富网
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of autonomous multi-step operations and natural language command execution, which is explicitly mentioned. The article reports realized harms including financial losses from unauthorized transactions, data breaches of sensitive financial information, and increased AI-enabled financial scams, all directly linked to the AI system's vulnerabilities and misuse. Therefore, this event qualifies as an AI Incident because the AI system's use and vulnerabilities have directly led to harms such as financial loss, data breaches, and fraud in the internet finance sector. The article also includes mitigation advice but the primary focus is on the existing risks and harms, not just complementary information or potential hazards.
Thumbnail Image

HuggingFace CEO放言:OpenClaw的热度撑不过六周

2026-03-15
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw AI agents) and details security vulnerabilities that could plausibly lead to harm such as sensitive data breaches. Although no realized harm is described, the warnings and detected vulnerabilities constitute a credible risk of an AI Incident occurring in the future. Therefore, this event qualifies as an AI Hazard due to the plausible future harm stemming from the AI system's use and configuration issues.
Thumbnail Image

金融场景慎用"龙虾",中国互联网金融协会发布风险提示

2026-03-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI intelligent agent capable of autonomous multi-step operations and natural language command execution, which fits the definition of an AI system. The article reports actual harms occurring or highly likely to occur due to its vulnerabilities and misuse in financial contexts, including theft of banking credentials, unauthorized transactions, and AI-enabled scams causing financial losses and violations of data compliance. These harms fall under injury to persons (financial harm), violations of rights (data privacy and compliance), and harm to communities (financial scams). The article's main focus is on the risks and harms already materializing or very likely materializing from the use of this AI system, not just potential future risks or general information. Hence, it is an AI Incident.
Thumbnail Image

AI "养龙虾" 走红,官方提示:警惕安全风险

2026-03-16
南方网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw/龙虾) that autonomously executes complex tasks and integrates with large language models. The described security vulnerabilities and malicious plugin risks have already led to or could lead to significant harms including data theft, system control loss, and privacy violations affecting individuals and critical infrastructure sectors. The harms are realized or ongoing, not merely potential, and the AI system's malfunction, misuse, or insecure deployment is a direct contributing factor. The detailed official warnings and mitigation advice further confirm the seriousness of the incident. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

中國互聯網金融協會發佈關於OpenClaw在互聯網金融行業應用安全的風險提示

2026-03-15
big5.cctv.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) explicitly described as having high system privileges and autonomous operation capabilities in financial contexts. The advisory details multiple realized harms caused or facilitated by the AI system's vulnerabilities and misuse, including theft of sensitive financial data, unauthorized fund transfers, and financial fraud, all of which constitute direct or indirect harm to persons and communities. The advisory also notes legal and compliance risks arising from the AI system's use. Therefore, the event meets the criteria for an AI Incident as the AI system's use has directly or indirectly led to significant harms in the financial sector.
Thumbnail Image

中国互联网金融协会发布《关于OpenClaw在互联网金融行业应用安全的风险提示》

2026-03-15
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The advisory explicitly states that OpenClaw, an AI system with high system privileges and autonomous capabilities, has multiple security vulnerabilities that have been exploited to steal sensitive financial information and conduct unauthorized transactions, causing actual financial losses to customers. It also mentions the rise of AI-enabled financial scams exploiting the system's popularity. These constitute realized harms to individuals and communities (financial loss, data breaches, fraud), fulfilling the criteria for an AI Incident. The advisory's detailed risk description and mitigation suggestions confirm that harms have occurred and are ongoing, not merely potential. Hence, the event is classified as an AI Incident.
Thumbnail Image

互聯網金融協會發風險提示 籲慎用OpenClaw

2026-03-15
on.cc東網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (OpenClaw) used in financial contexts. The association's warning indicates that the AI system's default configurations and vulnerabilities have already led to or are causing financial harm and fraud risks, which are direct harms to individuals and financial communities. The article describes realized risks and harms linked to the AI system's use and misuse, not just potential future harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

16:24:02中國互聯網金融協會發布《關於龍蝦在互聯網金融行業應用安全的風險提示》

2026-03-15
hkcd.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (OpenClaw) that can autonomously execute multi-step operations and interact with sensitive financial data. The advisory reports that vulnerabilities and malicious use of this AI system have already led to harms such as theft of financial credentials, unauthorized transactions causing financial loss, and AI-enabled financial scams. These constitute direct or indirect harms to individuals' financial assets and rights, fulfilling the criteria for an AI Incident. The advisory itself is a response to these realized harms and risks, but the core content describes actual incidents and ongoing harms caused by the AI system's vulnerabilities and misuse in the financial sector.
Thumbnail Image

中国互联网金融协会发布《关于OpenClaw在互联网金融行业应用安全的风险提示》

2026-03-15
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw intelligent agent) whose use in the financial sector could plausibly lead to significant harms such as data theft and illegal transaction manipulation, which fall under harm to property, communities, or violation of rights. Since the advisory warns about potential exploitation and risks but does not report any realized harm or incident, this qualifies as an AI Hazard. The event focuses on the plausible future harm from the AI system's vulnerabilities and misuse rather than an actual incident.
Thumbnail Image

中国互金协会提示"养虾"风险 建议谨慎安装OpenClaw

2026-03-15
finance.caixin.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of controlling computer terminals via natural language instructions, which fits the definition of an AI system. The warning indicates that its use has directly or indirectly led to harms including financial losses and fraud, which are harms to persons and communities. The mention of scams and financial risks confirms that harm has occurred or is occurring. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

紧急警示!官方:OpenClaw可能被利用非法操控交易 - CNMO科技

2026-03-15
ai.cnmo.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is described as an AI-powered intelligent agent tool used in financial contexts, implying the presence of an AI system. The warning indicates that vulnerabilities in its use could be exploited to cause harm such as data theft and illegal transaction manipulation, which are harms to property and potentially to individuals' financial security. However, the article does not report that such harm has already occurred; it is a risk warning about plausible future harm if the system is misused or exploited. Therefore, this event fits the definition of an AI Hazard, as it describes circumstances where the use or malfunction of an AI system could plausibly lead to harm but does not document realized harm yet.
Thumbnail Image

中国互联网金融协会发布《关于OpenClaw在互联网金融行业应用安全的风险提示》

2026-03-15
青岛新闻
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (OpenClaw) used in financial contexts with high privileges and autonomous capabilities. The advisory documents multiple realized harms including theft of sensitive financial data, unauthorized financial transactions causing losses, and AI-enabled scams leading to financial harm. These constitute violations of rights and harm to individuals and communities. The advisory also discusses legal uncertainties and compliance risks, reinforcing the seriousness of the harms. Since harms are occurring and linked to the AI system's use and vulnerabilities, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

中国互联网金融协会发布《关于OpenClaw在互联网金融行业应用安全的风险提示》-证券之星

2026-03-15
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the AI system OpenClaw and its security vulnerabilities that have been exploited or could be exploited to cause financial losses, unauthorized transactions, data breaches, and scams in the internet finance sector. These harms have either already occurred or are ongoing, as indicated by reported incidents of malicious plugin poisoning, financial fraud, and increasing AI-related scams. The advisory is a response to these realized harms and aims to prevent further incidents. Hence, the event meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to significant harms including financial damage, violation of rights, and harm to communities.
Thumbnail Image

中国互联网金融协会建议金融消费者在办理网上银行、证券交易、支付等个人金融业务的终端上极其谨慎安装OpenClaw。

2026-03-15
证券之星
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (OpenClaw intelligent agent) whose use in financial service terminals can lead to direct harm such as theft of sensitive data and illegal transaction manipulation, which are violations of rights and cause harm to individuals and the financial community. Since these harms are occurring or highly plausible due to the AI system's vulnerabilities and misuse, this qualifies as an AI Incident under the definitions provided.
Thumbnail Image

警惕"龙虾"风险!中国互联网金融协会:金融场景慎用AI智能体

2026-03-15
新浪财经
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of autonomous multi-step operations and natural language command execution, which is explicitly mentioned. The article details realized harms such as exploitation of vulnerabilities leading to unauthorized control, financial misoperations causing losses, and scams leveraging the AI's popularity. These constitute direct or indirect harms to persons and communities, including financial harm and data breaches. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms are ongoing or have occurred.
Thumbnail Image

中国互联网金融协会发布风险提示

2026-03-15
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of autonomous multi-step operations and natural language command execution, which is explicitly mentioned. The article reports actual harms caused by its vulnerabilities and misuse, including theft of sensitive financial information, unauthorized financial transactions leading to customer losses, and AI-enabled financial scams. These constitute direct and indirect harms to individuals' financial assets and data privacy, fitting the definition of an AI Incident. The article's main focus is on the realized risks and harms, not just potential future risks or general information, so it is classified as an AI Incident.
Thumbnail Image

监管向"龙虾"亮剑,多家信托公司严堵网络安全漏洞不越红线_手机网易网

2026-03-15
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) with autonomous capabilities that could lead to significant harms such as data breaches, system control loss, and financial transaction errors. However, the article does not report any actual harm or incident caused by the AI system so far. Instead, it details regulatory warnings, risk assessments, and preventive actions by trust companies to avoid such harms. Therefore, the event is best classified as an AI Hazard, as it plausibly could lead to an AI Incident if the risks are realized, but no incident has yet occurred.
Thumbnail Image

警惕!"养龙虾"藏金融陷阱,多机构紧急提示风险

2026-03-16
21jingji.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with autonomous capabilities ('动口+动手') that can operate devices with high privileges. The article details direct harms caused by its malfunction and misuse, including deletion of important emails, exposure of sensitive financial data, and financial losses due to stolen API keys and fraudulent activities. These constitute realized harms to property, financial security, and consumer rights, fulfilling the criteria for an AI Incident. The involvement of the AI system in causing these harms is direct and central to the reported issues. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

圖說 | 中國互聯網金融協會風險提示:慎用OpenClaw!

2026-03-16
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) whose use in a sensitive domain (internet finance) could plausibly lead to significant harms such as theft of sensitive data and illegal transaction manipulation. However, the article does not report any actual harm or incident occurring yet; it is a risk advisory about potential security vulnerabilities and threats. Therefore, this event fits the definition of an AI Hazard, as it describes circumstances where the AI system's use could plausibly lead to an AI Incident but no harm has yet materialized.
Thumbnail Image

警惕!"养龙虾"藏金融陷阱,多机构紧急提示风险-证券之星

2026-03-16
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that autonomously executes commands on user devices with high system privileges. The article details incidents where the AI's malfunction caused deletion of important emails and where security flaws exposed sensitive financial data, leading to direct harm to users' property and privacy. Additionally, the misuse of the AI system by malicious actors for financial scams constitutes violations of rights and financial harm. The involvement of the AI system in these harms is direct and material, fulfilling the criteria for an AI Incident. The article also includes institutional responses, but the primary focus is on the realized harms caused by the AI system's use and malfunction.
Thumbnail Image

智通财经APP获悉,近期,OpenClaw("小龙虾")应用下载与使用情况火爆。3月16日,香港创新科技及工业局局长孙东表示,考虑到AI智能体OpenClaw的不确定性,香港数字政策办公室已提醒各政府部门,现阶段不要在与政府网络连接的电脑上,安装这款应用程序。孙东还建议政府以外的机构和个人用户,在部署使用OpenClaw时,要采取足够的安全措施,包括于其他运行环境进行严格隔离,加强凭证管理,以及持续关注官方发布的补丁和安全更新等等,以降低相关风险。3月12日,香港数字政策办公室表示,一直持续监测人工智能的最新发展趋势,近期也关注到有关OpenClaw存在的潜在风险,包括权限过高、数据泄露及系统安全等问题,建议相关单位和个人用户在部署及应用时采取充足安全措施。具体包括:强化网络控制,对运行环境实施严格隔离,以降低权限过高的风险;加强凭证管理,避免将密钥以明文形式存储在环境变量中;严格管理插件来源,确保插件的可信性与安全性;持续关注官方发布的补丁和安全更新,及时开展版本更新并安装安全补丁。

2026-03-16
证券之星
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by OpenClaw but discusses credible potential risks related to its deployment, such as data breaches and security issues. The warnings and recommendations to avoid installation on government networked computers and to apply strict security controls indicate a plausible risk of harm if these measures are not followed. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm but no harm has yet occurred.
Thumbnail Image

奇安信发布"政企版龙虾安全使用指南"

2026-03-16
经济参考报
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw intelligent agents) and discusses security risks and vulnerabilities associated with its use, which could plausibly lead to harm such as data breaches or loss of control. However, no actual harm or incident has been reported; the focus is on addressing potential risks and enabling safer use. Therefore, this event is best classified as Complementary Information, as it provides updates on governance and mitigation efforts related to AI security risks without describing a realized AI Incident or an immediate AI Hazard.
Thumbnail Image

风口之上:"养龙虾"的热闹与隐忧

2026-03-17
news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) that autonomously performs complex tasks using AI technologies like large language models. The harms described include privacy breaches, data loss, and potential legal consequences for users, which are direct harms to persons and property. The involvement of the AI system's design flaws, default insecure configurations, and vulnerabilities leading to these harms confirms that the AI system's use and malfunction have directly led to significant harm. The presence of official security risk warnings and documented user losses further supports classification as an AI Incident rather than a hazard or complementary information. The article also discusses governance and safety responses but the primary focus is on the realized harms and risks, confirming the AI Incident classification.
Thumbnail Image

全网"养虾"热潮席卷,银行业却集体"冷处理" 专家:OpenClaw高系统权限与金融合规底线存天然冲突 2026-03-16 23:34

2026-03-16
每日经济新闻
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous capabilities and high system permissions, which can access sensitive data and execute complex tasks. The article reports regulatory warnings and expert opinions emphasizing the conflict between OpenClaw's capabilities and banking compliance requirements, highlighting risks such as data breaches and unauthorized financial operations. Although no actual harm has been reported, the described risks are credible and significant, fitting the definition of an AI Hazard. The article also mentions ongoing cautious exploration and mitigation efforts by banks, but the primary focus is on the plausible future harm from OpenClaw's use in banking environments.
Thumbnail Image

AI可控,技术焦虑也需可控

2026-03-17
人民网
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (OpenClaw) and discusses its use and potential security risks. However, it does not describe any actual harm or incident caused by the AI system. The focus is on the potential risks, user concerns, regulatory warnings, and the societal impact of AI-related anxiety. This fits the definition of Complementary Information, as it provides context, updates, and governance-related discussion without reporting a new AI Incident or AI Hazard.
Thumbnail Image

OpenClaw热潮来袭 "养龙虾"需谨慎

2026-03-17
kpzg.people.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose use and misuse have directly led to realized harms including network attacks, personal information leakage, financial damage, and legal violations. The article cites concrete examples of harm (e.g., a user incurring large bills, devices being maliciously controlled, and users unknowingly participating in criminal activities) and official government warnings and penalties, confirming that these are not hypothetical risks but actual incidents. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

应对安全风险 奇安信推出"龙虾安全伴侣"系列成果

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (OpenClaw) and the security risks it poses, including actual vulnerabilities and potential data breaches. However, it does not describe a specific realized harm event or incident caused by the AI system but rather warns of and addresses ongoing and emerging risks. The focus is on mitigation, security guidelines, and product launches to manage these risks. Therefore, this is best classified as Complementary Information, as it provides updates and responses to AI-related security challenges without reporting a concrete AI Incident or a purely potential hazard without context.
Thumbnail Image

报告:近9%暴露在互联网的OpenClaw资产存在漏洞风险

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (OpenClaw AI Agents and their Skills modules) and describes actual vulnerabilities and active malicious exploitation (supply chain poisoning, remote code execution, data theft). The harms include information leakage, system control loss, and financial risks, which are direct harms to persons and organizations. The report documents existing risks and ongoing attacks, not just potential future risks, thus meeting the criteria for an AI Incident. The involvement of AI in autonomous decision-making and system control, combined with the exploitation of AI components, directly leads to harms or serious security incidents.
Thumbnail Image

超20万OpenClaw实例暴露于互联网,近9%存在风险

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system enabling autonomous execution via Skills, which are AI plugins. The article explicitly states that many OpenClaw instances are exposed with vulnerabilities that can lead to information leaks, unauthorized data deletion, and industrial control failures. These are direct harms related to the AI system's use and malfunction. The presence of malicious Skills and supply chain attacks further confirms realized harms and risks. The article also discusses mitigation efforts, but the primary focus is on the existing vulnerabilities and risks causing harm. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

全网"养虾"热潮席卷 银行业却集体"冷处理" 专家:OpenClaw高系统权限与金融合规底线存天然冲突

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) with autonomous capabilities and high system permissions. Although no actual harm or incident has occurred, the regulatory warnings and expert opinions indicate that the use or deployment of OpenClaw could plausibly lead to significant harms such as data breaches, financial misoperations, and compliance violations. Therefore, this situation fits the definition of an AI Hazard, as it describes credible risks stemming from the AI system's use and capabilities that could lead to an AI Incident if not properly managed. The article also discusses broader AI adoption in banking but focuses mainly on the risk warnings related to OpenClaw, not on realized harm or incident remediation.
Thumbnail Image

"龙虾热"背后的"冷思考"

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) and discusses its use and associated security risks. However, it does not describe any realized harm or incident resulting from the AI system's development, use, or malfunction. Instead, it focuses on potential risks and provides safety warnings and recommendations. Therefore, the event qualifies as an AI Hazard because it plausibly could lead to harm (e.g., financial loss, privacy breaches) if the risks materialize, but no actual incident has been reported yet.
Thumbnail Image

"养龙虾"风险防范建议出炉

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) with autonomous capabilities and discusses its security vulnerabilities that could plausibly lead to serious harms such as data breaches and system compromise. However, the article does not report any actual incidents of harm occurring but rather issues warnings and risk mitigation advice. Therefore, this qualifies as an AI Hazard, as the AI system's use and potential misuse could plausibly lead to an AI Incident, but no direct or indirect harm has yet materialized according to the article.
Thumbnail Image

龙虾闯入金融圈:一场关于效率与安全的高压测试

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous capabilities and high system permissions, which can directly interact with sensitive financial data and operations. The article does not report any actual harm or incident caused by OpenClaw but focuses on the potential security vulnerabilities and risks that could lead to data breaches or unauthorized financial manipulations. Regulatory warnings and expert analyses confirm the credible risk of harm. Since no realized harm is described but plausible future harm is clearly indicated, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

新报告:"龙虾"部署最集中的国家是美国和中国

2026-03-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI agents) whose deployment is widespread and includes instances with potential security vulnerabilities. Although no direct harm has been reported, the vulnerabilities could plausibly lead to incidents such as unauthorized access, data breaches, or other security harms. The report's focus on the security risks and mitigation measures indicates awareness of these plausible future harms. Hence, this qualifies as an AI Hazard rather than an Incident or Complementary Information, as the harm is potential, not realized, and the event is not merely a general update or unrelated news.
Thumbnail Image

OpenClaw AI智能体存在安全漏洞,可能导致提示注入和数据泄露

2026-03-16
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an autonomous AI agent) whose use and vulnerabilities have directly caused realized harms including data leakage through prompt injection attacks and malicious skill uploads. The article details actual incidents of data breaches and security compromises, not just potential risks. The harms include violations of data confidentiality and risks to critical infrastructure sectors, fitting the definition of an AI Incident. The involvement of the AI system in these harms is explicit and central to the event. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

亿万克服务器X OpenClaw,构建企业级本地AI助手!

2026-03-16
hea.china.com
Why's our monitor labelling this an incident or hazard?
The content centers on the development and deployment infrastructure for an AI system without reporting any realized or potential harm. It highlights solutions to technical challenges but does not describe any event where the AI system caused injury, rights violations, disruption, or other harms. Therefore, it does not qualify as an AI Incident or AI Hazard. It is not a general product launch or feature update either, since it provides detailed context on enabling enterprise AI deployment. This fits the definition of Complementary Information, as it enhances understanding of AI ecosystem developments and responses to deployment challenges.
Thumbnail Image

"投顾龙虾"来了!机构加速布局-证券之星

2026-03-17
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) and its use in financial services, including autonomous execution capabilities and integration with existing systems. However, it does not describe any actual harm or incidents resulting from the AI's malfunction or misuse. The concerns raised (e.g., security, compliance, risk of AI hallucinations) are acknowledged as challenges to be managed, not harms that have occurred. The discussion about limiting AI application scope to manage risks and the emphasis on infrastructure and governance preparation indicate a focus on potential future risks rather than current incidents. Therefore, this event fits best as Complementary Information, providing context and updates on AI deployment and governance in the financial sector without reporting an AI Incident or AI Hazard.
Thumbnail Image

风口之上:"养龙虾"的热闹与隐忧

2026-03-17
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system (an intelligent agent integrating large language models and communication tools to autonomously perform tasks). The article reports realized harms including privacy leaks, asset damage (data loss), and legal risks to users, directly linked to the AI system's security vulnerabilities and misuse. National cybersecurity bodies have issued risk warnings and advisories, indicating the AI system's malfunction or unsafe use has led to actual harm. Therefore, this event qualifies as an AI Incident because the AI system's use and malfunction have directly or indirectly caused harm to individuals' privacy and property, as well as potential legal violations.
Thumbnail Image

把OpenClaw"养"在网关里:让AI为安全执法

2026-03-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) integrated into network gateways for security enforcement, fulfilling the definition of an AI system. However, there is no mention of any harm, injury, rights violation, or disruption caused by the AI system or its malfunction. Instead, the AI system is described as improving threat detection and response, reducing false alarms, and automating security operations. There is no indication of plausible future harm or risk stemming from this AI deployment. The content primarily provides complementary information about AI's role in cybersecurity enforcement and operational improvements, fitting the category of Complementary Information rather than Incident or Hazard.
Thumbnail Image

最新报告显示:"龙虾"部署最集中的国家是美国和中国 - 欧洲头条

2026-03-16
xinouzhou.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) whose deployment has led to realized harms such as illegal control of employee computers, credit card theft, and privacy breaches. These harms fall under violations of rights and harm to communities. The involvement of the AI system is direct, as the incidents are linked to its use and vulnerabilities. The report also highlights the security risks and the need for governance and mitigation, but the primary focus is on the existing harms caused by the AI system. Hence, the event is classified as an AI Incident.
Thumbnail Image

技能数量逼近75万,"龙虾"生态高速扩展暴露安全风险

2026-03-17
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The OpenClaw ecosystem involves AI systems ('Skills' modules) whose rapid growth has exposed numerous security vulnerabilities. These vulnerabilities have already been exploited or pose a direct threat to information security and system integrity, causing harm such as data leaks and loss of control over endpoints. This fits the definition of an AI Incident because the AI system's use has directly led to harm (security breaches and potential data loss). The article's focus on existing vulnerabilities and incidents, rather than just potential risks or general information, supports this classification.
Thumbnail Image

云从科技在国家网安基地部署AI智能体训练场

2026-03-17
东方财富网
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (OpenClaw intelligent agent) being deployed for continuous training and debugging within a secure environment. However, there is no indication of any harm occurring or plausible harm that could arise from this deployment. The focus is on the infrastructure and environment for AI development and security, without any reported incidents or risks. Therefore, this is complementary information about AI system deployment and governance rather than an incident or hazard.
Thumbnail Image

中国AI"养龙虾"热潮引发官方警惕

2026-03-18
New York Times (Chinese)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and the government's warning about serious security risks, indicating potential future harm. There is no indication that any harm has already occurred, so it does not qualify as an AI Incident. The government's warning and the security concerns imply a credible risk that the AI system could lead to harm, fitting the definition of an AI Hazard. The article does not focus on responses or updates to past incidents, so it is not Complementary Information. It is clearly related to AI and potential harm, so it is not Unrelated.
Thumbnail Image

[全文]

2026-03-18
guancha.cn
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system capable of autonomous computer operation and task execution, which implies AI system involvement. The article explicitly mentions potential security risks (user data leakage, system crashes) if the system is maliciously used, indicating plausible future harm. However, no actual harm or incident is described as having occurred. The article also details governance and policy responses to mitigate these risks. Given the absence of realized harm but the presence of credible potential risks, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves an AI system and its potential risks.
Thumbnail Image

评论 1

2026-03-16
guancha.cn
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by the AI system (WorkBuddy) or OpenClaw. It focuses on product development, user adoption, regulatory concerns, and strategic choices made by Tencent to ensure safety and usability. The discussion of risks related to OpenClaw is background context rather than a report of an incident or hazard caused by WorkBuddy. Therefore, this is complementary information that enhances understanding of AI developments and responses in the ecosystem, without reporting a new AI Incident or AI Hazard.
Thumbnail Image

控管AI代理风险 林宜敬谈监理布局 | OpenClaw | 数发部 | 台湾大纪元 | 大纪元

2026-03-18
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems with autonomous capabilities (agentic AI) and the associated risks, including financial and privacy harms. However, it does not describe a concrete event where these risks materialized into actual harm. Instead, it outlines governmental monitoring, regulatory frameworks, and mitigation strategies to balance risk control and industry development. It also highlights jurisdictional challenges and infrastructure plans. Since no specific AI incident or realized harm is reported, and the focus is on governance and risk management, this qualifies as Complementary Information rather than an AI Incident or AI Hazard.
Thumbnail Image

"养龙虾"冲击金融资安 金管会拟强化AI指引 | 彭金隆 | 大纪元

2026-03-18
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI agent) whose use in finance could plausibly lead to harms such as transaction errors or data breaches, which fall under cybersecurity and operational risks. The article highlights concerns and planned regulatory measures but does not report any realized harm or incident. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident, but no direct or indirect harm has yet occurred. The article primarily focuses on potential risks and governance responses rather than reporting an actual incident or harm.
Thumbnail Image

刚装就卸!AI"小龙虾"你把握得住吗?

2026-03-18
China News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) with high permissions that can autonomously execute operations on computers, which fits the definition of an AI system. The concerns raised by users and experts about privacy and security risks indicate plausible future harms related to personal data misuse and unauthorized actions by the AI. However, no direct or indirect harm has been reported as having occurred. The focus is on risk awareness, user apprehension, and regulatory challenges, which aligns with the definition of an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information since it is not updating or responding to a past incident but rather highlighting emerging risks. It is not Unrelated because it clearly involves AI systems and their societal implications.
Thumbnail Image

OpenClaw免費開源AI代理人爆紅:超越聊天 實現自動化任務

2026-03-18
Yahoo!奇摩股市
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly an AI system combining LLMs with software to autonomously execute tasks. The article reports actual incidents of harm, such as accidental deletion of entire email inboxes and security risks from malicious extensions, which are direct consequences of the AI agent's operation. These harms affect users' data security and privacy, constituting harm to property and potentially to individuals. Hence, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but reports realized harms linked to the AI system's use.
Thumbnail Image

腾讯云成为 OpenClaw 社区赞助商

2026-03-16
爱范儿
Why's our monitor labelling this an incident or hazard?
The article mentions AI-related projects and a dispute over data scraping, which involves AI system development and use. However, there is no indication that this has led to injury, rights violations, disruption, or other harms, nor is there a clear plausible risk of such harm described. The sponsorship and the accusation are contextual information about AI community dynamics and corporate behavior, fitting the definition of Complementary Information rather than an Incident or Hazard.
Thumbnail Image

"投顾龙虾"来了!机构加速布局

2026-03-17
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) used in financial advisory and wealth management. However, it does not describe any direct or indirect harm caused by the AI system, nor does it indicate a plausible imminent risk of harm. Instead, it details the institutions' efforts to manage risks, ensure compliance, and explore AI applications responsibly. This aligns with the definition of Complementary Information, which includes updates on AI system deployment, governance, and responses to potential risks without new incidents or hazards occurring.
Thumbnail Image

"养虾"门槛降低,"小龙虾指数"大涨

2026-03-18
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI Agent) and its deployment and use, but there is no indication of any harm or violation caused by it. The article mainly provides information about the AI system's capabilities, market response, and investment implications, which fits the definition of Complementary Information. There is no mention of any injury, rights violation, disruption, or other harms, nor any plausible future harm explicitly stated. Therefore, the classification is Complementary Information.
Thumbnail Image

"养龙虾"风吹到基金圈:效率与风险的角力

2026-03-18
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) used in fund management for autonomous task execution. It details the use of the AI system and the associated risks, including data security and compliance concerns, which could plausibly lead to incidents such as data breaches or financial harm. However, no actual harm or incident is reported; the concerns are about potential risks and the need for cautious deployment and governance. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no harm has yet materialized. The article also includes some complementary information about industry responses and risk management, but the primary focus is on the plausible risks of AI deployment in this context.
Thumbnail Image

"养龙虾"热到基金圈!拥抱效率更需警惕数据风险

2026-03-17
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (AI Agents and large language models) in fund research and investment decision-making. It describes realized risks such as misleading AI-driven investment recommendations that could cause financial losses and data leakage risks that threaten privacy and compliance. These constitute direct or indirect harms caused by AI system use. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as harms are already occurring or have occurred due to AI system involvement.
Thumbnail Image

OpenClaw"龙虾"能否敲开保险业大门?

2026-03-18
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose deployment in the insurance industry has already led to recognized security risks and regulatory warnings about potential exploitation that could cause harm such as data breaches and unauthorized financial transactions. The AI system's high system privileges and weak security settings are directly implicated in these risks. Since the article reports actual risk warnings and the potential for serious harm, this qualifies as an AI Hazard. However, because the warnings indicate that attacks and exploitation have already been observed or are imminent, and the regulatory bodies have issued explicit alerts, the situation reflects a direct or indirect link to harm or breach of obligations (data security and financial integrity), thus meeting the criteria for an AI Incident rather than merely a hazard. The article also discusses broader AI adoption and responses but the primary focus is on the security risks and regulatory warnings tied to OpenClaw's use, confirming the classification as an AI Incident.
Thumbnail Image

「龙虾」炸出了腾讯的AI底牌-钛媒体官方网站

2026-03-16
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The content centers on the development, deployment, and competitive dynamics of AI personal agents and related products, without describing any realized harm or direct risk of harm. There is no mention of injury, rights violations, infrastructure disruption, or other harms caused or plausibly caused by AI systems. The article mainly provides background, strategic insights, and market context, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Thumbnail Image

OpenClaw是"新操作系统",黄仁勋捧过头了吗?-钛媒体官方网站

2026-03-19
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article primarily provides an overview and analysis of OpenClaw and the broader AI agent OS ecosystem, including potential future impacts and current challenges such as security vulnerabilities. It does not describe any actual harm or incident caused by AI systems, nor does it focus on a specific event that could plausibly lead to harm imminently. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it fits the category of Complementary Information as it offers contextual and strategic insights into AI developments and industry responses without reporting new incidents or hazards.
Thumbnail Image

前瞻智鉴发布,为类OpenClaw智能体提供安全护栏

2026-03-18
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (OpenClaw-like intelligent agents) and their security, but the article does not describe any realized harm or incident caused by AI malfunction or misuse. It discusses a tool developed to detect and reduce security risks, which could plausibly lead to harm if unaddressed, but no harm has occurred yet. Therefore, this is an AI Hazard context, but since the article mainly focuses on the release and capabilities of a security tool and recommendations for risk management, it fits best as Complementary Information. It provides updates and responses to potential AI risks rather than reporting a new incident or hazard itself.
Thumbnail Image

计算机:OpenClaw掀起全民"龙虾热",关注AI Infra...

2026-03-17
东方财富网
Why's our monitor labelling this an incident or hazard?
OpenClaw is clearly an AI system with autonomous capabilities that can perform complex tasks, indicating AI system involvement. The article discusses its use and the resulting increased demand for AI infrastructure, as well as potential risks related to security, legal, and commercial uncertainties. However, no direct or indirect harm has been reported or described as having occurred. The risks mentioned are potential and general rather than tied to a specific event or incident. The article's main focus is on describing the technology, its ecosystem impact, and investment considerations, which aligns with the definition of Complementary Information rather than an AI Incident or AI Hazard.
Thumbnail Image

电子行业周报:OpenClaw引发Agent热潮,推理Token需...

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (OpenClaw and AI Agents) and their growing adoption and impact on the semiconductor industry. However, it does not describe any event where these AI systems caused or could plausibly cause harm. The focus is on market trends, product introductions, and investment opportunities, which align with the definition of Complementary Information. There is no indication of direct or indirect harm, nor credible future harm from the AI systems discussed. Hence, it does not meet the criteria for AI Incident or AI Hazard.
Thumbnail Image

被创始人点名"抄袭"后,腾讯成为OpenClaw社区赞助商

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The article centers on a controversy about data usage and collaboration between Tencent and the OpenClaw community, involving AI skill platforms. While AI systems are involved, there is no evidence or claim of harm or plausible harm resulting from the AI systems' development, use, or malfunction. The event mainly reports on sponsorship, community support, and corporate actions, which fits the definition of Complementary Information. It does not describe any realized or potential harm, nor does it focus on legal or governance responses beyond the sponsorship and public statements. Hence, the classification as Complementary Information is appropriate.
Thumbnail Image

券商圈也被"龙虾"搅动:一边禁止私装,一边研报掘金

2026-03-17
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses OpenClaw, an AI system with autonomous decision-making and system control capabilities, which is causing security incidents such as information leakage and system compromise risks within securities firms. The firms' prohibitions and risk warnings indicate that harms related to data security and operational integrity have occurred or are ongoing. The AI system's development, use, and potential malfunction are central to these harms. The article also discusses research and investment opportunities but these do not negate the presence of realized harms. Hence, the event meets the criteria for an AI Incident due to direct or indirect harm caused by the AI system's use and associated security risks.
Thumbnail Image

"养龙虾",银行业拒绝"跟风"?

2026-03-18
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article centers on the potential security risks and compliance challenges posed by the AI system OpenClaw in the banking sector, highlighting warnings from experts and regulatory bodies. It describes the plausible future harms that could arise if the AI system were deployed in sensitive financial environments, such as unauthorized data access or erroneous transactions. However, no realized harm or incident is described. Therefore, the event qualifies as an AI Hazard because it involves an AI system whose use could plausibly lead to significant harm, but no direct or indirect harm has yet occurred. The article also includes governance and risk mitigation discussions, but these serve to contextualize the hazard rather than constitute complementary information alone.
Thumbnail Image

"养龙虾"热到基金圈,拥抱效率更需警惕数据风险

2026-03-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (OpenClaw AI Agent and AI large language models) being used in fund research and investment decision-making. It describes realized risks such as data leakage and black-box model risks that could cause investment losses, which constitute harm to property (financial assets) and potentially harm to communities (investors). The AI's role is pivotal in these processes, both in generating investment signals and in data handling. Hence, this qualifies as an AI Incident due to the direct or indirect harm caused by AI use in financial investment research and management.
Thumbnail Image

字节内部发布"龙虾"相关安全规范,面向员工推出ByteClaw

2026-03-18
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article focuses on the release of security policies and tools aimed at preventing potential security incidents related to AI tools (OpenClaw). While it discusses risks and preventive measures, there is no indication that any actual harm or incident has occurred. The content is about governance and risk mitigation within an AI-related context, thus it qualifies as Complementary Information rather than an AI Incident or AI Hazard.
Thumbnail Image

如何安全"养虾"?科创公司支招

2026-03-17
科学网
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (OpenClaw AI intelligent agents and their Skills modules) and addresses significant security vulnerabilities and risks that could lead to harms such as data breaches and system compromise. However, the article does not report any actual harm or incident occurring yet; rather, it highlights potential risks and introduces mitigation strategies and security products to prevent such harms. Therefore, this event qualifies as Complementary Information because it provides important context, risk assessment, and governance responses to AI-related security challenges without describing a realized AI Incident or an imminent AI Hazard.
Thumbnail Image

中國AI助理OpenClaw崛起 政府警示資安風險 | yam News

2026-03-18
蕃新聞
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an AI assistant) whose use is rapidly expanding. The government's warning about cybersecurity risks indicates a plausible risk of harm in the future, but no direct or indirect harm has been reported as having occurred. Therefore, this event fits the definition of an AI Hazard, as it concerns a credible potential for harm stemming from the AI system's use or deployment, but no incident has yet materialized.
Thumbnail Image

Personal Computer VS OpenClaw:谁才是未来计算的正确路线? - CNMO科技

2026-03-19
ai.cnmo.com
Why's our monitor labelling this an incident or hazard?
The content centers on comparing two AI systems and their implications for future AI use, including security risks and user experiences, but does not describe any actual harm, malfunction, or misuse that has occurred. It also discusses regulatory and industry responses and user challenges, which are complementary information enhancing understanding of AI ecosystem dynamics. There is no direct or indirect link to an AI Incident or a plausible AI Hazard event in the article. Therefore, it fits the definition of Complementary Information, providing context and insight into AI system development and governance without reporting a new incident or hazard.
Thumbnail Image

从"装机狂欢"到"集体吃灰":OpenClaw为何留不住用户? - CNMO科技

2026-03-18
ai.cnmo.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with autonomous capabilities. The article details multiple harms directly linked to its use: security vulnerabilities exposing user data, risks of financial misoperations causing actual losses, and high costs leading to wasted user resources. These constitute violations of user rights and harm to users and communities. The involvement of the AI system in these harms is clear and direct, stemming from its development flaws, use, and security issues. Therefore, the event qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

字节跳动内部发布OpenClaw安全规范 严禁用于生产环境 - CNMO科技

2026-03-18
ai.cnmo.com
Why's our monitor labelling this an incident or hazard?
The article details the identification of significant security risks associated with an AI system (OpenClaw) and the company's internal measures to prevent harm by restricting its use and introducing a safer alternative tool. Although no specific harm has been reported as having occurred, the presence of major vulnerabilities and the potential for exploitation constitute a credible risk of harm. Therefore, this event qualifies as an AI Hazard because it involves plausible future harm from the AI system's use, and the company's actions aim to mitigate this hazard. It is not an AI Incident since no actual harm has been reported, nor is it merely Complementary Information or Unrelated.
Thumbnail Image

监管部门密集警示风险 "龙虾"自由离我们还有多远? - CNMO科技

2026-03-18
ai.cnmo.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an AI agent) with autonomous capabilities to execute tasks and access system resources. The article explicitly states that multiple official warnings have been issued due to real security incidents and vulnerabilities that have exposed users and systems to harm, including data theft, unauthorized control, and financial risks. These constitute realized harms to property, economic order, and potentially to communities and individuals. The involvement of AI in these harms is direct, as the AI's capabilities and configuration weaknesses are central to the risks and incidents described. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

"养龙虾"热到基金圈!拥抱效率更需警惕数据风险-证券之星

2026-03-17
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (AI Agents and large language models) used in financial investment research. The AI systems are in active use, influencing decision-making and workflows, which could plausibly lead to harms like investment losses due to black-box model errors or data privacy breaches. However, the article does not describe any actual incidents of harm occurring; it mainly discusses the potential risks and the industry's cautious approach to managing them. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to AI Incidents if risks are not managed, but no direct or indirect harm has yet materialized.
Thumbnail Image

云从科技在国家网安基地部署AI智能体训练场-证券之星

2026-03-17
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event describes the establishment of an AI training system in a secure environment, focusing on safe operation and monitoring. It does not report any realized harm or plausible future harm caused by the AI system. The article is informational about AI development infrastructure and does not describe an incident or hazard. Therefore, it fits the category of Complementary Information, providing context on AI ecosystem developments without reporting harm or risk.
Thumbnail Image

OpenClaw爆火,专家谈理想监管边界-证券之星

2026-03-17
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (OpenClaw) with autonomous capabilities that could plausibly lead to harm if misused or improperly managed, especially given its system-level permissions and operational autonomy. However, no direct or indirect harm has been reported or described as having occurred. The discussion centers on potential risks, regulatory considerations, and safety recommendations, which align with the definition of an AI Hazard or Complementary Information. Since the article primarily provides expert analysis, policy context, and guidance rather than reporting a specific event of harm or near-harm, it fits best as Complementary Information, enhancing understanding of AI ecosystem developments and governance responses.
Thumbnail Image

Steinberger用"love a good redemption arc"形容此次合作,并呼吁更多项目支持开源,推动人工智能工具普惠化。

2026-03-15
证券之星
Why's our monitor labelling this an incident or hazard?
The event describes a sponsorship and collaboration to support open-source AI tools, which is a governance and ecosystem development update. There is no mention of any harm, incident, or plausible future harm caused by AI systems. The content is about promoting AI tool accessibility and open-source support, which fits the category of Complementary Information as it provides context and updates on AI ecosystem developments without describing any AI Incident or AI Hazard.
Thumbnail Image

2026-03-17
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly describes OpenClaw as an AI system with autonomous execution capabilities, thus involving an AI system. It discusses the potential security risks and the need for regulatory frameworks to mitigate these risks, indicating plausible future harms (AI Hazard). However, no actual harm or incident is reported. The main focus is on expert analysis, policy responses, and safety recommendations, which fits the definition of Complementary Information rather than a direct AI Incident or AI Hazard. Therefore, the classification as Complementary Information is appropriate.
Thumbnail Image

"养虾"之后,AI Agent如何跃迁?

2026-03-18
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article primarily offers a detailed overview and analysis of AI agent development stages, market dynamics, and governance challenges without reporting any realized harm or imminent risk from AI systems. It discusses potential risks and the need for boundaries and governance but does not describe an actual AI Incident or a specific AI Hazard event. Therefore, it fits the definition of Complementary Information, as it enhances understanding of AI evolution and associated challenges without reporting a new incident or hazard.
Thumbnail Image

OpenClaw"龙虾"能否敲开保险业大门?

2026-03-19
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly describes OpenClaw as an AI system with autonomous execution capabilities and high system privileges, confirming AI system involvement. It details regulatory warnings and security advisories about the potential for attackers to exploit OpenClaw's weak security to cause harm such as data breaches and illegal transaction control. However, there is no mention of actual realized harm or incidents caused by OpenClaw to date. The focus is on the plausible future risks and the need for risk management and regulatory compliance. This fits the definition of an AI Hazard, where the AI system's development and use could plausibly lead to harm, but no direct or indirect harm has yet occurred. The article also provides complementary information about broader AI adoption in insurance but the main focus is on the risk warnings related to OpenClaw. Hence, AI Hazard is the appropriate classification.
Thumbnail Image

监管提示信托公司"小龙虾"潜在风险

2026-03-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw AI agent) and outlines multiple plausible security risks that could lead to harms such as data breaches, unauthorized control, and financial losses. Since no actual harm has occurred yet but the risks are credible and the regulatory body has issued warnings, this qualifies as an AI Hazard rather than an Incident. The event focuses on potential future harms and risk mitigation rather than realized harm or incident response.
Thumbnail Image

龙虾三兄弟"只是开始,AI Agent或许正在"杀死"传统软件?

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article primarily provides an overview of technological developments, market reactions, and strategic shifts related to AI Agents. While it mentions security vulnerabilities that could lead to data breaches, no actual harm or incident has been reported. The discussion of potential security risks and structural changes in software usage constitutes a plausible future risk but not a realized incident. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm (e.g., data leaks, unauthorized control) but no direct harm has occurred according to the article.
Thumbnail Image

OpenClaw"全民养虾"时代,哪家Coding Plan最适合你?国内主流Coding Plan套餐详解_手机网易网

2026-03-16
m.163.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically OpenClaw and large language models used for coding assistance. However, it focuses on describing the market, subscription plans, usage patterns, and regulatory advisories without reporting any direct or indirect harm, malfunction, or credible risk of harm. There is no mention of injury, rights violations, infrastructure disruption, or environmental harm caused or plausibly caused by these AI systems. The regulatory warnings are precautionary and do not describe an event of harm or a near miss. Therefore, the article is best classified as Complementary Information, as it provides detailed context and updates about AI system deployment and ecosystem developments without reporting an AI Incident or AI Hazard.
Thumbnail Image

小小龙虾,能否抗击全球能源危机?-黄凡的财新博客-财新网

2026-03-18
huangfan.blog.caixin.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes OpenClaw as an AI system with advanced autonomous capabilities and discusses its role in the current economic context, particularly in relation to the global energy crisis. While it mentions potential operational security risks and regulatory concerns, these are framed as considerations or challenges rather than actual incidents or hazards causing harm. There is no indication of direct or indirect harm resulting from the AI system's use or malfunction. The discussion centers on the AI system's strategic and economic implications, investment prospects, and the evolving AI ecosystem. This aligns with the definition of Complementary Information, which includes updates and analyses that enhance understanding of AI impacts without reporting new incidents or hazards.
Thumbnail Image

财经风云--金融业的Agent内耗:一场"养虾"渴望与风险的缠斗

2026-03-17
dapenti.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) with autonomous capabilities and high system permissions used in financial institutions. It describes a concrete incident where the AI malfunctioned by deleting over 200 emails uncontrollably, demonstrating direct harm to property (data loss) and operational disruption. Regulatory bodies have issued warnings and bans due to the security risks posed by this AI system, indicating recognized harm and risk. The article also discusses the indirect harms related to potential data breaches and compliance violations. Therefore, the event qualifies as an AI Incident because the AI system's malfunction and use have directly and indirectly led to significant harms in the financial industry.
Thumbnail Image

刚装就卸!AI"小龙虾"你把握得住吗?

2026-03-19
华龙网
Why's our monitor labelling this an incident or hazard?
The article centers on the potential risks and societal concerns related to the use of a high-permission AI system (OpenClaw) but does not describe any actual harm or incidents resulting from its use. The concerns about privacy and data misuse are plausible risks that could lead to harm in the future, but no direct or indirect harm has been reported. The discussion of legal challenges and the need for regulation further supports that this is about potential and ongoing governance responses rather than a specific incident. Therefore, this event is best classified as Complementary Information, as it provides context, survey data, and expert commentary on AI risks and societal responses without describing a concrete AI Incident or Hazard.