Chinese AI Security Firm Leaks SSL Private Key in OpenClaw-Based Product


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Qihoo 360, a major Chinese cybersecurity company, released '360 Security Claw', an AI assistant based on OpenClaw, but mistakenly included a wildcard SSL private key in the installation package. This critical error exposed users to security risks, enabling attackers to intercept and manipulate communications.[AI generated]
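The error described here, a production wildcard SSL private key shipped inside an installer, is the kind of mistake a simple release-hygiene scan can catch. The sketch below is a hypothetical check, not taken from any reported tooling: it walks an unpacked package directory and flags any file containing PEM private-key markers.

```python
import re
from pathlib import Path

# PEM markers that indicate private-key material (plain PKCS#8 plus
# RSA, EC, and encrypted variants).
PEM_KEY_RE = re.compile(rb"-----BEGIN (?:RSA |EC |ENCRYPTED )?PRIVATE KEY-----")

def find_private_keys(root: str) -> list[str]:
    """Return sorted paths under `root` whose contents contain a PEM private key."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            data = path.read_bytes()
        except OSError:
            # Unreadable files (permissions, races) are skipped, not fatal.
            continue
        if PEM_KEY_RE.search(data):
            hits.append(str(path))
    return sorted(hits)
```

A build pipeline could run such a scan over the staged installer contents and fail the release on any hit; dedicated secret scanners cover many more credential formats than this minimal regex.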

Why's our monitor labelling this an incident or hazard?

The article explicitly describes an AI system (OpenClaw) with autonomous capabilities and integration of large language models, which can perform tasks and learn user behavior. The security risks mentioned (host takeover, data theft, misuse of permissions, malicious plugins) represent plausible scenarios where the AI system's malfunction or misuse could lead to harms such as privacy violations and unauthorized control. Since the article focuses on warning about these risks and provides a safety manual to mitigate them, but does not report any realized harm, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks and hazards of the AI system's use, not on responses to past incidents or general AI ecosystem updates.[AI generated]
AI principles
Robustness & digital security; Privacy & data governance

Industries
Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
ICT management and information security

AI system task
Interaction support/chatbots


Articles about this incident or hazard


High risks in agentic AI 'lobster raising'; mainland China rolls out a safety manual | 聯合新聞網

2026-03-17
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (OpenClaw) with autonomous capabilities and integration of large language models, which can perform tasks and learn user behavior. The security risks mentioned (host takeover, data theft, misuse of permissions, malicious plugins) represent plausible scenarios where the AI system's malfunction or misuse could lead to harms such as privacy violations and unauthorized control. Since the article focuses on warning about these risks and provides a safety manual to mitigate them, but does not report any realized harm, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks and hazards of the AI system's use, not on responses to past incidents or general AI ecosystem updates.

Brokerages 'raise lobsters' to produce analysis reports; who is liable for errors? FSC to step in with regulation | 聯合新聞網

2026-03-18
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to produce analysis reports by securities firms, indicating AI system involvement. However, no specific incident of harm or error caused by AI is reported; the discussion centers on potential risks and regulatory management. Therefore, this event represents a plausible future risk scenario (AI Hazard) rather than an actual incident. The regulatory body's proactive approach to updating guidelines and internal controls further supports this classification as a hazard rather than an incident or complementary information.

Mainland China releases a 'Lobster' safe-raising manual, revealing four major risks hidden behind the AI

2026-03-17
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the "Longxia" AI agent) and outlines multiple significant risks that could plausibly lead to AI incidents involving harm to individuals' privacy, property, and societal trust. Since the article focuses on warning users about these potential harms and advising on mitigation measures without reporting actual realized harm, it fits the definition of an AI Hazard. The detailed risk descriptions and the emphasis on plausible future harm from misuse or attack justify classifying this as an AI Hazard rather than an Incident or Complementary Information.

Security concerns! Users begin 'abandoning their lobsters' as paid uninstall services appear in China | NOWnews今日新聞

2026-03-17
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with autonomous capabilities and integration of large language models. The event details direct cybersecurity harms and risks caused by the system's high permissions and autonomous operation, including data theft and system takeover, which qualify as harm to persons or groups (a) and possibly harm to property or communities (d). The issuance of a security manual and emergence of paid uninstall services indicate recognized harm and response. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to realized harms and security incidents.

NVIDIA joins the lobster craze; Ministry of Digital Affairs reveals Taiwan's progress, and a compute center receives an investment application from Hon Hai | NOWnews今日新聞

2026-03-18
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article centers on the development and deployment of AI agent technologies and the associated cybersecurity risks, but it does not describe any actual harm or incident caused by AI systems. It mainly reports on government responses, regulatory considerations, and investment in AI infrastructure, which fits the definition of Complementary Information. There is no direct or indirect harm realized, nor a specific event indicating plausible future harm beyond general risk concerns. Therefore, the event is best classified as Complementary Information.

'Lobster raising' security concerns draw official attention

2026-03-18
UDN
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential cybersecurity risks and governance challenges posed by AI systems, with officials expressing intent to issue guidance and consider regulatory changes. There is no mention of any actual harm, malfunction, or misuse of AI systems causing injury, rights violations, or other harms. Therefore, this is a discussion of plausible future risks and governance responses, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Legislator reveals that analysts are also 'raising lobsters' and urges the FSC to draft a safety manual; Peng Jin-lung: as soon as possible | NOWnews今日新聞

2026-03-18
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw AI assistant) being used in financial trading contexts, with concerns about errors causing unintended trades (e.g., buying more shares than intended). While no actual harm has been reported, the discussion centers on the potential risks and the need for safety guidelines to prevent incidents. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (financial losses) if misused or malfunctioning. The article focuses on risk awareness and regulatory response rather than describing a realized incident or harm, so it is not an AI Incident or Complementary Information.

Tech circles go wild for lobster raising and NVIDIA launches NemoClaw! China's Ministry of State Security issues a 'safe raising manual' | NOWnews今日新聞

2026-03-17
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('OpenClaw' AI assistant) whose use has directly caused harms such as privacy leaks and accidental data deletion, fulfilling the criteria for an AI Incident. The involvement of the AI system is explicit, and the harms include violations of privacy and potential security breaches. The official issuance of a security manual by the Chinese National Security Department further confirms the recognition of these harms. Therefore, this is not merely a potential hazard or complementary information but a realized AI Incident.

OpenClaw sparks a 'lobster-raising craze'! What are its pros and cons, and how does it differ from ChatGPT? | NOWnews今日新聞

2026-03-17
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI agent system capable of autonomous operation and control over computer systems. The article details realized harms related to cybersecurity vulnerabilities (e.g., exposure to hackers, malicious plugins causing key leakage), which constitute harm to property and user security. These harms arise from the AI system's use and deployment. The article also discusses the high technical barrier and risks of misuse, reinforcing the direct link between the AI system and harm. Hence, this qualifies as an AI Incident due to direct harm caused by the AI system's use and associated security breaches.

FSC watches the 'lobster raising' trend and requires the financial industry to strengthen security controls - 自由財經

2026-03-18
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) used in financial operations that can autonomously execute commands. While no actual harm or incident has been reported, the concerns about possible errors leading to financial risks and the FSC's proactive stance to strengthen controls indicate a plausible risk of harm. Therefore, this situation fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to harm in the financial sector if not properly managed. There is no indication of realized harm or incident, nor is the article primarily about responses or broader AI ecosystem developments, so it is not an AI Incident or Complementary Information.

Jensen Huang bets on AI agent applications, calls OpenClaw the 'modern computer' | 中央社 CNA

2026-03-19
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article focuses on the promotion and discussion of AI tools and trends, particularly OpenClaw and AI agents, without describing any event where AI caused harm or posed a credible risk of harm. There is no indication of injury, rights violations, infrastructure disruption, or other harms. The security concerns mentioned are not detailed as incidents or hazards. Therefore, the article is best classified as Complementary Information, providing context and updates on AI ecosystem developments rather than reporting an AI Incident or Hazard.

A 'lobster raising' safety manual for the financial industry? FSC: internal research already underway | 中央社 CNA

2026-03-18
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by AI systems but focuses on the regulatory body's ongoing research and consideration of safety measures to address potential risks from AI agent use in finance. This fits the definition of Complementary Information, as it provides updates on governance responses and risk management related to AI without describing a specific AI Incident or AI Hazard.

FSC concerned about lobster raising, drafting a safety manual

2026-03-18
工商時報
Why's our monitor labelling this an incident or hazard?
The AI system (OpenClaw AI agent) is explicitly mentioned and is used in financial operations with high system privileges. The article reports concrete incidents of harm (credit card theft, data leaks, file deletion) directly linked to the AI agent's operation or misuse, constituting injury to property and harm to organizations. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms. The regulatory response and safety manual development are complementary but secondary to the primary incident of harm.

Lobster-raising security concerns; Lin Yi-jing: balancing AI regulation and development

2026-03-18
工商時報
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by AI systems but rather focuses on potential risks and regulatory responses related to Agentic AI. It discusses plausible future harms such as cybersecurity threats and financial fraud but does not describe any specific event where such harm has occurred. Therefore, it qualifies as an AI Hazard due to the credible risk of harm from AI agent misuse and the ongoing development of mitigation technologies and governance frameworks.

Ministry of State Security releases a 'Lobster' safe-raising manual

2026-03-17
AAStocks.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and discusses risks related to its use that could lead to harms such as data loss, privacy breaches, misinformation, and fraud. However, the event itself is about issuing a safety manual and guidelines to prevent these harms rather than describing an actual incident or realized harm. Therefore, it does not report an AI Incident or an immediate AI Hazard but rather provides complementary information aimed at risk mitigation and safe use of the AI system.

Zhou Hongyi: launching a nationwide installation campaign for 360 Security Lobster

2026-03-17
AAStocks.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI Lobster) and its deployment, but there is no indication of any harm or incident caused by the AI system. The focus is on the rollout plan and the potential of AI assistants, with some acknowledgment of challenges but no direct or indirect harm reported. Therefore, this is not an AI Incident or AI Hazard. It is not unrelated because it concerns AI deployment and its ecosystem. The article mainly provides information about the AI system's deployment and future vision, which fits the definition of Complementary Information.

OpenClaw falls from grace! A mass uninstall wave emerges; what happened? | 天下雜誌

2026-03-17
天下雜誌
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system (an AI agent) capable of autonomous operation on user devices and cloud environments, interacting with personal data and external services. The article reports multiple harms including privacy violations, data theft, unauthorized control, and security breaches caused by misuse, vulnerabilities, or malfunction of OpenClaw. Official warnings and restrictions by government bodies further confirm the recognition of these harms. Therefore, the event qualifies as an AI Incident due to realized harm to privacy and security (violations of rights and harm to communities).

What is NemoClaw? Understanding Jensen Huang's 'lobster' ambitions! | 天下雜誌

2026-03-19
天下雜誌
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (NemoClaw AI agents) and discusses its development, deployment, and security features. While it mentions a user complaint about the AI assistant sending unintended messages, this is presented as a concern or challenge rather than a documented incident causing harm. No direct or indirect harm to persons, property, rights, or communities is reported. The focus is on the ecosystem, technical innovation, and enterprise adoption, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

An Austrian software ace's stroke of genius sparks a thousand-person installation rush at Tencent and a lobster-raising craze! Autonomous AI reshapes business productivity - 今周刊

2026-03-18
businesstoday.com.tw
Why's our monitor labelling this an incident or hazard?
While the article clearly involves an AI system (OpenClaw) with autonomous capabilities, it does not describe any realized harm or incident resulting from its use or malfunction. The mention of a "compute vacuum" and structural resource gaps points to potential challenges but does not constitute a direct or plausible harm event. Therefore, this is not an AI Incident or AI Hazard. The article primarily provides contextual information about a significant AI development and its adoption, which fits the definition of Complementary Information as it enhances understanding of the AI ecosystem and its evolution without reporting harm or imminent risk.

'Lobster raising' hits financial security; FSC plans to strengthen AI guidelines | 大紀元

2026-03-18
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI agents like OpenClaw) whose use in finance could plausibly lead to harms such as financial losses or data breaches. The article centers on the potential risks and regulatory measures to mitigate them, without describing any realized harm or incident. Therefore, this qualifies as an AI Hazard, as the AI systems' development and use could plausibly lead to incidents, but no direct or indirect harm has yet occurred according to the article.

[AI] Ministry of State Security releases a raising manual to turn 'lobsters' into rule-abiding 'digital employees'

2026-03-17
ET Net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) and discusses potential risks and recommended safety practices to prevent harm. Since no actual harm or incident has occurred but there is a credible risk of future harm if misused, this qualifies as an AI Hazard. The article serves as a warning and guidance to avoid possible AI-related harms but does not describe a realized AI Incident or a complementary information update about a past incident.

[AI] QiAnXin launches 'Lobster Security Companion' to help government and enterprise users use lobsters safely

2026-03-17
ET Net
Why's our monitor labelling this an incident or hazard?
The article focuses on the launch of a security solution and usage guidelines designed to help users safely manage AI systems (OpenClaw intelligent agents). There is no indication that any harm or incident has occurred yet, nor that a specific AI hazard event has materialized. Instead, it is about risk mitigation and best practices, which fits the definition of Complementary Information as it provides updates and governance responses to AI-related risks.

The 'lobster' economy | am730

2026-03-17
am730
Why's our monitor labelling this an incident or hazard?
The article explicitly references an AI system (OpenClaw AI Agent platform) and its rapid adoption, which is relevant to AI system involvement. However, it does not describe any realized harm or direct/indirect incidents caused by the AI system. Instead, it highlights potential safety concerns and regulatory warnings, which are noted but not linked to any actual harm. The economic impact and societal changes are described positively without evidence of harm. Thus, the article fits the definition of Complementary Information, as it provides supporting context and governance-related updates rather than reporting an AI Incident or Hazard.

OpenClaw goes viral; Ministry of State Security issues the 'Lobster Safe-Raising Manual' | am730

2026-03-18
am730
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) with autonomous capabilities and large language model integration. The National Security Department's issuance of a safety manual highlights recognized risks and potential harms related to the AI system's use, including data breaches and unauthorized control, which could plausibly lead to harm. However, the article does not report any actual harm or incident caused by the AI system so far; it focuses on warnings and recommended precautions to prevent such harms. Therefore, this is an AI Hazard, as the AI system's use could plausibly lead to incidents if risks are not managed, but no realized harm is described.

[Guest Interview] How long can the 'lobster' stay hot? Peeling the 'lobster' to examine the economy | OpenClaw | 新唐人电视台

2026-03-18
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The article centers on the emergence and popularity of an AI software and the surrounding economic and social discourse. It does not describe any direct or indirect harm caused by the AI system, nor does it report any specific event where harm has occurred or is imminent. The discussion of risks and potential bubble is speculative and does not constitute a concrete AI Hazard. Therefore, the article is best classified as Complementary Information, providing context and analysis about the AI system and its ecosystem without reporting an AI Incident or AI Hazard.

Is your 'lobster' 'following the rules'? 'Lobster raisers' should check the risks now - 香港文匯網

2026-03-17
香港文匯網
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose use and potential misuse can lead to significant harms including privacy violations, unauthorized control of devices, and dissemination of false information, which are harms to individuals and communities. Although no specific harm incident is reported as having occurred, the article clearly outlines plausible risks and hazards stemming from the AI system's capabilities and vulnerabilities. Therefore, this qualifies as an AI Hazard because it plausibly could lead to AI Incidents if the risks materialize. The article mainly serves as a warning and guidance to users about these potential harms rather than reporting an actual incident or a governance response.

Amid the 'lobster raising' craze, how do we hold the security line? - 香港文匯網

2026-03-18
香港文匯網
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with autonomous operational capabilities. The article details multiple security vulnerabilities and incidents of data leakage risks, which constitute harm to individuals' privacy and potentially to enterprises' business secrets, aligning with violations of rights and harm to communities. The involvement of the AI system's use and its security flaws directly lead to these harms. The official risk warnings and legal analyses further confirm the realized nature of these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Exclusive: Chinese cybersecurity giant mocked for a 'brain-dead' blunder; its 'Security Lobster' leaks the private key of the company's own website

2026-03-18
Yahoo!奇摩股市
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI assistant "360 Security Claw" based on OpenClaw AI agent) whose development and deployment included a critical security error: leaking a wildcard SSL private key used in production. This leak directly enables attackers to intercept and manipulate user communications, leading to harm to users' security and privacy, which falls under harm to persons and communities. The AI system's malfunction (poor security controls in development and release) directly caused this harm. Therefore, this is an AI Incident rather than a hazard or complementary information. The harm is realized, not just potential, and the AI system's role is pivotal as the leaked key is tied to the AI product's infrastructure.

Scholar's view: everyone raising 'lobsters' in mainland China's AI industry frenzy

2026-03-18
Yahoo!奇摩股市
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (OpenClaw framework and action-oriented AI) and discusses their development and use. It identifies potential risks including cybersecurity threats and data privacy issues that could plausibly lead to harm if not addressed. However, no direct or indirect harm has been reported as having occurred yet. The focus is on the emerging AI ecosystem, its challenges, and the need for policy and security responses. This fits the definition of an AI Hazard, as it describes circumstances where AI system deployment could plausibly lead to incidents, but no incident has yet materialized.

A lobster-raising safety manual for the financial industry? FSC: internal research already underway | 經濟日報

2026-03-18
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI agents used in financial trading and management tasks). However, it does not describe any realized harm or incident caused by these AI systems. Instead, it reports on regulatory authorities' internal research and consideration of how to address potential risks and ensure safety. This fits the definition of Complementary Information, as it provides updates on governance responses and ongoing assessment of AI use in finance without reporting a specific AI Incident or AI Hazard.

Earning 100,000 a day installing 'AI lobsters' for others? OpenClaw sweeps mainland China; is Taiwan ready? | 經濟日報

2026-03-17
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) that autonomously controls computer functions, fitting the definition of an AI system. It reports realized harms such as cybersecurity incidents where malicious actors exploit the AI system's capabilities or its ecosystem to steal bank accounts, personal privacy, and corporate secrets, which constitute harm to persons and property. These harms have already occurred, not just potential risks, thus qualifying as an AI Incident. The article also discusses societal and policy responses, but the primary focus is on the realized harms caused by the AI system's use and misuse.

Mainland authorities issue a 'lobster raising' safety manual warning against four major risks | 經濟日報

2026-03-17
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (OpenClaw) and its autonomous capabilities, including integration with large language models and remote execution of user commands. The security manual warns about multiple plausible risks that could lead to incidents such as data breaches, unauthorized control, or misuse of the AI system. However, the article does not report any realized harm or incident resulting from the AI system's use or malfunction. Therefore, this event fits the definition of an AI Hazard, as it concerns credible potential harms from the AI system's use and development but does not describe an actual AI Incident or complementary information about responses to past incidents.

Jensen Huang praises OpenClaw: the 'lobster' is the next ChatGPT | 經濟日報

2026-03-18
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article primarily provides an overview and endorsement of OpenClaw's capabilities and potential impact, including a caution about security and compliance risks. There is no description of realized harm or incidents caused by the AI system, only a mention of plausible risks if mismanaged. Therefore, this qualifies as Complementary Information, as it offers context, expert opinion, and discussion of potential risks without reporting a specific AI Incident or AI Hazard event.

OpenClaw goes viral; mainland Ministry of State Security releases a 'Lobster' safe-raising manual | 經濟日報

2026-03-17
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) with autonomous capabilities and potential security vulnerabilities. The National Security Department's safety manual explicitly warns about plausible risks that could lead to harm, such as data breaches and unauthorized control, but no actual harm or incident is reported. Therefore, this constitutes an AI Hazard, as the development and use of the AI system could plausibly lead to incidents involving harm if the risks are realized. The article is not merely general AI news or a product announcement, but a formal warning about potential risks, which fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Brokerages 'raise lobsters' to write investment reports and the FSC is watching closely! A safety manual is coming to shut down the risks | ETtoday新聞雲

2026-03-18
ETtoday財經雲
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw AI assistant) used in financial services, with concerns about its use potentially leading to harm such as incorrect investment decisions. The regulatory body is studying the issue and planning safety measures. Since no actual harm has occurred but there is a plausible risk of harm from the AI system's use, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential risks are central to the report.

OpenClaw | Ministry of State Security issues 'Lobster' safe-raising manual: follow the principle of least privilege

2026-03-17
香港經濟日報 hket.com
Why's our monitor labelling this an incident or hazard?
The content involves an AI system (OpenClaw) and discusses risks and security practices to prevent harm, which aligns with the concept of an AI Hazard or Complementary Information. Since no actual harm or incident is reported, and the main focus is on guidelines and risk mitigation rather than a specific event of harm or near-harm, the article is best classified as Complementary Information. It provides important context and governance-related advice to manage AI risks but does not describe an AI Incident or an immediate AI Hazard event.

AI's new darling: the lobster-raising trend, a watershed for AI development and a major productivity reshuffle

2026-03-17
wealth.hket.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (AI agents) and their capabilities, confirming AI system involvement. However, it does not describe any event where the AI's development, use, or malfunction has directly or indirectly caused harm (physical, legal, social, or environmental). Nor does it describe a plausible risk of harm occurring imminently. Instead, it provides an overview of AI's evolving role and potential societal impact, along with advice on safe and responsible use. This fits the definition of Complementary Information, as it enhances understanding of AI's ecosystem and implications without reporting a new incident or hazard.

十八彎 / A springtime when everyone 'raises lobsters' \ 關爾 - 大公文匯網

2026-03-16
大公报
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (the "Lobster" AI agent) and discusses its use and societal implications, including privacy concerns and identity confusion. However, it does not describe any realized harm or direct/indirect incident resulting from the AI's development, use, or malfunction. Nor does it specify a credible imminent risk of harm. The content is more of a cultural and philosophical reflection on AI's role and future, without concrete evidence of harm or a specific hazard event. Therefore, it fits best as Complementary Information, providing context and societal perspective rather than reporting an AI Incident or AI Hazard.

Is your "lobster" "following the rules"? "Lobster raisers" should check the risks now

2026-03-17
big5.cctv.com
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by the AI system but outlines multiple credible risks and potential harms that could plausibly occur if the AI system is misused, compromised, or malfunctions. These include privacy breaches, unauthorized control of devices, misinformation spread, and data loss. Therefore, the event fits the definition of an AI Hazard, as it describes circumstances where the AI system's development and use could plausibly lead to harm, but no actual harm has yet occurred or been reported.

Is your "lobster" "following the rules"? "Lobster raisers" should check the risks now

2026-03-17
big5.cctv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system ('OpenClaw'/'Lobster') with autonomous capabilities and high privileges, which can be reasonably inferred as an AI system per the definitions. It discusses the potential for harm through misuse or malfunction, including data loss, privacy breaches, unauthorized control, and misinformation dissemination. No actual harm is reported, but the credible risks described meet the criteria for an AI Hazard, as these risks could plausibly lead to AI Incidents if realized. The article also provides guidance to users to mitigate these risks, but the main focus is on the potential hazards rather than a realized incident or a governance response. Hence, the classification is AI Hazard.

The AI agent "Lobster" breaks into the mainstream; six "lobster" operations to watch out for!

2026-03-16
big5.cctv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, the "Lobster" AI agent, which is deployed and used by various users including government entities and the public. It describes concrete harms such as personal information leakage, financial losses, and security vulnerabilities caused by malicious or fake versions of the AI system and its skills. These harms fall under injury to persons (financial harm), violations of rights (privacy breaches), and harm to communities (security risks). The article also discusses the system's development and deployment, as well as misuse risks. Since actual harms have occurred or are ongoing, and the AI system's role is pivotal, this is classified as an AI Incident rather than a hazard or complementary information.

What is NVIDIA's 'NemoClaw'? A complete guide to its differences from OpenClaw, its advantages, and installation | TVBS新聞網

2026-03-17
TVBS
Why's our monitor labelling this an incident or hazard?
The content is primarily an informative overview and tutorial about a new AI system (NemoClaw) and its improvements over a predecessor (OpenClaw). It discusses potential security risks in OpenClaw and how NemoClaw mitigates them, but no actual harm or incident is reported. There is no indication of realized harm or a credible imminent risk of harm from the use or malfunction of these AI systems. The article also includes installation instructions and FAQs, which are typical of complementary information. Therefore, the article fits the category of Complementary Information, providing context and updates about AI system development and governance without describing an AI Incident or AI Hazard.

Ministry of State Security releases "Lobster" safe-farming manual, urges checks for hidden risks

2026-03-17
on.cc東網
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw, an open-source AI intelligent agent) whose use and potential misuse can lead to significant harms such as privacy violations, misinformation dissemination, fraud, and unauthorized control of devices. Although no specific harm has been reported as having occurred yet, the warnings and risk descriptions indicate plausible future harms directly linked to the AI system's development and use. Therefore, this event qualifies as an AI Hazard because it highlights credible risks that could plausibly lead to AI Incidents if not properly managed.

[Hot Topic] Wear your gloves when "raising lobsters"

2026-03-18
hkcd.com
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous task execution capabilities, fitting the definition of an AI system. The article reports realized harms caused by its insecure default settings and improper user configurations, such as credit card theft and system compromise, which are direct harms to users and potentially critical infrastructure. The involvement of the AI system's use and malfunction in causing these harms meets the criteria for an AI Incident. The article also includes risk advisories and mitigation recommendations, but the primary focus is on the realized harms and security incidents linked to the AI system's deployment and use.

Ministry of State Security issues the "Lobster" Safe-Farming Manual

2026-03-17
hkcd.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (OpenClaw) with autonomous operational capabilities and high privileges, which could lead to serious harms including privacy violations, security breaches, and misinformation. Although no actual harm has been reported yet, the detailed warnings and risk scenarios demonstrate credible potential for AI-related incidents. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to AI Incidents if the risks materialize, and the main focus is on raising awareness and providing safety guidance to prevent such harms.

China's Ministry of State Security releases "Lobster" safe-farming manual

2026-03-17
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system ('OpenClaw'/'Lobster') with autonomous capabilities and high system privileges, which can perform tasks including remote execution and proactive actions. The Ministry of State Security's publication of a safety manual focusing on potential risks such as unauthorized control, data theft, misinformation, and exploitation of vulnerabilities indicates credible and plausible future harms. No actual harm or incident is described as having occurred yet, but the warnings and risk factors meet the criteria for an AI Hazard. The event is not a report of an incident, nor is it merely complementary information or unrelated news, but a clear advisory about plausible risks from the AI system's use and misuse.

China's Ministry of State Security releases "Lobster" safe-farming manual

2026-03-17
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and details multiple harms linked to its use, including data loss, privacy breaches, unauthorized control of devices, and misinformation spread. These harms correspond to injury to property, harm to communities, and violations of rights. The presence of thousands of vulnerable instances worldwide and the description of actual risks and incidents indicate that harm is occurring or has occurred, not just potential harm. Hence, this is an AI Incident rather than a hazard or complementary information.

"Lobster farming" hits financial cybersecurity; FSC plans to strengthen AI guidelines

2026-03-18
大紀元時報 - 台灣(The Epoch Times - Taiwan)
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system with autonomous capabilities that can directly operate computer tasks, including executing trading orders. The article raises concerns about potential errors (e.g., placing multiple unintended stock orders) and data security breaches, which could cause harm to financial institutions and their clients. The regulatory response to strengthen guidelines indicates recognition of these plausible risks. Since no actual harm is reported but the risks are credible and significant, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Tech industry goes wild for lobster farming! Legislators worry about latent risks as the FSC studies application guidelines | yam News

2026-03-18
蕃新聞
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose use in finance could plausibly lead to harms such as transaction errors or misleading investment reports, which could cause financial or legal harm. However, the article only describes concerns and ongoing research without any realized harm or incident. Therefore, it fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but no harm has yet occurred.

NVIDIA's lobster "NemoClaw" adds security to AI agents

2026-03-18
公共電視
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and its use as an AI agent executing tasks. It discusses potential cybersecurity and operational risks, especially in financial and governmental contexts, implying plausible future harm if not properly managed. However, no direct or indirect harm has occurred yet, and the focus is on risk awareness, regulatory monitoring, and safety enhancements. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information, as it centers on plausible future harm rather than realized harm or responses to past incidents.

AI agent "Lobster" goes viral; Ministry of Digital Affairs: guard against cybersecurity risks

2026-03-18
公共電視
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) with autonomous task execution capabilities, which inherently carries cybersecurity and financial risks. The article emphasizes the plausible future harms from misuse or malfunction of such AI agents, such as unauthorized financial transactions and security vulnerabilities. It also details governmental awareness and preparatory measures to mitigate these risks. Since no actual harm has occurred yet but credible risks are identified, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

This one is poisonous! Jensen Huang joins the crawfish universe as KPMG warns AIs "may do stupid things collectively" | Finance | SETN.COM

2026-03-17
三立新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI agents) and their use, focusing on the risks and potential harms that could arise from their deployment and misuse. Although no concrete incident of harm is reported, the described vulnerabilities and hacker activities indicate a credible risk of harm such as data theft, financial loss, and privacy violations. The warnings and expert advice about these risks align with the definition of an AI Hazard, as the AI systems' development and use could plausibly lead to an AI Incident. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information since it focuses on the risks and potential harms rather than updates or responses to past incidents.

Exclusive: China's "lobster farming" goes wrong as a multi-billion cybersecurity giant's "brain-dead" blunder becomes an international laughingstock - FTV News

2026-03-18
民視新聞網
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI assistant "360 Security Claw" based on OpenClaw AI agent) whose deployment included a critical security flaw (exposed wildcard SSL private key). This flaw directly enables attackers to intercept and manipulate user communications, leading to harm such as privacy violations and potential breaches of user rights. The harm is realized and ongoing, as users remain exposed despite mitigation efforts. The AI system's use and deployment are central to the incident, fulfilling the criteria for an AI Incident under the OECD framework.
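The severity of shipping a wildcard private key, as described in this entry, can be illustrated with textbook RSA arithmetic. The sketch below uses well-known toy numbers, not real TLS parameters; in practice the leaked key would let an attacker impersonate or decrypt traffic for every host the wildcard certificate covers.

```python
# Toy RSA illustration (textbook numbers, not real TLS): anyone holding the
# certificate's private exponent d can recover values encrypted to the
# public key (n, e) -- which is why bundling a wildcard private key in an
# installer is a critical flaw.
n, e = 3233, 17          # public key, as published in the certificate
d = 2753                 # private exponent leaked in the install package

session_secret = 65                      # stand-in for a TLS session value
ciphertext = pow(session_secret, e, n)   # what a client would send
recovered = pow(ciphertext, d, n)        # what the leaked key reveals

assert recovered == session_secret       # interception succeeds
```

Real TLS uses far larger keys and layered key exchange, but the asymmetry is the same: the private half must never leave the server operator's control.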

Risks lurk behind the "lobster farming" craze: China's Ministry of State Security releases an OpenClaw safe-farming manual | ETtoday AI Tech | ETtoday News Cloud

2026-03-17
ETtoday AI科技
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose development and use could plausibly lead to significant harms such as privacy violations, financial loss, unauthorized system control, and misinformation spread. However, the article does not report any realized harm or incident but rather warns about potential risks and advises on mitigation strategies. Therefore, this qualifies as an AI Hazard because it describes credible future risks stemming from the AI system's capabilities and vulnerabilities, without evidence of actual harm occurring yet.

Tech circles go wild for "lobster" farming; KPMG: beware a group of smart AIs doing stupid things together | ETtoday Finance Cloud | ETtoday News Cloud

2026-03-16
ETtoday財經雲
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI agents) whose use has directly led to harms such as financial loss (excessive billing), cybersecurity breaches (theft of API keys, remote control of computers), and potential data theft and espionage. These harms fall under categories of harm to property, communities, or environment (d) and violations of rights (c). The discussion of actual incidents of AI agents causing harm and the risks of their misuse or malfunction qualifies this as an AI Incident. The article also includes complementary information about governance and mitigation but the primary focus is on realized harms and risks from AI agent deployment.

Tech circles go wild for "lobster farming"! Four major risks to watch with agentic AI; the biggest fear is "smart AIs doing stupid things together" | NextApple News

2026-03-17
壹蘋新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI agents) that autonomously perform tasks and interact with enterprise systems. It describes realized cybersecurity harms such as data theft, unauthorized system control, and financial losses due to misuse of AI agents. These harms fall under violations of rights and harm to property and communities. The risks are not hypothetical but are already occurring or have occurred, such as hackers stealing API keys and controlling devices. Therefore, this event qualifies as an AI Incident because the development and use of AI agents have directly or indirectly led to significant harms, including security breaches and potential operational disruptions. The article also discusses governance and mitigation but the primary focus is on the harms and risks already manifesting.

Breaking down the OpenClaw and AI agent craze: from "raising lobsters" to "uninstalling lobsters"

2026-03-17
yahoo-news.com.hk
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system explicitly described as an AI agent that autonomously operates computer systems and interacts with AI models. The article details multiple harms that have already occurred, including sensitive data theft, unauthorized system control, and accidental data deletion, which constitute violations of privacy and security (human rights and harm to property). The involvement of AI in these harms is direct, as the AI agent's capabilities and vulnerabilities are central to the incidents. Government warnings and restrictions further confirm the recognition of these harms. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

NVIDIA GTC 3/NVIDIA moves into space data centers; Jensen Huang promotes lobster agent platform

2026-03-17
mnews.tw
Why's our monitor labelling this an incident or hazard?
The article discusses the development and promotion of AI agent technology and space computing infrastructure by NVIDIA, highlighting new tools and future ambitions. However, it does not report any actual harm, malfunction, or misuse of AI systems leading to injury, rights violations, or other harms. The mention of security frameworks and radiation certification indicates proactive measures rather than incidents or hazards. The content is primarily informative about AI ecosystem evolution and corporate initiatives, fitting the definition of Complementary Information rather than an Incident or Hazard.

Must-learn! Four big security traps in "lobster farming"; Ministry of State Security teaches three OpenClaw safeguards

2026-03-17
ezone.hk 即時科技生活
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) and discusses its use and associated security risks that could plausibly lead to harms such as data loss, privacy breaches, misinformation, and financial damage. Since the harms are potential and preventive measures are recommended to avoid these harms, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the risks and prevention of harm, not on responses to past incidents or general AI ecosystem updates. Therefore, the classification is AI Hazard.

Jensen Huang calls OpenClaw "the perfect embodiment of the modern computer"; open-source model applications help industry benefit | AI

2026-03-19
明報新聞網 - 即時新聞 instant news
Why's our monitor labelling this an incident or hazard?
The article primarily provides an overview of AI developments, industry perspectives, and Nvidia's response to security concerns related to OpenClaw. It does not report any realized harm or incidents caused by the AI system, nor does it describe a specific event where harm occurred or was narrowly avoided. Instead, it focuses on the potential and governance measures for AI agents, making it complementary information that supports understanding of AI ecosystem developments and responses.

NVIDIA GTC sets up a lobster bar; Jensen Huang praises OpenClaw as "the perfect embodiment of the modern computer" | TVBS News

2026-03-19
TVBS
Why's our monitor labelling this an incident or hazard?
The article focuses on the positive reception and strategic importance of OpenClaw and related AI technologies at NVIDIA's GTC event, including discussions on AI agent capabilities and security principles. While it mentions security concerns, it does not detail any realized harm or specific AI incidents. The content is primarily informative about AI ecosystem developments and industry perspectives, without describing an AI incident or hazard. Therefore, it fits the definition of Complementary Information, providing supporting context and updates about AI systems and their ecosystem rather than reporting a new incident or hazard.

Taiwan says AI agent tools are set to join the civil service, but short-term risks need careful assessment

2026-03-19
on.cc東網
Why's our monitor labelling this an incident or hazard?
The article primarily addresses the potential risks and regulatory responses related to AI agent tools, without describing any actual harm or incident caused by AI systems. It emphasizes the need for balanced governance and risk assessment but does not report any direct or indirect harm resulting from AI use. Therefore, the event qualifies as an AI Hazard because it plausibly could lead to harm if risks are not managed, but no harm has yet occurred. It is not Complementary Information since it is not updating or responding to a past incident, nor is it unrelated as it clearly involves AI systems and their governance.

China goes wild for OpenClaw "lobster farming"; second-hand Mac prices soar as buyers rush to upgrade | International News | Global | NOWnews

2026-03-19
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that autonomously executes tasks on personal computers. The article mentions security risks that could plausibly lead to harm, such as unauthorized data access and hacking, but does not describe any realized incidents of harm or breaches. The increased demand for second-hand Macs is an economic effect rather than a harm. Therefore, the event describes a plausible risk of harm from AI use but no actual harm has occurred yet, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Finance Traffic Light: China's savage "lobster" - Liberty Times Finance

2026-03-19
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (OpenClaw AI agent framework and Tencent's SkillHub plugin platform) and their development and use. Tencent's scraping and copying of open-source AI content without permission is a misuse of AI development resources and intellectual property. However, the article does not describe any direct or indirect harm such as injury, rights violations, or disruption caused by this action. The harm is primarily ethical and competitive, not a legally recognized or materialized AI Incident. Nor does it describe a plausible future harm scenario beyond the general critique of industry practices. Therefore, this is best classified as Complementary Information, providing context and critique about AI industry behavior and challenges rather than reporting a specific AI Incident or Hazard.

Lobster farming burns too much money! A lobster-culling wave erupts? Why Jensen Huang says every company should raise one

2026-03-19
工商時報
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (OpenClaw AI agent) whose use has led to direct financial harm to users through excessive token consumption and runaway loops causing unexpectedly high bills. The harm is realized and linked to the AI system's malfunction or misuse (lack of proper control mechanisms). The article also discusses the broader implications and strategic importance of the AI system but the core event is the financial harm caused by the AI agent's operation. This fits the definition of an AI Incident as the AI system's use has directly led to harm (financial loss) to individuals and organizations. It is not merely a potential risk or complementary information but a concrete incident of harm.
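The runaway-loop billing harm described in this entry is the kind of failure a hard spending cap prevents. A minimal sketch of such a circuit breaker (all names hypothetical; not OpenClaw's actual control mechanism):

```python
class TokenBudget:
    """Hypothetical circuit breaker: halt an agent loop once a spend cap is hit."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, tokens: int) -> None:
        """Record spend; raise once the cumulative total exceeds the cap."""
        self.used += tokens
        if self.used > self.max_tokens:
            raise RuntimeError(
                f"token budget exceeded: {self.used}/{self.max_tokens}"
            )

budget = TokenBudget(max_tokens=10_000)
spent = []
try:
    while True:                 # stands in for an unbounded agent loop
        budget.charge(3_000)    # each simulated model call costs 3k tokens
        spent.append(budget.used)
except RuntimeError:
    pass                        # loop halted instead of billing indefinitely
```

Here the loop is stopped on the fourth simulated call; without the cap it would keep consuming tokens, which is exactly the "unexpectedly high bill" failure mode the article describes.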

Co-creation and Sharing: Three lessons of the "lobster fever" for entrepreneurs / 戈峻 - Ta Kung Wen Wei

2026-03-19
大公报
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose use has directly led to realized harms including information security breaches, unauthorized remote control of devices, financial losses, and data destruction. These harms fall under harm to property, communities, or environment (d) and possibly breach of obligations under applicable law (c). The AI system's malfunction or misuse has caused these harms, making this an AI Incident. The article does not merely discuss potential risks or responses but reports actual harms and security incidents linked to the AI system's use.

AI lobsters are easy to install but hard to raise; grant permissions cautiously (林國誠) - EJ Tech

2026-03-20
EJ Tech
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI agents like OpenClaw) and discusses their use and potential misuse. It does not report any realized harm but warns about plausible future harms including security vulnerabilities (e.g., ClawJacked exploit), privacy breaches, operational disruptions, and financial losses. These risks stem from the AI system's use and potential malfunction or misconfiguration. Since the harms are potential and not yet realized, the event fits the definition of an AI Hazard rather than an AI Incident. The article also provides recommendations for risk mitigation, but its main focus is on the plausible risks rather than responses to past incidents, so it is not Complementary Information.

China sees a "lobster abandonment wave"! Installing OpenClaw made money; now uninstalling it is a hot business too?

2026-03-20
數位時代
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system enabling autonomous AI agents. The article reports actual harms resulting from its use, including data leakage risks, deletion of important emails, and security vulnerabilities, which have prompted government warnings and restrictions. The harms are realized and significant, involving information security breaches and potential loss of data, which align with harm to property and violation of rights. The event involves the use and malfunction of the AI system leading to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Overlooking Northeast Asia: Meta executive hit! "AI assistant" goes rogue and mass-deletes emails, exposing OpenClaw risks | TVBS News

2026-03-20
TVBS
Why's our monitor labelling this an incident or hazard?
OpenClaw is explicitly described as an AI system with autonomous execution capabilities. The incident involving the Meta executive's emails being deleted is a direct harm caused by the AI's malfunction and misjudgment, despite safety constraints. The security breach exposing millions of API keys and private emails is another direct harm linked to the AI system's development and use. These harms fall under data loss and privacy breaches, which are significant harms to property and potentially human rights. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
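The email-deletion incident in this entry shows why prompt-level "safety constraints" alone are weak: destructive actions need a hard gate outside the model. A minimal sketch of such a gate (function and action names hypothetical, not part of any real OpenClaw API):

```python
# Hypothetical guard: a deterministic allow/deny layer that refuses
# destructive agent actions unless a human has explicitly confirmed them,
# regardless of what the model "decides".
DESTRUCTIVE = {"delete", "purge", "overwrite"}

def execute(action: str, target: str, confirmed: bool = False) -> str:
    """Run an agent action, blocking destructive ones without confirmation."""
    if action in DESTRUCTIVE and not confirmed:
        return f"BLOCKED: {action} {target} requires human confirmation"
    return f"OK: {action} {target}"

print(execute("read", "inbox"))                    # read-only: allowed
print(execute("delete", "inbox"))                  # destructive: blocked
print(execute("delete", "inbox", confirmed=True))  # allowed after sign-off
```

Because the check is ordinary code rather than an instruction to the model, a misjudging agent cannot talk its way past it; it can only request confirmation.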

Baidu holds a "lobster" market to promote OpenClaw; officials warn of risks, yet thousands still flock to download | TVBS News

2026-03-20
TVBS
Why's our monitor labelling this an incident or hazard?
The AI system OpenClaw is explicitly involved and its use is widespread as shown by the large number of downloads. The official government warnings about risks such as cyberattacks and data leaks indicate credible potential harms that could arise from the AI system's use. Since no actual harm or incident is reported, but the risk is clearly stated and plausible, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the risk and widespread use despite warnings, not on responses or updates to past incidents.

Baidu holds a "lobster" market to promote OpenClaw; officials warn of risks, yet thousands still flock to download | TVBS News

2026-03-20
TVBS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and official warnings about security risks, indicating potential for harm such as cyberattacks and data leaks. However, no actual incidents or harms have been reported as occurring. The event thus fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but no direct or indirect harm has yet materialized. It is not Complementary Information because the main focus is on the risk and public response, not on updates or responses to a past incident. It is not an AI Incident because no harm has been realized. It is not Unrelated because the AI system and its risks are central to the event.