AI Agent 'OpenClaw' Causes Academic Fraud and Financial Loss Amid Security Concerns in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The AI agent OpenClaw, widely adopted in China, has enabled academic fraud by generating papers with fabricated references and caused unexpected financial losses due to continuous operation. Its high system permissions pose significant privacy and security risks, prompting government support and regulatory scrutiny.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system 'OpenClaw' is explicitly mentioned and is used to perform complex tasks autonomously. The harm described is financial, with users incurring unexpectedly high bills due to the AI's continuous operation. This constitutes harm to individuals (economic harm), which fits within the scope of AI Incident as the AI system's use has directly led to harm (financial loss). Therefore, this event qualifies as an AI Incident.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Education and training

Affected stakeholders
Business
General public

Harm types
Economic/Property
Reputational
Human or fundamental rights

Severity
AI incident

Business function
Research and development

AI system task
Content generation


Articles about this incident or hazard

From delight to "wallet pain": experts unpack the hidden bill of an "electronic lobster"

2026-03-20
China News
Why's our monitor labelling this an incident or hazard?
The AI system 'OpenClaw' is explicitly mentioned and is used to perform complex tasks autonomously. The harm described is financial, with users incurring unexpectedly high bills due to the AI's continuous operation. This constitutes harm to individuals (economic harm), which fits within the scope of AI Incident as the AI system's use has directly led to harm (financial loss). Therefore, this event qualifies as an AI Incident.
Why cyber "lobster-raising" has gone viral

2026-03-20
China Economic Net (中国经济网)
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (OpenClaw) that autonomously performs complex tasks. It discusses the potential security risks and privacy issues associated with its widespread use and exposure, which could plausibly lead to harms such as privacy breaches or malicious exploitation. However, no actual harm or incident is reported as having occurred yet. Therefore, the event fits the definition of an AI Hazard, as it describes circumstances where the AI system's use could plausibly lead to harm but does not document a realized incident.
"Lobster-raising" subsidies are booming

2026-03-21
Phoenix New Media (凤凰网)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) and its development and use, with detailed discussion of government policies, subsidies, ecosystem building, and challenges. However, it does not report any actual harm or incident caused by the AI system, nor does it describe a plausible imminent harm event. Instead, it focuses on the broader AI ecosystem, policy responses, and market dynamics. This fits the definition of Complementary Information, as it provides supporting data and context about AI system deployment and governance without describing a new AI Incident or AI Hazard.
Domestic "lobster" upgraded to a WeChat mini-program: to raise one or not?

2026-03-21
Jinyang Net (金羊网)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous agent "龙虾") whose development and use could plausibly lead to significant harms such as theft, privacy violations, and financial loss. The article does not report any realized harm but focuses on the potential security risks and misuse scenarios, as well as expert advice and regulatory considerations. Therefore, this qualifies as an AI Hazard because the AI system's capabilities and deployment pose credible risks of harm in the near future if not properly managed.
Graduate student used "Lobster" to ghostwrite a thesis; with a 0% plagiarism-check score they thought it was seamless, until the advisor confronted them at a group meeting: "Do you still want to graduate?"

2026-03-21
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (OpenClaw and large language models) in generating academic content that includes fabricated references, leading to academic misconduct and potential harm to the integrity of research and intellectual property rights. The AI system's outputs directly contributed to the production of fraudulent academic work, which is a violation of rights and academic norms. Therefore, this qualifies as an AI Incident due to realized harm caused by AI misuse in knowledge production and academic fraud.