
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
OpenClaw, an open-source AI agent platform, has been affected by multiple security vulnerabilities, including authentication bypass and log poisoning, raising concerns about unauthorized access and malicious content injection. These risks have led major tech companies, including Meta, to ban its use over fears of privacy breaches and unpredictable AI behavior.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article concerns OpenClaw, a platform that autonomously executes commands and integrates with multiple services, which fits the definition of an AI system. The vulnerabilities and malicious use described have already caused realized harms, including credential theft, malware infection, and potential deep network compromise, which constitute harm to property and communities as well as violations of rights. Critical vulnerabilities exploited by attackers and the spread of malicious skills demonstrate both direct and indirect causation of harm, and the discussion of regulatory violations further supports classification as an AI Incident. Because the harms and security incidents described have already been realized, classification as a hazard or as complementary information is excluded.[AI generated]