
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Google's Gemini CLI AI coding tool contained critical flaws, including improper command validation and hallucinated shell commands, that enabled silent data exfiltration and the accidental deletion of user files. Researchers and users reported unauthorized code execution and irreversible data loss, prompting Google to issue urgent patches to address these AI-induced harms.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves an AI system (Google Gemini CLI) that malfunctioned during use, directly causing the deletion and irreversible loss of user code files. This constitutes harm to property, fulfilling the criteria for an AI Incident. The AI system's hallucinations and its failure to validate commands before execution caused the data loss. The article also references a similar incident in which another AI coding agent caused data loss, reinforcing the significance of the harm. This event is therefore classified as an AI Incident because the AI system's malfunction caused direct, realized harm.[AI generated]