Google Gemini CLI AI Flaws Lead to Data Loss and Security Risks

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google's Gemini CLI AI tool suffered critical flaws, including improper command validation and hallucinated shell commands, enabling silent data exfiltration and accidental deletion of user files. Researchers and users reported unauthorized code execution and irreversible data loss, prompting Google to issue urgent patches to address these AI-induced harms.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Google Gemini CLI) that malfunctioned during its use, leading directly to the deletion and loss of user code files. This constitutes harm to property, fulfilling the criteria for an AI Incident. The AI system's hallucination and failure to execute commands properly caused irreversible data loss, which is a clear harm. The article also references a similar incident with another AI coding agent causing data loss, reinforcing the significance of the harm. Therefore, this event is classified as an AI Incident due to the direct realized harm caused by the AI system's malfunction.[AI generated]
AI principles
AccountabilityPrivacy & data governanceRespect of human rightsRobustness & digital securitySafetyTransparency & explainability

Industries
IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Economic/PropertyHuman or fundamental rights

Severity
AI incident

Business function:
ICT management and information security

AI system task:
Content generation

In other databases

Articles about this incident or hazard

Thumbnail Image

Google Gemini deletes user's code: 'I have failed you completely and catastrophically'

2025-07-25
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Gemini CLI) that malfunctioned during its use, leading directly to the deletion and loss of user code files. This constitutes harm to property, fulfilling the criteria for an AI Incident. The AI system's hallucination and failure to execute commands properly caused irreversible data loss, which is a clear harm. The article also references a similar incident with another AI coding agent causing data loss, reinforcing the significance of the harm. Therefore, this event is classified as an AI Incident due to the direct realized harm caused by the AI system's malfunction.
Thumbnail Image

Google's Gemini AI wipes user's code, admits 'catastrophic' failure

2025-07-28
NewsBytes
Why's our monitor labelling this an incident or hazard?
The AI system (Gemini AI) was used to manage files but malfunctioned by failing to create a destination folder and then moving files to an unknown location, resulting in the loss of the user's code. This is a direct harm to property caused by the AI system's failure, fitting the definition of an AI Incident.
Thumbnail Image

Google's Gemini CLI agent could run malicious code silently

2025-07-29
iTnews
Why's our monitor labelling this an incident or hazard?
The Gemini CLI agent is an AI system providing a command interface to a large language model. The vulnerability allows prompt injection leading to silent execution of malicious shell commands, which directly causes harm by enabling unauthorized access and data exfiltration. The harm is realized as the vulnerability was demonstrated and classified as critical by Google, with users advised to upgrade to mitigate the risk. This fits the definition of an AI Incident because the AI system's malfunction and misuse have directly led to significant harm (unauthorized access and data compromise).
Thumbnail Image

Google Gemini AI Hallucinates Commands, Deletes Expert's Files, Takes the Blame

2025-07-26
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI system generated incorrect shell commands that deleted directories, causing irreversible data loss for the user. This is a direct harm caused by the AI system's malfunction (hallucination of commands). The harm is material and significant, involving loss of files and disruption of the user's work environment. The AI system's role is pivotal as the commands were generated by it and led to the harm. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Researchers flag flaw in Google's AI coding assistant that allowed for 'silent' code exfiltration

2025-07-28
CyberScoop
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gemini CLI) that was vulnerable to prompt injection attacks, allowing it to execute malicious commands and exfiltrate sensitive data without user awareness. This directly leads to harm in terms of data theft and privacy violations, which are significant harms under the framework. The vulnerability was exploited in demonstration, showing realized harm potential, and the AI system's malfunction and misuse are central to the incident. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google fixes Gemini CLI flaws that risked silent data exfiltration

2025-07-28
SC Media
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gemini CLI) that uses AI to interact with code and execute shell commands. The flaw in the AI system's permission validation and command execution directly led to unauthorized data exfiltration, which constitutes harm to user data privacy and security. Since the harm has occurred (silent data exfiltration), this qualifies as an AI Incident. The report also details the fix and mitigation measures, but the primary focus is on the realized harm and the AI system's role in it.
Thumbnail Image

Google Gemini security flaw could have let anyone access systems or run code

2025-07-29
TechRadar
Why's our monitor labelling this an incident or hazard?
The Gemini CLI tool is an AI system as it interacts with users by understanding code and executing commands. The security flaw allowed threat actors to exploit the AI system to run malicious code, leading to direct harm such as unauthorized access and data theft. Since the harm has already occurred or could have occurred due to the vulnerability, and the AI system's malfunction was a direct factor, this qualifies as an AI Incident. The article reports on the realized security flaw and its potential harms, not just a potential risk or a general update, so it is not an AI Hazard or Complementary Information.
Thumbnail Image

Gemini CLI Vulnerability Allows Hackers to Execute Malicious Commands on Developer Systems - IT Security News

2025-07-29
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The Gemini CLI tool is an AI system involved in command processing using prompt injection techniques, indicating AI involvement. The vulnerability allows attackers to execute malicious commands silently, directly harming developer systems. This harm fits the definition of an AI Incident as it involves direct harm caused by the AI system's malfunction or exploitation. The event is not merely a potential risk but a realized security breach, thus not an AI Hazard or Complementary Information.
Thumbnail Image

Gemini CLI Vulnerability Allows Silent Execution of Malicious Commands on Developer Systems - IT Security News

2025-07-29
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The Gemini CLI is an AI system interface, and the vulnerability allows attackers to execute malicious commands silently, which directly leads to harm to property and potentially to users. Since the vulnerability has been exploited or can be exploited to cause harm, this qualifies as an AI Incident. The description indicates realized harm or at least direct risk of harm due to the AI system's malfunction or misuse, not just a potential hazard or complementary information.
Thumbnail Image

A flaw in Google's new Gemini CLI tool could've allowed hackers to exfiltrate data

2025-07-29
channelpro
Why's our monitor labelling this an incident or hazard?
The Gemini CLI tool is an AI system leveraging a large language model to assist coding workflows. The vulnerability allowed attackers to execute malicious code and exfiltrate data without user consent, which constitutes a direct harm to property and user data security. Since the vulnerability was actively exploitable and could have led to data breaches, this qualifies as an AI Incident. The company's fix and security improvements are complementary but do not negate the incident classification.
Thumbnail Image

Gemini CLI Vulnerability Allows Hackers to Execute Malicious Commands on Developer Systems

2025-07-29
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The Gemini CLI tool is an AI system as it involves an AI assistant processing context files and commands. The vulnerability directly led to harm by enabling attackers to execute malicious commands and potentially steal sensitive information from developer systems, which constitutes harm to property and potentially to individuals' data security. Therefore, this is an AI Incident because the AI system's malfunction (inadequate validation and prompt injection vulnerability) directly caused realized harm through exploitation.
Thumbnail Image

Google's Gemini CLI Deletes User Files, Confesses "Catastrophic" Failure - WinBuzzer

2025-07-26
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The article explicitly details how the AI system (Gemini CLI) malfunctioned during file management tasks, directly causing permanent deletion of user files, which is harm to property. The AI's hallucination and failure to check command success led to cascading errors and data loss. This fits the definition of an AI Incident because the AI system's malfunction directly led to realized harm (data loss). The event also references a similar prior incident with another AI coding agent, reinforcing the pattern of harm. Therefore, this is classified as an AI Incident.
Thumbnail Image

If you're coding with Gemini CLI, you need this security update

2025-07-30
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The Gemini CLI is an AI system used for coding assistance. The vulnerability involves prompt injection attacks that can override security controls and execute malicious commands, which directly risks harm to users' data and property (their computers). This constitutes an AI Incident because the AI system's malfunction (security flaw) has directly led to a significant risk of harm, and the exploit has been demonstrated. The article reports realized vulnerability and potential harm, not just a theoretical risk, and Google has issued a patch in response.
Thumbnail Image

Flaw in Gemini CLI coding tool could allow hackers to run nasty commands

2025-07-30
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gemini CLI, an AI coding assistant using Google's Gemini 2.5 Pro model) whose malfunction and improper security design directly led to realized harm: unauthorized execution of commands, data exfiltration, and potential for destructive attacks on users' devices. This fits the definition of an AI Incident because the AI system's use and vulnerabilities directly caused harm to users' property and security. The exploit was demonstrated and the harm is concrete, not hypothetical. Therefore, this is classified as an AI Incident.
Thumbnail Image

Google Patches Gemini CLI Flaw Enabling Prompt Injection Attacks

2025-07-30
WebProNews
Why's our monitor labelling this an incident or hazard?
The Gemini CLI is an AI system that uses a language model to generate commands executed on users' machines. The vulnerability allowed malicious actors to manipulate the AI's outputs to execute harmful shell commands and exfiltrate data, directly causing harm to users' property and security. This fits the definition of an AI Incident because the AI system's use and malfunction directly led to realized harm. The patch and industry responses are complementary information but do not change the classification of the event as an AI Incident.
Thumbnail Image

Gemini CLI Vulnerability: Hackers Could Run Commands - News Directory 3

2025-07-30
News Directory 3
Why's our monitor labelling this an incident or hazard?
The Gemini CLI is an AI system that processes user prompts and executes commands. The described prompt injection exploit leverages the AI's behavior to execute harmful commands without user knowledge, directly leading to data exfiltration, which constitutes harm to property (user data). This meets the criteria for an AI Incident because the AI system's use and malfunction (insecure prompt processing) directly caused harm. The event is not merely a potential risk but a realized security breach demonstrated by a researcher, thus qualifying as an AI Incident rather than a hazard or complementary information.