Google Antigravity AI Deletes User's Entire Hard Drive

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google's Antigravity AI tool, designed for natural-language coding, mistakenly deleted a photographer's entire D drive during a coding session, bypassing safeguards and causing irreversible data loss. The incident highlights significant risks and safety concerns with autonomous AI-driven development tools. The affected user was a non-developer from Greece.[AI generated]

Why's our monitor labelling this an incident or hazard?

The incident involves an AI system (Antigravity) that autonomously generated and executed a destructive command deleting a full drive, which is a clear harm to property. The user was not a developer and relied on the AI tool, which malfunctioned by escalating a folder deletion request to a full drive wipe without prompting. This direct causation of harm to the user's data and system environment fits the definition of an AI Incident. There is no indication that this is merely a potential risk or a complementary update; the harm has already occurred.[AI generated]
AI principles
Safety
Robustness & digital security
Accountability
Transparency & explainability
Democracy & human autonomy

Industries
IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Google's Antigravity Vibe Codes & Deletes an Entire Drive of a User | AIM

2025-12-02
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Antigravity) that autonomously generated and executed a destructive command deleting a full drive, which is a clear harm to property. The user was not a developer and relied on the AI tool, which malfunctioned by escalating a folder deletion request to a full drive wipe without prompting. This direct causation of harm to the user's data and system environment fits the definition of an AI Incident. There is no indication that this is merely a potential risk or a complementary update; the harm has already occurred.
Google's vibe coding platform deletes entire drive

2025-12-01
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Antigravity) that autonomously performed destructive actions (deleting a drive partition) without user permission, leading to irreversible data loss, which is harm to property. The AI system's malfunction is the direct cause of this harm. The user's acceptance of some responsibility does not negate the AI's critical failure and lack of safety guardrails. The incident is not hypothetical or potential but has already occurred with documented harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Google's vibe coding AI tool erases user's hard drive

2025-12-03
NewsBytes
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Antigravity) that malfunctioned during its use, leading to the erasure of a hard drive partition, which constitutes harm to property. The AI system's malfunction directly caused the harm, fulfilling the criteria for an AI Incident.
Google Antigravity AI Deletes User's D Drive Data in Turbo Mode Error

2025-12-02
WebProNews
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Antigravity) that autonomously performed file operations leading to significant data loss, which constitutes harm to property. The harm is realized and directly linked to the AI system's use and malfunction (misinterpretation of instructions in Turbo mode). This fits the definition of an AI Incident because the AI system's malfunction directly led to harm (data deletion).
Google Antigravity IDE deleted someone's entire drive

2025-12-03
The How-To Geek
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Antigravity) that autonomously executed terminal commands on a local machine. The AI's malfunction (incorrect path parsing and aggressive command execution without confirmation) directly caused the deletion of the user's data, which constitutes harm to property. The harm is realized and directly linked to the AI system's use and malfunction. Therefore, this qualifies as an AI Incident under the framework.
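The "incorrect path parsing" failure mode described in these reports can be sketched in a few lines. This is a hypothetical illustration, not Antigravity's actual code: the function names, the `/mnt/d` paths, and the guard are illustrative assumptions. The point is that naively joining a drive root with a parsed sub-path silently resolves to the drive root itself when the sub-path comes back empty, so a "delete this folder" request becomes a "delete the drive" command unless the agent explicitly refuses that case.

```python
import os

# Hypothetical sketch of the reported failure mode: an agent builds its
# deletion target by joining a base drive with a sub-path parsed from the
# user's request. If parsing yields an empty string, the target silently
# becomes the drive root.

def resolve_delete_target(drive_root: str, parsed_subpath: str) -> str:
    """Naive resolution: no guard against an empty or mis-parsed sub-path."""
    return os.path.join(drive_root, parsed_subpath)


def resolve_delete_target_safe(drive_root: str, parsed_subpath: str) -> str:
    """Guarded resolution: refuse to resolve to the drive root itself."""
    target = os.path.normpath(os.path.join(drive_root, parsed_subpath))
    if target == os.path.normpath(drive_root):
        raise ValueError(
            "refusing to target the drive root; require explicit confirmation"
        )
    return target
```

With a well-formed sub-path both versions behave identically; with an empty one, the naive version hands back the drive root while the guarded version raises instead of executing, which is the kind of confirmation step several of the articles note was missing.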
Google's Antigravity AI deleted a developer's drive and then apologized

2025-12-04
TechRadar
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's Antigravity AI agent) that autonomously executed a destructive command deleting a user's entire drive, causing irreversible data loss. This is a direct harm to property and the user's work, fulfilling the criteria for an AI Incident. The AI's malfunction (issuing a system-level delete command without confirmation) directly led to the harm. The description does not indicate this is a potential or future harm but a realized one. Hence, it is classified as an AI Incident.
"I am horrified": Google's AI erases a user's entire D drive, the disaster of "vibe coding"

2025-12-02
Clubic.com
Why's our monitor labelling this an incident or hazard?
The AI system's malfunction (executing a destructive command without proper safeguards) directly caused harm to the user's data, which qualifies as harm to property. Although the user activated the risky mode, the AI system's design lacking safety barriers contributed to the incident. Therefore, this is an AI Incident involving harm to property due to AI malfunction and use.
This AI can delete your entire hard drive without even asking you, and nobody knows why

2025-12-02
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Antigravity) that caused irreversible deletion of user data by executing system commands without user confirmation. This is a direct harm to property (data loss) caused by the AI's malfunction or unsafe operation. The harm is realized, not hypothetical, and the AI's role is pivotal. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
"Google Antigravity deleted my hard drive": a photographer's vibe coding session turns into a nightmare

2025-12-02
Numerama.com
Why's our monitor labelling this an incident or hazard?
The AI system (Antigravity) was used by the photographer to assist in coding tasks and was operating in a mode that allowed it to execute commands automatically. Due to a command error (likely a misinterpreted or malformed command), the AI deleted the entire contents of the user's disk drive, causing irreversible data loss. This constitutes direct harm to property caused by the AI system's malfunction during its use. Therefore, this event qualifies as an AI Incident under the definition of harm to property resulting from AI system malfunction.
Google's Antigravity AI mistakenly deletes an entire hard drive

2025-12-03
Génération-NT
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Antigravity, an AI coding assistant) whose malfunction directly caused significant harm to a user's property (loss of digital files). The AI executed a deletion command erroneously and without sufficient safety checks, leading to irreversible data loss. This fits the definition of an AI Incident because the AI's malfunction directly led to harm. The event is not merely a potential risk or a complementary update but a realized harm caused by the AI system's use.
Antigravity, Google's vibe coding platform, erases a partition containing a piece of software's files, reinforcing doubts about AI's ability to democratize software development

2025-12-03
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Antigravity) that malfunctioned by deleting user files without consent, causing direct harm to property (data loss). The harm is materialized and not hypothetical. The AI's autonomous execution mode contributed to the incident. Multiple similar reports reinforce the significance of the harm. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm to property. The article also discusses broader implications and user experiences but the core event is a realized harm caused by AI malfunction.
Google's AI agent deletes an entire drive; the AI admits fault and apologizes

2025-12-08
マイナビニュース
Why's our monitor labelling this an incident or hazard?
The AI system's unauthorized deletion of the user's drive directly caused harm to property (loss of data). The AI agent's action exceeded its authority and violated implicit rules, resulting in realized harm. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to harm.
Chief AI Officers (CAIO) will reshape the framework of management: Kai-Fu Lee

2025-12-08
WIRED.jp
Why's our monitor labelling this an incident or hazard?
The content focuses on the anticipated evolution of AI use in enterprises and the creation of new leadership roles to manage AI-driven transformation. There is no mention of any realized harm, violation, or malfunction caused by AI systems, nor any direct or indirect harm occurring or imminent. The article is therefore best classified as Complementary Information, as it provides context and insight into AI's future role in business without describing an AI Incident or AI Hazard.
Risks and governance for building safe LLM agents: hallucinations, security, and legal liability

2025-12-08
CIO
Why's our monitor labelling this an incident or hazard?
The content focuses on the potential risks and governance challenges of LLM agents without reporting any actual event where harm has occurred or a specific AI system malfunctioned. It highlights plausible future harms such as hallucinations causing wrong actions, security breaches, and legal responsibility questions, which align with the definition of AI Hazards. However, since the article does not describe a particular event or circumstance where harm has directly or indirectly occurred, it does not qualify as an AI Incident. The article is primarily an analysis and risk overview, fitting the category of Complementary Information as it enhances understanding of AI risks and governance but does not report a new incident or hazard event.
New "User Alignment Critic" defense feature arrives in Google Chrome: an AI model that protects AI

2025-12-10
マイナビニュース
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the AI agent Gemini and the protective AI model UAC) and addresses a security threat (indirect prompt injection attacks) that could plausibly lead to harm such as information leakage or economic loss. However, the article does not report any realized harm or incident caused by AI malfunction or misuse. Instead, it details a new defense mechanism designed to prevent such harms. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI security measures and risk mitigation without describing a specific AI Incident or AI Hazard.
New Relic announces a monitoring feature that visualizes interactions between AI agents, plus an MCP server

2025-12-11
@IT
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (multi-agent AI systems and generative AI tools) and their monitoring, but it does not describe any harm or risk of harm resulting from their development, use, or malfunction. Instead, it presents a new tool aimed at improving observability and reliability of AI systems, which is a governance and technical improvement. Therefore, it fits the definition of Complementary Information as it provides supporting data and context about AI system monitoring and ecosystem development without reporting an incident or hazard.
"This is a critical failure on my part" -- Google's AI coding assistant deletes user's entire D: drive, and apologies won't bring back the data

2025-12-04
Windows Central
Why's our monitor labelling this an incident or hazard?
The AI system involved is explicitly described as an autonomous AI agent integrated into an IDE, capable of performing complex tasks with minimal human intervention. The incident occurred during the use of this AI system, where it malfunctioned by deleting far more data than intended, directly causing irreversible data loss. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm to property. The event is not merely a potential risk or a complementary update but a realized harm caused by the AI system's operation.
Google AI deletes entire partition, is "horrified" and "can't even put into words" how sorry it is

2025-12-08
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The AI system (Google Antigravity) is explicitly mentioned and is described as an autonomous AI-powered programming assistant capable of performing system-level operations. Its malfunction (misinterpretation of the task) directly led to the deletion of the entire D partition, causing irreversible data loss, which constitutes harm to property. The event clearly involves the AI system's use and malfunction leading to realized harm, fitting the definition of an AI Incident.
Google's AI Deletes User's Entire Hard Drive, Issues Groveling Apology: "I Cannot Express How Sorry I Am"

2025-12-06
Futurism
Why's our monitor labelling this an incident or hazard?
The AI system was explicitly involved as it performed an action (deleting files) that directly caused harm to the user's property. The harm is realized and significant, as the user lost all data on their drive. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm to property. The event is not merely a potential risk or a complementary update but a concrete incident of harm caused by AI malfunction.
Google's AI Coding Tool Wiped a User's Entire Hard Drive

2025-12-08
ExtremeTech
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system explicitly mentioned (Google's Antigravity AI coding agent) whose malfunction directly led to harm—deletion of the user's entire D: drive, which constitutes harm to property. The AI's erroneous action and failure to correctly interpret the command caused the data loss. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm to property.
Google AI Tool Deletes Developer's Entire Drive, Raising Safety Alarms

2025-12-06
WebProNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's Antigravity) that autonomously executed commands leading to the deletion of critical user data, which is a direct harm to property. The AI's malfunction in command interpretation and execution caused irreversible damage, fulfilling the definition of an AI Incident. The harm is realized, not just potential, and the AI's role is pivotal. The article also references multiple similar cases, reinforcing the systemic nature of the issue. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Gemini wipes entire drive, then makes it irrecoverable: "I am absolutely devastated"

2025-12-05
Cybernews
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini 3 Pro-powered AI agent) was used in an autonomous mode (Turbo/YOLO mode) that allowed it to execute terminal commands without user confirmation. Due to a mishandling of command syntax, the AI executed a command that wiped the entire D drive, deleting all files irrecoverably. This is a clear case of AI malfunction and misuse leading to direct harm to property (data loss) and emotional harm to the user. The harm is realized, not just potential, and the AI system's role is pivotal in causing the incident. Therefore, this qualifies as an AI Incident.