Professor Loses Two Years of Academic Work After ChatGPT Data Deletion

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Marcel Bucher, a professor at the University of Cologne, lost two years of academic work—including grant applications and teaching materials—after disabling ChatGPT's data consent option. The action permanently deleted his chat history without warning or recovery options, highlighting risks in AI data management and user interface design.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (ChatGPT) whose use directly led to the permanent loss of valuable academic work, constituting harm to property and professional resources. The deletion was triggered by a user action on the system's data consent settings, and the system's design provided no warning or recovery option, making the harm irreversible. This fits the definition of an AI Incident: the system's use and design directly caused significant harm (loss of intellectual property and academic work), the harm is realized rather than merely potential, and the AI system's role is pivotal.[AI generated]
AI principles
Privacy & data governance; Transparency & explainability

Industries
Education and training

Affected stakeholders
Workers; Consumers

Harm types
Economic/Property

Severity
AI incident

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard

When two years of academic work vanished with a single click

2026-01-22
Nature
Professor Reports That OpenAI Deleted His Work, World Laughs in His Face

2026-01-23
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use directly caused harm by deleting valuable academic work without warning or recovery options. The harm is material and significant to the individual (loss of years of work), fitting the definition of harm to property. The incident stems from the AI system's use and its data management design, which failed to protect user data adequately. Although the harm is non-physical, it is clearly articulated and significant. Hence, this is an AI Incident rather than a hazard or complementary information.
ChatGPT: Professor loses two years of work

2026-01-25
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use directly led to the loss of valuable academic work, a form of harm to property (intellectual property). The deletion was triggered by a user action on an AI system setting, and the inability to recover the data, due to the system's design and policies, caused realized harm. Even though export options exist, the incident shows a failure in the AI system's data management that caused actual harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Scientist Horrified as ChatGPT Deletes All His Research

2026-01-24
Futurism
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) whose use led to the loss of important academic work, constituting harm to property and professional interests. The deletion of data without adequate warning or recovery options is a malfunction or design flaw in the AI system's operation. The harm is realized and significant, as it involves the loss of years of structured academic work. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm has already occurred and is directly linked to the AI system's use and malfunction.
A Professor Trusted ChatGPT With Two Years of Work -- Then One Click Wiped It All Away

2026-01-23
Inc.
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system (a large language model) used by the professor for various academic tasks. The loss of two years of work due to the platform's data deletion policy after disabling data consent is a direct harm caused by the AI system's use and its data management design. The harm is realized and significant, involving loss of valuable academic content. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction (data deletion without warning or recovery) directly led to harm to property (academic work).
A professor lost two years of 'carefully structured academic work' in ChatGPT because of a single setting change: 'These tools were not developed with academic standards of reliability in mind'

2026-01-27
PC Gamer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use and data management policies directly caused the loss of valuable academic work, a form of harm to property and academic communities. The harm is realized and significant, not merely potential. The AI system's design and operation (lack of warnings, no undo, no backups) contributed to the incident. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Scientist Loses Years of Work After Tweaking ChatGPT Settings

2026-01-28
VICE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use directly caused harm to the scientist by deleting two years of work. The harm is realized and significant, involving loss of intellectual property and academic productivity. The deletion was triggered by a user action on an AI system setting, and the system's design led to irreversible data loss without warning. This fits the definition of an AI Incident as the AI system's use directly led to harm to property (academic work).
Scientist Loses Two Years Of Work After Clicking The Wrong Button On ChatGPT, And People Are Less Than Sympathetic

2026-01-26
IFLScience
Why's our monitor labelling this an incident or hazard?
The event describes a direct harm caused by the AI system's malfunction or design choice: the permanent deletion of important academic data without warning or recovery. The AI system (ChatGPT) was used extensively by the professor for academic work, and the loss of this data is a significant harm to property and professional work. The AI system's data consent feature led to irreversible data loss, which is a direct consequence of the AI system's operation and policies. Hence, this is an AI Incident rather than a hazard or complementary information.
Professor Loses Two Years Of Research Work After Clicking The Wrong Button On ChatGPT

2026-01-27
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use directly caused permanent loss of important academic data, constituting harm to property and professional work. The harm is realized, not just potential. The incident stems from the AI system's use and design (lack of safeguards and warnings), leading to irreversible data deletion. This fits the definition of an AI Incident as the AI system's malfunction or design caused direct harm. It is not merely a hazard or complementary information, as the harm has occurred and is significant.
ChatGPT data loss threatens years of academic work

2026-01-27
Unica Radio
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system that was used to store and manage academic work. The deletion of conversations without recovery led to the loss of valuable data, which is harm to property and professional work. This harm is directly linked to the AI system's use and its data deletion policies. Although the harm is non-physical, it is significant and clearly articulated. Hence, this event qualifies as an AI Incident.