
Marcel Bucher, a professor at the University of Cologne, lost two years of academic work, including grant applications and teaching materials, after disabling ChatGPT's data consent option. The action permanently deleted his chat history without warning or any recovery option, highlighting risks in AI data management and user interface design.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use directly led to the permanent loss of valuable academic work, constituting harm to property and professional resources. The deletion was triggered by a user action on the system's data consent settings, and because the design provided no warning or recovery option, the harm was irreversible. This fits the definition of an AI Incident: the system's use and design directly caused significant harm (loss of intellectual property and academic work), the harm is realized rather than merely potential, and the AI system's role in the incident is pivotal.[AI generated]