
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A critical API vulnerability in Lovable, a Stockholm-based AI app-building platform, allowed unauthorized access to sensitive data, including AI chat histories, source code, and customer records, from thousands of projects. Although Lovable denied that a data breach occurred, unclear documentation and broken authorization exposed users to significant privacy and security risks.[AI generated]
Why is our monitor labelling this an incident or hazard?
An AI system is involved: Lovable is an AI app-building platform that handles AI chat histories and code projects. The event stems from the design and use of the AI system's visibility settings, which allowed unauthorized access to sensitive data, including AI chat histories and customer records. This constitutes a violation of privacy and a potential breach of obligations to protect user data, which falls under harm to rights and possibly harm to individuals. Although the company denies a breach, the exposure of sensitive data caused by unclear documentation and flawed design is a realized harm. This event therefore qualifies as an AI Incident because the AI system's use and design directly led to harm through unauthorized data exposure.[AI generated]