Lovable AI App Builder Exposes Sensitive User Data via API Flaw


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A critical API vulnerability in Lovable, a Stockholm-based AI app-building platform, allowed unauthorized access to sensitive data, including AI chat histories, source code, and customer records, from thousands of projects. Although Lovable denies a data breach, unclear documentation and broken authorization exposed users to significant privacy and security risks.[AI generated]
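Several of the reports below attribute the exposure to broken object-level authorization: the API verified that a caller was logged in, but not that the requested project belonged to them. A minimal hypothetical sketch in Python of that class of flaw and its fix (all names and data invented; this is not Lovable's actual code):

```python
# Hypothetical in-memory "database" of projects with sensitive fields.
PROJECTS = {
    "p1": {"owner": "alice", "chat_history": ["prompt..."], "source": "..."},
    "p2": {"owner": "bob", "chat_history": ["secret prompt"], "source": "..."},
}


def get_project_broken(session_user: str, project_id: str) -> dict:
    """Vulnerable: authenticates the caller but never checks ownership,
    so any logged-in user can read any project's chats and code."""
    if session_user is None:
        raise PermissionError("login required")
    return PROJECTS[project_id]  # missing object-level authorization check


def get_project_fixed(session_user: str, project_id: str) -> dict:
    """Fixed: object-level authorization before any data is returned."""
    project = PROJECTS[project_id]
    if project["owner"] != session_user and not project.get("public", False):
        raise PermissionError("not authorized for this project")
    return project
```

In the broken version, `get_project_broken("alice", "p2")` happily returns bob's private project; the fixed version raises `PermissionError` unless the caller owns the project or it is explicitly marked public.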

Why's our monitor labelling this an incident or hazard?

An AI system is involved as Lovable is an AI app-building platform handling AI chat histories and code projects. The event stems from the use and design of the AI system's visibility settings, which led to unauthorized access to sensitive data, including AI chat histories and customer records. This constitutes a violation of privacy and potentially breaches obligations to protect user data, which falls under harm to rights and possibly harm to individuals. Although the company denies a breach, the exposure of sensitive data due to unclear documentation and design is a realized harm. Therefore, this qualifies as an AI Incident because the AI system's use and design directly led to harm through unauthorized data exposure.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
IT infrastructure and hosting
Digital security

Affected stakeholders
Consumers
Business

Harm types
Human or fundamental rights

Severity
AI incident

AI system task:
Interaction support/chatbots


Articles about this incident or hazard


Lovable denies data breach, says public settings are 'intentional'

2026-04-20
Economic Times
Why's our monitor labelling this an incident or hazard?
The platform is an AI app-building system, so an AI system is involved. The issue arises from the use and design of the AI system's public visibility settings, which led to unauthorized access to sensitive data including source code, credentials, and chat histories. This constitutes a violation of rights and a breach of obligations under applicable law. The harm has already occurred as demonstrated by the researcher's access and public disclosure. Although the company denies a breach, the exposure of sensitive data due to the AI system's design and unclear documentation is a direct cause of harm. Hence, this is an AI Incident.

Lovable denies data leak, cites 'intentional behavior'

2026-04-20
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Lovable's vibe coding AI tool) whose malfunction (a security flaw in API authorization) directly led to unauthorized access to sensitive user data, including credentials and chat histories. This constitutes harm to users' privacy and potentially breaches legal obligations regarding data protection. The company's failure to promptly address the vulnerability and the resulting data exposure confirm realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Lovable denies mass data breach

2026-04-20
Sifted
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Lovable's AI-powered vibe-coding platform) whose malfunction or design flaw allowed unauthorized access to sensitive user data, including chat histories and personal information. This exposure constitutes a violation of privacy rights and a breach of obligations under applicable law protecting personal data, fitting the definition of an AI Incident. Although the company denies a mass data breach, the unauthorized access and exposure of personal data have already occurred, indicating realized harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Vibe-Coding Darling Lovable's Public Projects Expose Chats, Code and Secrets: No Breach, Says Startup -- But Researcher Calls Foul

2026-04-21
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lovable) that enables users to build apps by chatting with AI, and the platform's API misconfiguration led to unauthorized access to sensitive data including AI chat histories and customer information. The harm includes violations of privacy rights and exposure of intellectual property, fulfilling criteria for harm to persons and breach of rights. The AI system's malfunction (backend tweak and API authorization flaw) directly caused the incident. The exposure is not speculative but has occurred, with real data leaked and users affected. Hence, this is an AI Incident rather than a hazard or complementary information.

Lovable AI App Builder Hit by Reported API Flaw Exposing Thousands of Projects

2026-04-21
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The Lovable AI app builder is an AI system, and the reported API flaw has led to a data breach exposing sensitive information, including source code and user credentials. This constitutes harm to intellectual property rights and user privacy, which falls under violations of rights as per the AI Incident definition. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's vulnerability and its exploitation.

Lovable AI App Builder Reportedly Exposes Thousands of Projects Data via API Flaw

2026-04-20
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lovable, an AI-powered app builder) whose API has a security flaw allowing unauthorized access to sensitive data including AI chat histories and project source code. The harm is realized as unauthorized data exposure affecting thousands of projects and users, including organizations and individuals, which constitutes violations of rights and harm to property and communities. The AI system's malfunction (broken authorization) is the direct cause of this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Hot AI startup Lovable's security stumble shows one big risk in vibe coding

2026-04-21
Business Insider
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Lovable's AI-coding platform) whose malfunction (a backend permissions error) directly led to unauthorized data exposure, including AI chat histories and user code. This exposure constitutes a violation of privacy and data security, harming users and organizations. The harm is realized, not just potential, and the AI system's role is pivotal as the platform's design and permission management caused the breach. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Lovable admits error in chat visibility settings, says issue fixed now

2026-04-21
Economic Times
Why's our monitor labelling this an incident or hazard?
The platform uses AI to build applications via conversational interfaces, making chat histories integral to the system. The exposure of chat data and related sensitive information due to a technical error and unclear design directly led to harm in terms of privacy violations and potential breaches of user rights. Although the company denies a data breach in the traditional sense, the unauthorized visibility of private chat histories and development data is a clear harm linked to the AI system's malfunction and design flaws. The company has since fixed the issue, but the event qualifies as an AI Incident because harm occurred due to the AI system's use and malfunction.

Lovable left AI prompts and user data exposed, one researcher found

2026-04-21
Fast Company
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the platform hosts AI chat models and user interactions with them. The exposure of chat histories and sensitive data due to a bug in the API directly led to a violation of user privacy and data protection rights, which falls under violations of human rights or breach of applicable law protecting fundamental rights. The harm is realized as user data was accessible to unauthorized parties. Therefore, this qualifies as an AI Incident.

Lovable security crisis: 48 days of exposed projects, closed bug reports, & the structural failure of vibe coding security

2026-04-21
The Next Web
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the vibe coding platform generating full-stack applications from natural language prompts) whose use has directly led to multiple security incidents exposing sensitive data and user records, constituting harm to individuals and communities. The vulnerabilities stem from AI-generated code containing security flaws, and the company's failure to properly address reported issues contributed to ongoing harm. The harms include violations of privacy and data protection rights, which fall under violations of human rights and legal obligations. The detailed description of realized harm and direct causation by the AI system's outputs confirms classification as an AI Incident rather than a hazard or complementary information.

Is Your Code Safe? Lovable AI Fixes Vulnerability That Leaked Database Credentials

2026-04-21
Techloy
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lovable's AI coding platform) whose malfunction (a security vulnerability) directly led to unauthorized access to sensitive data, including source code and credentials, which is a violation of intellectual property rights and user privacy. This fits the definition of an AI Incident because the AI system's malfunction caused harm (data breach and exposure of confidential information). Although the company initially denied a breach, the researcher and experts confirm unauthorized access occurred. The company's subsequent fix and explanation are complementary information but do not negate the incident classification.

Lovable AI coding platform faces scrutiny over data exposure

2026-04-21
SC Media
Why's our monitor labelling this an incident or hazard?
Lovable qualifies as an AI system because, as an AI coding platform, AI is involved in code generation and assistance. The security flaw allowed unauthorized access to sensitive data, which is a direct harm to property and user rights. The incident involves the AI system's malfunction (a permission-handling error) leading to realized harm. Hence, this event meets the criteria for an AI Incident.