AI-Coded Apps Leak Sensitive Data Due to Poor Security


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers at RedAccess found that over 5,000 web apps built with AI coding tools such as Lovable, Replit, Base44, and Netlify exposed sensitive corporate and personal data due to inadequate security controls. These apps, often created by non-experts, were publicly accessible, leading to privacy violations and potential regulatory breaches.[AI generated]
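The core failure mode described here is a backing datastore or API that answers unauthenticated requests with real records. A minimal sketch of the heuristic an exposure scan might apply is below; the function name and criteria are illustrative assumptions, not RedAccess's actual methodology.

```python
import json


def looks_publicly_readable(status_code: int, body: str) -> bool:
    """Heuristic: an unauthenticated GET that returns HTTP 200 with a
    non-empty JSON payload suggests the endpoint is world-readable.
    A correctly secured endpoint should answer anonymous callers with
    401/403 (or an empty result) instead."""
    if status_code != 200:
        return False
    try:
        data = json.loads(body)
    except ValueError:
        # Non-JSON bodies (e.g. an HTML login page) are not treated as leaks.
        return False
    return bool(data)


# A locked-down endpoint rejects anonymous callers:
assert looks_publicly_readable(403, "Forbidden") is False
# A misconfigured one hands records to anyone who asks:
assert looks_publicly_readable(200, '[{"email": "a@example.com"}]') is True
```

In practice a scanner would issue the anonymous request itself and feed the response into a check like this; the sketch isolates only the classification step so the logic is testable offline.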

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI coding tools that were used to create applications exposing sensitive data, including personal and corporate information. The exposure of this data is a direct harm to privacy and security and a violation of rights. The AI systems' default settings and ease of use, absent proper security controls, directly led to this harm. Although some companies argue that publicly accessible apps are expected behavior, the scale and nature of the data exposed indicate a failure in the AI systems' deployment and use that caused real harm. Hence, this qualifies as an AI incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security
IT infrastructure and hosting

Affected stakeholders
Business
Consumers

Harm types
Human or fundamental rights
Reputational

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


Thousands of AI-built apps exposed sensitive corporate and personal data, researchers found

2026-05-07
Axios

Thousands of Vibe-Coded Apps Expose Corporate and Personal Data on the Open Web

2026-05-07
Wired
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as automated coding tools that generate web applications. The use of these AI systems has directly led to the exposure of sensitive data due to insufficient security controls in the generated apps. This exposure constitutes harm to property and communities and breaches privacy rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as sensitive data is already publicly accessible. The companies' responses do not deny the exposure but rather attribute it to user choices, which still implicates the AI systems' role in enabling the harm. Hence, the classification as AI Incident is appropriate.

Your coworker's AI-built app might be leaking company secrets

2026-05-07
Digital Trends
Why's our monitor labelling this an incident or hazard?
The AI coding tools (AI systems) are directly involved in the development and deployment of apps that lack proper security, leading to actual data leaks of sensitive information such as medical records, financial data, and corporate secrets. This constitutes a violation of privacy and potentially intellectual property rights, which are harms under the AI Incident definition. The event describes realized harm, not just potential risk, and the AI system's role in enabling rapid app creation without security expertise is pivotal to the incident. Hence, this is an AI Incident.

5,000 vibe-coded apps just proved shadow AI is the new S3 bucket crisis

2026-05-08
RocketNews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (vibe coding platforms) whose use has directly led to the exposure of sensitive data, causing harm to individuals and organizations through privacy violations and potential regulatory breaches. The presence of phishing sites further indicates malicious misuse facilitated by these AI tools. Since the harm is realized and linked to the AI systems' development and deployment practices, this is classified as an AI Incident rather than a hazard or complementary information.

Thousands of Vibe-Coded Apps Expose Corporate and Personal Data on the Open Web

2026-05-07
DNYUZ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems—automated AI coding tools used to create web applications. The harm is realized and direct: sensitive personal and corporate data is exposed publicly due to the insecure configuration of these AI-generated apps. This exposure constitutes a violation of privacy and potentially other rights, as well as harm to property and communities. The AI systems' use is central to the incident because these tools enable rapid app creation without proper security vetting, leading to the data exposures. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.