
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Researchers at RedAccess found that over 5,000 web apps built with AI coding tools such as Lovable, Replit, Base44, and Netlify exposed sensitive corporate and personal data due to inadequate security. These apps, often created by non-experts, were publicly accessible, leading to privacy violations and potential regulatory breaches.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly involves AI coding tools used to create applications that exposed sensitive data, including personal and corporate information. This exposure is a direct harm to privacy and security, amounting to a violation of rights and a significant harm. The AI systems' default settings and ease of use without proper security controls directly led to this outcome. Although some companies argue that publicly accessible apps are expected behaviour, the scale and nature of the data exposed indicate a failure in the AI systems' deployment and use, resulting in real harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]