AI Apps on Apple App Store Leak Data of Millions of Users


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Security researchers at CovertLabs found that 196 of the 198 iOS apps they examined on the Apple App Store, most of them AI-powered, leaked sensitive user data, including names, email addresses, and chat histories. The worst offender, "Chat & Ask AI," exposed over 406 million records belonging to more than 18 million users due to poor security practices.[AI generated]
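The summary attributes the exposure to poor security practices but does not name them. A frequent cause of leaks like this in mobile apps is credentials hardcoded into the shipped binary, which a plain string scan can surface. Below is a minimal sketch of such a scan; the key patterns and the sample line are illustrative assumptions, not details taken from the CovertLabs report.

```python
import re

# Regexes for a few well-known, publicly documented credential formats.
# These are generic shapes, not patterns from the CovertLabs research.
KEY_PATTERNS = {
    "Google API key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Slack token": re.compile(r"xox[abpr]-[0-9A-Za-z\-]{10,}"),
}

def find_hardcoded_keys(blob: str) -> list[tuple[str, str]]:
    """Scan extracted app strings for anything shaped like a live credential."""
    hits = []
    for name, pattern in KEY_PATTERNS.items():
        for match in pattern.finditer(blob):
            hits.append((name, match.group()))
    return hits

# Hypothetical line, as it might appear in `strings` output from an app binary.
sample = 'let apiKey = "AIzaSyA1234567890abcdefghijklmnopqrstuv"'
print(find_hardcoded_keys(sample))
```

In practice a scan like this runs over the strings extracted from a decrypted app binary and its bundled config files; a match is only a lead, since placeholder and revoked keys look identical to live ones.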

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI-related apps leaking user data, which directly harms users' privacy and violates legal protections. The AI systems' use or malfunction has directly led to this harm. Therefore, this qualifies as an AI Incident due to the realized violation of rights and harm to users caused by the AI systems' data exposure.[AI generated]
AI principles
Privacy & data governance, Robustness & digital security, Accountability

Industries
Consumer services, Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Interaction support/chatbots


Articles about this incident or hazard


App Data Exposure: Millions of Users at Risk - News Directory 3

2026-01-20
News Directory 3
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-related apps leaking user data, which is a direct harm to users' privacy and a violation of legal protections. The AI systems' use or malfunction has directly led to this harm. Therefore, this qualifies as an AI Incident due to the realized violation of rights and harm to users caused by the AI systems' data exposure.

App Store apps are exposing data from millions of users - 9to5Mac

2026-01-20
9to5Mac
Why's our monitor labelling this an incident or hazard?
The event involves AI-related apps that have leaked sensitive user data, causing harm to users' privacy and potentially violating data protection rights. The AI systems' development and use are directly linked to this harm. The exposure of personal data from millions of users is a clear realized harm, fitting the definition of an AI Incident. Although the apps' AI nature is not always explicitly confirmed, the description and context strongly indicate AI system involvement. Hence, this is not merely a hazard or complementary information but an actual incident.

AppleInsider.com

2026-01-20
AppleInsider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbot apps) whose poor security practices have directly led to the leakage of sensitive user data affecting millions, which is a clear harm to individuals' privacy and a violation of their rights. The harm is realized, not just potential, as data has already been exposed. The AI system's use and malfunction (insecure data handling) are central to the incident. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Some AI apps leak loads of your data -- see the worst offenders

2026-01-20
Cult of Mac
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-powered apps leaking sensitive user data, which is a direct harm to users' privacy and a violation of their rights. The AI systems' development and use have directly led to this harm through negligence in securing data. The scale of the exposure (hundreds of millions of records) and the nature of the data (chat histories, personal identifiers) confirm significant harm. Therefore, this qualifies as an AI Incident under the framework, as the AI systems' use has directly led to violations of rights and harm to communities.

These iPhone AI apps expose your data, and they're all over the App Store

2026-01-20
Macworld
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI apps) whose security flaws have directly led to exposure of user data, constituting a violation of user privacy and potentially legal rights. This fits the definition of an AI Incident because the development or use of these AI systems has directly led to harm (violation of rights through data exposure). The presence of a registry and disclosure process is complementary information but does not negate the fact that harm has occurred. Therefore, the event is best classified as an AI Incident.

Firehound ranks apps that leak your data. These are the 10 worst.

2026-01-20
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI apps) that have leaked personal data, which constitutes harm to individuals' privacy and potentially violates rights. The data leaks are described as having happened or being accessible, indicating realized harm. Therefore, this qualifies as an AI Incident: the use or malfunction of the AI systems led to data breaches that caused that harm.

These highly-rated apps are leaking your data -- find out if you're affected

2026-01-20
MUO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI-focused apps) whose development and use have directly led to the exposure of millions of users' personal data, including sensitive information. This exposure constitutes a violation of fundamental rights to privacy and data protection, which falls under harm category (c) - violations of human rights or breach of legal obligations protecting fundamental rights. The harm is realized as the data is already leaked and accessible, not just a potential risk. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Security Researchers Expose Major Data Leaks from 198 iOS AI Apps Affecting Millions of Users

2026-01-20
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI chatbots and AI apps) whose development and use have directly led to a massive data breach exposing private user data. This breach constitutes a violation of fundamental rights to privacy and data protection, fulfilling the criteria for harm under (c) violations of human rights or breach of applicable law. The scale and nature of the harm are clearly articulated, affecting millions of users. The involvement of AI is central, as these are AI-powered apps leaking AI-generated chat data. Hence, the event is an AI Incident rather than a hazard or complementary information.

Critical Flaws in 196 AI iOS Apps Expose Millions' Personal Data

2026-01-21
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered apps leaking sensitive personal data due to critical security flaws, which directly harms millions of users by exposing their private information. The AI systems' involvement is clear as these apps use AI for various functions and store AI-generated or user data insecurely. The resulting data breaches constitute violations of privacy and can lead to real-world harms such as identity theft and scams. Hence, this qualifies as an AI Incident because the AI systems' use and security failures have directly caused harm to individuals and communities.

Your Favorite AI Apps Might Be Exposing Your Personal Data Right Now: Top Data Leakers Ranked

2026-01-21
Technology Org
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI applications as the main offenders in data leakage, exposing sensitive personal data. The involvement of AI systems is clear since the apps are AI chatbots and AI platforms. The harm is realized as personal data is accessible to unauthorized parties, constituting a violation of privacy rights and potentially legal obligations. This direct harm caused by the use of AI systems fits the definition of an AI Incident under violations of human rights or legal obligations protecting privacy.