Major AI Chatbots Leak User Conversations to Advertising Trackers


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A study reveals that leading AI chatbots—ChatGPT, Claude, Grok, and Perplexity—have been leaking sensitive user conversation data to third-party advertising companies like Meta, Google, and TikTok. This data sharing enables user profiling and targeted advertising, constituting a significant privacy violation and breach of data protection regulations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems (chatbots) using tracking technologies that collect and share sensitive user data with third parties without adequate transparency or consent, violating privacy and data protection rights. This constitutes a breach of applicable law protecting fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized or ongoing, as user data is being collected and potentially exposed, even if no third-party access has been confirmed yet. The AI systems' use is central to this harm, as the trackers are integrated within the AI platforms and enable this data collection.[AI generated]
AI principles
Privacy & data governance
Transparency & explainability

Industries
Consumer services
Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots
Content generation


Articles about this incident or hazard


Study reveals that ChatGPT and other AIs use browser trackers to learn more about users

2026-05-05
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (chatbots) using tracking technologies that collect and share sensitive user data with third parties without adequate transparency or consent, violating privacy and data protection rights. This constitutes a breach of applicable law protecting fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized or ongoing, as user data is being collected and potentially exposed, even if no third-party access has been confirmed yet. The AI systems' use is central to this harm, as the trackers are integrated within the AI platforms and enable this data collection.

Private AI chats are openly available on the internet: the growing privacy risks of the leading chatbots

2026-05-05
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose use has directly led to privacy harms by exposing private user conversations to third-party advertising companies without adequate consent or access controls. The presence of third-party trackers integrated into AI chatbots and the public availability of full chat transcripts constitute a breach of privacy rights and data protection laws, fulfilling the criteria for an AI Incident under violations of human rights and privacy. The harm is realized, not merely potential, as private data is accessible and shared. Hence, this is not merely a hazard or complementary information but an AI Incident.

ChatGPT, Claude, Grok, and Perplexity leak conversation data for advertising tracking

2026-05-05
eldiario.es
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots powered by large language models) whose use has directly led to harm: unauthorized sharing of sensitive user conversation data with advertising companies, violating privacy and data protection regulations. This constitutes a breach of fundamental rights and legal obligations, fitting the definition of an AI Incident. The study documents actual data leaks and ongoing practices causing harm, not just potential risks, so it is not merely a hazard or complementary information. The involvement of AI in generating and handling the conversations is central, and the harm is realized through privacy violations and unauthorized profiling.

The AI you chat with is leaking your conversations

2026-05-05
El Periódico
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models/chatbots) whose use is linked to structural privacy vulnerabilities that could lead to harm through exposure of sensitive user data. Although no realized harm is confirmed, the described risks and insecure controls could plausibly lead to violations of privacy and confidentiality, which fall under violations of human rights or breach of obligations to protect fundamental rights. Therefore, this qualifies as an AI Hazard, because the development and use of these AI systems could plausibly lead to an AI Incident involving harm to users' rights.

The AI you chat with is leaking your conversations

2026-05-05
El Diario de Ibiza
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models/chatbots) whose use includes integration with third-party trackers that expose sensitive user data. Although no actual harm has been confirmed, the described vulnerabilities and potential for data exposure represent a credible risk of harm to users' privacy and rights. Therefore, this situation fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to violations of fundamental rights and privacy breaches, but no direct harm has yet been established.

Privacy risks in AI chatbots revealed

2026-05-05
7dias.com.do
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots using large language models) and their use leading to privacy risks through third-party trackers accessing private conversations. The exposure of entire chat logs publicly by default and sharing conversation titles with third parties reveals sensitive personal information, constituting a violation of privacy rights. The risk is materialized as the data is accessible, even if no confirmed misuse has occurred yet, which is sufficient to classify as harm under violations of human rights. Hence, this is an AI Incident due to realized harm linked to AI system use.

ChatGPT, Claude, Grok, and Perplexity leak conversation data for advertising tracking

2026-05-05
elDiarioAR.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots powered by large language models) whose use has directly led to the unauthorized sharing of sensitive user data with third-party advertising companies. This sharing breaches data protection regulations and users' privacy rights, constituting a violation of human rights and legal obligations. The harm is realized, not just potential, as the data exposure is ongoing or has occurred. The involvement of AI in generating and managing these conversations is central to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.