Perplexity AI Accused of Sharing User Conversations with Meta and Google Without Consent


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A class-action lawsuit in the United States alleges that Perplexity AI secretly shared users' conversational data, including sensitive information, with Meta and Google via embedded tracking technologies, even in incognito mode. The AI system's practices reportedly violated user privacy and data protection rights by transmitting data without consent.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (Perplexity AI) that processes user conversations. The lawsuit alleges that the system's deployment includes embedded tracking technologies that share sensitive user data with third parties without consent, even in incognito mode. This constitutes a violation of user privacy and data protection rights, which falls under violations of human rights or breaches of legal obligations protecting fundamental rights. The harm is realized and ongoing, not merely potential. Hence, this is an AI Incident, as the AI system's use directly led to a breach of rights and harm to users.[AI generated]
AI principles
Privacy & data governance
Transparency & explainability

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots
Content generation


Articles about this incident or hazard


Perplexity accused of sharing users' conversations...

2026-04-06
Europa Press

Perplexity accused of sharing data with Meta and Google without consent

2026-04-06
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
Perplexity AI is an AI conversational system, thus an AI system is involved. The lawsuit alleges unauthorized data sharing of sensitive user information, including personal identifiers, which is a violation of user privacy and data protection laws, constituting harm to human rights and legal obligations. The sharing occurs without consent and even in incognito mode, indicating a misuse of the AI system's data handling. Therefore, this event qualifies as an AI Incident due to realized harm involving violation of rights through the AI system's use.

Perplexity accused of spying on confidential conversations for advertising purposes

2026-04-06
El Nacional
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Perplexity's conversational AI platform) whose use led to unauthorized data collection and sharing, violating users' privacy rights. The harm is realized and direct, involving breaches of fundamental rights and privacy obligations. The AI system's deployment included embedded tracking tools that collected sensitive data without user consent, leading to significant harm. This fits the definition of an AI Incident as the AI system's use directly led to violations of human rights and privacy.

Perplexity sued for sharing user conversations with Meta and Google

2026-04-06
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity AI search engine) whose use is alleged to have caused harm by sharing sensitive user data with third parties without consent, including in incognito mode. This sharing of data constitutes a violation of privacy and potentially legal rights, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal obligations. The harm is realized (data sharing has occurred), not just potential, so it is not a hazard. The event is not merely complementary information or unrelated news, but a concrete incident involving AI misuse leading to harm.

Perplexity accused of covertly sharing users' conversations with Meta and Google

2026-04-06
Diario Siglo XXI
Why's our monitor labelling this an incident or hazard?
The AI system (Perplexity's conversational AI search) is explicitly involved as it generates user conversations. The lawsuit alleges that the company shares these conversations and personal identifiers with third parties without user consent, even in incognito mode, which directly leads to violations of privacy rights (a human rights violation). This constitutes harm (violation of rights) caused by the AI system's use and data handling practices. Hence, this is an AI Incident, not merely a hazard or complementary information.