Perplexity AI Accused of Sharing User Conversations with Meta and Google Without Consent

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A class-action lawsuit in the United States alleges that Perplexity AI secretly shared users' conversational data, including sensitive information, with Meta and Google via embedded tracking technologies, even in incognito mode. The AI system's practices reportedly violated user privacy and data protection rights by transmitting data without consent.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (Perplexity AI) that processes user conversations. The lawsuit alleges that the AI system's use includes embedding tracking technologies that share sensitive user data with third parties without consent, even in incognito mode. This constitutes a violation of user privacy and data protection rights, which falls under violations of human rights or breaches of legal obligations protecting fundamental rights. The harm is realized and ongoing, not merely potential. Hence, this is an AI Incident as the AI system's use directly leads to a breach of rights and harm to users.[AI generated]
AI principles
Privacy & data governance
Transparency & explainability

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots
Content generation


Articles about this incident or hazard

Perplexity accused of sharing users' conversations...

2026-04-06
europa press
Perplexity accused of sharing data with Meta and Google without consent

2026-04-06
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
Perplexity AI is an AI conversational system, thus an AI system is involved. The lawsuit alleges unauthorized data sharing of sensitive user information, including personal identifiers, which is a violation of user privacy and data protection laws, constituting harm to human rights and legal obligations. The sharing occurs without consent and even in incognito mode, indicating a misuse of the AI system's data handling. Therefore, this event qualifies as an AI Incident due to realized harm involving violation of rights through the AI system's use.
Perplexity accused of spying on confidential conversations for advertising purposes

2026-04-06
El Nacional
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Perplexity's conversational AI platform) whose use led to unauthorized data collection and sharing, violating users' privacy rights. The harm is realized and direct, involving breaches of fundamental rights and privacy obligations. The AI system's deployment included embedded tracking tools that collected sensitive data without user consent, leading to significant harm. This fits the definition of an AI Incident as the AI system's use directly led to violations of human rights and privacy.
Perplexity sued for sharing users' conversations with Meta and Google

2026-04-06
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity AI search engine) whose use is alleged to have caused harm by sharing sensitive user data with third parties without consent, including in incognito mode. This sharing of data constitutes a violation of privacy and potentially legal rights, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal obligations. The harm is realized (data sharing has occurred), not just potential, so it is not a hazard. The event is not merely complementary information or unrelated news, but a concrete incident involving AI misuse leading to harm.
Perplexity accused of covertly sharing users' conversations with Meta and Google

2026-04-06
Diario Siglo XXI
Why's our monitor labelling this an incident or hazard?
The AI system (Perplexity's conversational AI search) is explicitly involved as it generates user conversations. The lawsuit alleges that the company shares these conversations and personal identifiers with third parties without user consent, even in incognito mode, which directly leads to violations of privacy rights (a human rights violation). This constitutes harm (violation of rights) caused by the AI system's use and data handling practices. Hence, this is an AI Incident, not merely a hazard or complementary information.
Lawsuit alleges Perplexity's 'Incognito Mode' is a 'sham'

2026-04-08
Urban Tecno
Why's our monitor labelling this an incident or hazard?
Perplexity AI is an AI system combining real-time search with advanced language models. The lawsuit alleges that its use leads to unauthorized sharing of sensitive user data with third parties, including when users activate 'Incognito Mode', which should protect privacy. This sharing of personal data without consent is a violation of privacy rights and applicable laws protecting fundamental rights. The harm is realized and directly linked to the AI system's use and design, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations.
Use Perplexity? Lawsuit Accuses It of Sharing Personal Data With Google and Meta Without Permission

2026-04-05
PC Magazine
Why's our monitor labelling this an incident or hazard?
Perplexity is an AI system that engages users in interactive dialogues, and the lawsuit claims it shared personal data without consent, violating privacy rights and laws. This constitutes a violation of human rights and legal obligations related to privacy and data protection, which fits the definition of an AI Incident. The harm (privacy violation) has already occurred as per the complaint, and the AI system's use is directly linked to this harm.
Use Perplexity? Lawsuit Accuses It of Sharing Personal Data With Google and Meta Without Permission

2026-04-05
PC Mag Middle East
Why's our monitor labelling this an incident or hazard?
Perplexity is an AI system that engages users in interactive conversations, and the lawsuit claims it shared personal data collected during these interactions without permission, violating privacy rights and laws. This is a direct harm to users' rights and privacy, fitting the definition of an AI Incident under violations of human rights or breach of legal obligations. The involvement of AI in processing and sharing sensitive user data without consent directly led to the alleged harm, making this an AI Incident rather than a hazard or complementary information.
Perplexity is being sued for allegedly sharing user data with Meta and Google -- here's what we know so far

2026-04-03
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity AI) whose use is alleged to have directly led to violations of privacy rights and potential harm to users through unauthorized data sharing. The lawsuit claims that the AI system's 'Incognito' mode does not prevent data leakage, resulting in sensitive user information being shared with third parties without consent. This constitutes a violation of human rights and privacy laws, which fits the definition of an AI Incident. The harm is realized (not just potential), and the AI system's role is pivotal in causing this harm.
Aravind Srinivas's Perplexity AI faces lawsuit over sharing users' data without consent; company responds

2026-04-01
The Times of India
Why's our monitor labelling this an incident or hazard?
Perplexity AI's chatbot is an AI system that processes user inputs and generates responses. The lawsuit claims that the AI system's use involved unauthorized sharing of sensitive user data with third parties (Google and Meta) without consent, which is a violation of privacy rights and applicable data protection laws. This harm is directly linked to the AI system's operation and data handling, constituting an AI Incident under the framework as it involves violations of rights and legal obligations due to the AI system's use.
Perplexity AI machine accused of sharing data with Meta, Google

2026-04-01
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity's chatbot) whose use has allegedly caused harm by sharing sensitive personal data without consent, violating privacy and legal protections. The harm is realized and ongoing as per the lawsuit, fulfilling the criteria for an AI Incident. The involvement of AI in processing user conversations and the resulting privacy violations directly link the AI system to the harm described.
Perplexity AI accused of embedding 'undetectable' trackers for secretly routing sensitive user data to Meta and Google

2026-04-01
mint
Why's our monitor labelling this an incident or hazard?
The AI system (Perplexity's agentic shopping feature) is explicitly mentioned as automating order placement and accessing customer accounts covertly, which involves AI use. The lawsuit alleges that this AI use has directly led to security risks and unauthorized access, which are harms to property and potentially violations of legal rights. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use.
Perplexity AI Machine Accused of Sharing Data With Meta, Google

2026-04-01
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity AI's chatbot) whose use led to unauthorized sharing of sensitive personal data, violating privacy laws and users' rights. The harm is realized (privacy violation and potential exploitation of data), and the AI system's role is pivotal as it collects and transmits the data. Therefore, this meets the criteria for an AI Incident under violations of human rights and legal obligations.
Perplexity AI accused of using secret trackers to share your data with Meta and Google

2026-04-01
The Financial Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Perplexity AI's AI-powered search engine) whose use allegedly leads to unauthorized data collection and sharing, violating privacy laws and user rights. The harm is realized as it involves breaches of legal obligations and fundamental rights related to privacy. The involvement of AI in processing user conversations and the embedding of tracking software that transmits this data without consent directly links the AI system's use to the harm. Hence, this is an AI Incident rather than a hazard or complementary information.
Lawsuit accuses Perplexity of sharing conversations with Meta and Google.

2026-04-03
The Verge
Why's our monitor labelling this an incident or hazard?
The AI system (Perplexity's AI search engine) is explicitly involved as it processes user conversations. The lawsuit alleges that the system shared sensitive user data with third parties without adequate privacy safeguards, directly implicating the AI system's use in violating privacy rights. This constitutes a breach of obligations under applicable law protecting fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as user privacy has been compromised, and the AI system's role is pivotal in this harm.
Perplexity AI accused of sharing users' personal data with Meta, Google

2026-04-01
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity's AI chatbot/search engine) whose use has directly led to violations of users' privacy rights, a breach of legal obligations protecting fundamental rights. The sharing of personal data without consent constitutes harm under the framework's category (c) violations of human rights or breach of applicable law. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.
Perplexity AI sued over alleged user data sharing with Meta and Google

2026-04-01
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Perplexity AI search engine) and alleges that its use involved covert data sharing with third parties without user consent, potentially breaching privacy laws. This constitutes a violation of users' rights, a recognized harm under the AI Incident definition. The harm is realized as the data sharing allegedly already occurred, not just a potential risk. Hence, this is an AI Incident rather than a hazard or complementary information.
Perplexity AI machine accused of sharing data with Meta, Google

2026-04-02
The Star
Why's our monitor labelling this an incident or hazard?
Perplexity AI's chatbot is an AI system that processes user conversations. The lawsuit claims that the AI system's use involves embedding tracking software that shares sensitive personal data with third parties without user consent, violating privacy laws. This is a direct harm to users' privacy rights and legal protections. The involvement of the AI system in collecting and transmitting data is central to the harm. Hence, this qualifies as an AI Incident due to realized harm from the AI system's use.
Perplexity's "Incognito Mode" is a "sham," lawsuit says

2026-04-02
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity's AI chat engine) whose use has directly led to harm in the form of privacy violations and breaches of legal obligations protecting personal data. The lawsuit alleges that sensitive user data, including PII and health information, is shared without consent, constituting a breach of rights and legal protections. The AI system's operation and data handling practices are central to the harm. This fits the definition of an AI Incident, as the AI system's use has directly led to violations of human rights and legal obligations (privacy rights).
Perplexity 'Incognito' chats might not be so private, lawsuit claims

2026-04-03
Android Authority
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity's chat AI) whose use has directly led to violations of user privacy rights by sharing sensitive information without consent. The harm is concrete and ongoing, as users' personally identifiable information was allegedly exposed to third parties, constituting a breach of legal and fundamental rights protections. The presence of ad trackers integrated with the AI system and the lack of user consent further support the classification as an AI Incident. The lawsuit and potential penalties underscore the seriousness of the harm caused.
Is Your AI Chatbot Snitching? A New Lawsuit Alleges This Company Shared Data With Tech Giants

2026-04-03
Inc.
Why's our monitor labelling this an incident or hazard?
The AI system (Perplexity chatbot) is explicitly involved as it processes user conversations. The alleged sharing of personal and sensitive data without consent constitutes a violation of user rights and privacy, which falls under violations of human rights or breach of applicable law protecting fundamental rights. Since the harm (privacy violation) has already occurred and is the subject of a legal complaint, this qualifies as an AI Incident rather than a hazard or complementary information.
Perplexity's privacy lawsuit bombshells will make you sweat about using the AI tool

2026-04-03
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Perplexity) and its use. The lawsuit alleges that the AI system's operation led to unauthorized sharing of sensitive personal data, which is a violation of privacy rights and data protection laws, thus constituting harm under the framework's category (c) violations of human rights or breach of obligations under applicable law. The harm is directly linked to the AI system's use and its failure to protect user data as promised. Although the allegations are not yet proven, the event describes realized harm (data sharing without consent) rather than a mere potential risk, qualifying it as an AI Incident rather than a hazard or complementary information.
Perplexity AI sued over alleged data sharing with Meta, Google

2026-04-01
NewsBytes
Why's our monitor labelling this an incident or hazard?
The presence of hidden tracking mechanisms embedded by an AI system (Perplexity AI) that collects sensitive user data without consent constitutes a violation of privacy rights, a fundamental human right protected by law. Since the complaint alleges actual unauthorized data sharing and privacy breaches, this is a realized harm linked directly to the AI system's use, qualifying it as an AI Incident under violations of human rights and legal obligations.
Perplexity AI Caught Sharing Your Private Chats with Meta and Google

2026-04-02
Gadget Review
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Perplexity AI chatbot) whose use has directly led to harm in the form of privacy violations and unauthorized data sharing with third parties. This constitutes a breach of obligations under applicable law intended to protect fundamental rights, specifically privacy rights. The harm is not hypothetical but alleged to have already occurred, making this an AI Incident rather than a hazard or complementary information. The involvement of hidden tracking software linked to the AI system's use and the resulting privacy breach fits the definition of an AI Incident under violations of human rights or breach of legal obligations.
Perplexity, Meta And Google Hit With Privacy Suit

2026-04-01
MediaPost
Why's our monitor labelling this an incident or hazard?
The complaint explicitly involves an AI system (Perplexity's AI search engine) whose use has directly resulted in the unauthorized sharing of private user data, including sensitive health and financial information, with third parties. This sharing allegedly breaches privacy laws, including wiretapping statutes, thus constituting a violation of fundamental rights and legal obligations. The harm is realized and ongoing, meeting the criteria for an AI Incident due to the direct link between the AI system's use and the harm caused.
Ruffling Privacy: Perplexity, Meta And Google Hit With Suit Over Alleged Data Sharing

2026-04-03
MediaPost
Why's our monitor labelling this an incident or hazard?
The event involves AI systems, specifically Perplexity's AI machine that generates conversation transcripts. The alleged harm is a violation of privacy rights, which falls under violations of human rights or breach of applicable law protecting fundamental rights. The sharing of sensitive data without consent and its exploitation for advertising and resale constitutes a direct or indirect harm caused by the use of AI systems. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to a violation of rights.
Perplexity AI faces lawsuit over alleged 'undetectable' data tracking linked to Meta Platforms and Google

2026-04-01
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Perplexity AI's chatbot and automated shopping feature) whose use allegedly causes harm through unauthorized data sharing and account access. The harms include violations of privacy laws, unauthorized surveillance, and security risks to user accounts, which fall under violations of human rights and harm to individuals. The involvement of AI in these harms is direct, as the AI systems' operation is central to the alleged misconduct. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Perplexity AI sued over alleged data sharing with Meta and Google

2026-04-01
The Decoder
Why's our monitor labelling this an incident or hazard?
Perplexity AI is an AI system involved in processing user chat data. The lawsuit alleges that the AI system's use involves unauthorized sharing of personal and sensitive data with third parties, which is a breach of privacy and legal protections. This constitutes a violation of human rights and legal obligations related to data protection. The harm is realized as users' private information is exposed without consent. Hence, this is an AI Incident due to the direct link between the AI system's use and the alleged harm.
Google Named in Perplexity Lawsuit -- Court Filing Says Private Chats Were Shared

2026-04-02
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The complaint explicitly involves an AI system (Perplexity AI chatbot) whose use allegedly led to unauthorized sharing of private user data with third parties, including Google and Meta. This constitutes a violation of privacy rights and applicable laws, fitting the definition of harm under AI Incident (c) - violations of human rights or breach of legal obligations protecting fundamental rights. The AI system's development and use are implicated in the harm, and the court filings and injunction demonstrate the seriousness of the issue. The event is not merely a potential risk but an active legal claim of realized harm, thus it is an AI Incident rather than a hazard or complementary information.
Is Perplexity Incognito Mode really private? Lawsuit says otherwise

2026-04-04
Techlusive
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Perplexity AI chat platform) and describes a lawsuit alleging that the AI system's use leads to violations of user privacy rights by sharing personal data and conversations with third parties despite the use of Incognito Mode. This constitutes a breach of obligations under applicable law intended to protect fundamental rights, specifically privacy rights. The harm is realized as users' private data is allegedly shared without proper consent or disclosure, which fits the definition of an AI Incident. The involvement is through the AI system's use and data handling practices, not just potential harm, so it is not merely a hazard or complementary information. The event is not unrelated as it directly concerns AI system use and harm.
A 'sham': Perplexity's Incognito mode accused of spying on your conversations

2026-04-03
01net
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Perplexity's AI chatbot with an 'Incognito mode') whose use has directly led to harm: unauthorized sharing of sensitive user conversations with third-party advertising giants, violating user privacy and potentially legal protections. This constitutes a violation of rights under applicable law, fitting the definition of an AI Incident. The harm is realized and ongoing as per the lawsuit's claims, not merely potential. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Perplexity's (not so) Incognito mode accused of sharing your data with Google and Meta

2026-04-03
Génération-NT
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity's AI search engine) whose use has directly led to harm in the form of privacy violations and unauthorized data sharing, which breaches fundamental rights. The complaint details how the AI system's operation and associated tracking mechanisms have caused actual harm to users by exposing sensitive information, thus meeting the criteria for an AI Incident. The involvement of Google and Meta as recipients of the data further underscores the scale and impact of the harm. Since harm is realized and linked to the AI system's use, this is not merely a potential hazard or complementary information but an AI Incident.
Perplexity accused of violating its users' privacy

2026-04-05
24matins.fr
Why's our monitor labelling this an incident or hazard?
An AI system (Perplexity's AI conversational assistant) is explicitly involved, and its use has directly led to harm in the form of violations of user privacy rights and potential breaches of applicable privacy laws. The sharing of sensitive data without consent constitutes a breach of fundamental rights and legal obligations, fitting the definition of an AI Incident under violations of human rights and applicable law. The harm is realized, not just potential, as evidenced by the lawsuit and detailed allegations.
Perplexity in turmoil: users' personal data allegedly leaked to Meta and Google

2026-04-02
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Perplexity's AI search engine) whose use has directly led to harm in the form of privacy violations and breaches of legal protections. The sharing of sensitive user data with third parties without consent constitutes a violation of rights under applicable laws, fulfilling the criteria for an AI Incident. The complaint is detailed and based on technical evidence, indicating realized harm rather than potential harm. Hence, the classification as AI Incident is appropriate.
Perplexity promised to be the anti-Google with its AI browser: a complaint claims it transmitted your data to Google and Meta, even in an incognito mode described as a 'sham'

2026-04-03
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perplexity AI's search engine and browser) whose use has directly led to harm: unauthorized transmission of sensitive user data to third parties (Meta and Google), violating privacy laws and user rights. The harm is realized, not hypothetical, as the lawsuit alleges actual data leakage and deception. The AI system's role is pivotal because it processes user conversations and supposedly transmits them secretly. This fits the definition of an AI Incident involving violations of human rights and legal obligations protecting privacy.
0

2026-04-03
developpez.net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Perplexity's AI search engine and AI browser Comet) whose use has directly led to harm: unauthorized data transmission violating privacy laws and unauthorized system access. The harms include violations of human rights (privacy), breach of legal obligations, and potential fraud. The detailed allegations and legal actions confirm realized harm rather than mere potential risk. Hence, the classification as an AI Incident is appropriate.