OpenAI Faces Lawsuit Over ChatGPT Data Sharing With Meta and Google


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI is facing a class-action lawsuit in California alleging it embedded Meta's Facebook Pixel and Google Analytics in ChatGPT, resulting in users' sensitive queries and personal data being shared with Meta and Google without consent. The suit claims this violates U.S. and California privacy laws.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (OpenAI's chatbot) that processes personal user data. The lawsuit alleges that the AI system's use has directly led to violations of privacy laws and unauthorized sharing of intimate personal information, constituting harm to users' rights. This meets the definition of an AI Incident because the AI system's use has directly led to a breach of obligations under applicable law protecting fundamental rights (privacy). The harm is realized, not just potential, as the lawsuit is filed based on actual data sharing practices. Hence, the classification is AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Interaction support/chatbots


Articles about this incident or hazard


OpenAI Accused of Handing Over Your Intimate Personal Information to Meta and Google

2026-05-14
Futurism

OpenAI Sued for Allegedly Sharing User Data from ChatGPT With Google, Meta

2026-05-14
Republic World
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) and alleges that its use led to unauthorized sharing of sensitive user data with third parties, violating privacy laws and constitutional rights. This is a direct harm related to human rights and legal obligations. The involvement of AI in processing user queries and the subsequent data transmission to advertising platforms is central to the incident. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI Accused of Handing Over Your Intimate Personal Information to Meta and Google

2026-05-14
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT) that process personal and sensitive user data. The lawsuit claims that OpenAI shared this data with third parties without proper consent, violating privacy laws and users' rights. This constitutes a breach of obligations under applicable law intended to protect fundamental rights, specifically privacy rights. The harm is realized (not just potential), as users' intimate information was allegedly shared improperly, fulfilling the criteria for an AI Incident under violations of human rights or legal obligations.

OpenAI Sued Over ChatGPT Data Sharing With Meta, Google

2026-05-14
BeInCrypto
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to alleged violations of privacy rights through unauthorized data sharing with third parties (Meta and Google). This constitutes a breach of obligations under applicable law protecting fundamental rights (privacy), fitting the definition of an AI Incident. The harm is realized (lawsuit filed for damages and injunction), and the AI system's role is pivotal as the data originates from user interactions with ChatGPT. The event is not merely a potential risk or complementary information but a concrete legal claim of harm caused by AI system use.

ChatGPT allegedly shared users' query topics, user IDs, and email addresses with Google and Meta, new class action lawsuit claims

2026-05-14
Tech News | Startups News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) that processes sensitive user data. The alleged secret transmission of this data to third parties without consent constitutes a breach of privacy laws and users' rights. This harm is realized and ongoing as per the lawsuit's claims, meeting the criteria for an AI Incident. The involvement of AI is explicit, and the harm relates to violations of legal and fundamental rights, fitting the definition of an AI Incident rather than a hazard or complementary information.

OpenAI Hit with Class-Action Privacy Lawsuit for Sharing ChatGPT Data with Google and Meta

2026-05-14
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) and details how its use has led to alleged unauthorized sharing of sensitive user data with third parties, constituting a violation of privacy laws and users' rights. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of obligations under applicable law intended to protect fundamental rights (privacy rights). The harm is realized and ongoing, as evidenced by the lawsuit and the detailed allegations of data sharing and interception. Thus, the classification as an AI Incident is appropriate.

ChatGPT maker OpenAI sued for sharing chatbot queries with Meta, Google

2026-05-14
Cybernews
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and alleges privacy violations through data sharing with third parties, which could be considered a breach of privacy rights. However, the harm is alleged and not confirmed or legally established. The focus is on the lawsuit and potential legal reforms rather than on a concrete AI Incident causing realized harm or a credible AI Hazard indicating plausible future harm. The discussion of privacy policies, common industry practices, and potential legal reforms fits the definition of Complementary Information, as it informs about governance and societal responses to AI-related privacy issues without reporting a new AI Incident or AI Hazard.