Canadian Privacy Authorities Find OpenAI's ChatGPT Violated Privacy Laws


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Canadian federal and provincial privacy commissioners found that OpenAI violated privacy laws by collecting and using Canadians' personal data without valid consent during ChatGPT's development. The investigation revealed over-collection, lack of transparency, and obstacles for individuals to access or correct their data. OpenAI has since taken remedial steps.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the development and use of an AI system (ChatGPT) and highlights violations of privacy laws, which are legal protections related to fundamental rights. The collection and use of sensitive personal data without proper consent or notification constitutes a breach of obligations under applicable law protecting fundamental rights. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations directly linked to the AI system's development and use.[AI generated]
AI principles
Privacy & data governance
Transparency & explainability

Industries
Consumer services

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Content generation
Interaction support/chatbots


Articles about this incident or hazard


Canadians can trust ChatGPT to handle personal data, federal privacy watchdog says after review

2026-05-06
Global News
Why's our monitor labelling this an incident or hazard?
The article focuses on the privacy commissioner's assessment and ongoing improvements to ChatGPT's data handling, which is a governance and societal response to prior concerns. There is no indication of new or ongoing harm caused by the AI system, nor a plausible future harm event described. Therefore, this is Complementary Information providing context and updates on AI system governance and compliance.

OpenAI violated Canadian privacy laws in training ChatGPT, probe finds

2026-05-06
The Globe and Mail
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (ChatGPT) and highlights violations of privacy laws, which are legal protections related to fundamental rights. The collection and use of sensitive personal data without proper consent or notification constitutes a breach of obligations under applicable law protecting fundamental rights. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations directly linked to the AI system's development and use.

OpenAI violated Canadian privacy laws in developing first ChatGPT model, probe finds

2026-05-06
The Globe and Mail
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) whose development and use led to violations of privacy laws, a breach of legal obligations protecting fundamental rights. The investigation found that OpenAI collected personal data without proper consent and safeguards, which directly harmed individuals' privacy rights. The fact that OpenAI has since made changes does not negate the occurrence of harm. Therefore, this is an AI Incident due to realized harm from the AI system's development and use.

Report on OpenAI expected from federal, provincial privacy watchdogs today

2026-05-06
The Star
Why's our monitor labelling this an incident or hazard?
The article mentions an investigation into OpenAI's handling of personal information, which involves the use of AI systems (ChatGPT). However, the article does not describe any realized harm or incident resulting from this investigation; rather, it announces the upcoming release of findings. Therefore, this is not an AI Incident or AI Hazard but rather a governance and regulatory update providing complementary information about AI oversight and privacy concerns.

ChatGPT collected 'vast amounts' of Canadians' data without consent, privacy commissioners say

2026-05-06
The Star
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose development and use included collecting personal data without valid consent, violating privacy laws protecting fundamental rights. This constitutes a breach of obligations under applicable law intended to protect fundamental and privacy rights, fitting the definition of an AI Incident. The investigation confirms that harm has occurred due to unauthorized data use, and the AI system's role is pivotal. The company's remedial actions do not negate the fact that the incident took place.

OpenAI did not respect Canadian privacy laws in developing ChatGPT, probe finds

2026-05-06
Pulse24.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its development and use. The privacy violations constitute a breach of legal obligations protecting fundamental rights, which fits the definition of harm under AI Incident. However, the article reports on the outcome of a completed investigation and the steps OpenAI is taking to remediate past issues, rather than describing a new or ongoing incident causing harm. The focus is on regulatory findings, commitments, and improvements, which aligns with Complementary Information as it updates understanding and governance responses related to AI harms. There is no new direct or indirect harm currently occurring or a plausible future harm described that would justify classification as an AI Incident or AI Hazard.

OpenAI did not respect Canadian privacy laws in developing ChatGPT, probe finds

2026-05-06
Castanet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose development and use led to violations of privacy laws, which are legal protections of fundamental rights. This constitutes a breach of obligations under applicable law intended to protect fundamental rights, fitting the definition of an AI Incident. The harms are realized, as the investigation found that personal data was collected and used improperly, exposing individuals to risks such as breaches and discrimination. The article does not merely discuss potential risks or future harms but reports on an official finding of non-compliance and harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Canadian officials claim OpenAI violated federal and provincial privacy laws

2026-05-06
engadget
Why's our monitor labelling this an incident or hazard?
The event involves OpenAI's AI systems used for training large language models, which is an AI system by definition. The investigation found that OpenAI collected and used personal data without proper consent, violating Canadian privacy laws, which is a breach of legal obligations protecting fundamental rights. Additionally, the failure to escalate flagged violent content to law enforcement indirectly contributed to a mass shooting, representing harm to persons. These factors meet the criteria for an AI Incident, as the AI system's development and use directly and indirectly led to violations of rights and harm. The company's commitments to remediate do not negate the occurrence of harm. Thus, the event is classified as an AI Incident.

Alberta privacy watchdog aims for tougher rules in wake of OpenAI violations

2026-05-06
Edmonton Journal
Why's our monitor labelling this an incident or hazard?
The article describes a concluded federal investigation that found OpenAI's AI system used Canadians' personal data without valid consent, violating privacy laws. This is a direct violation of legal obligations protecting individual rights, thus constituting an AI Incident. The involvement of the AI system in processing personal data without consent directly led to a breach of rights. The call for tougher regulation is a complementary response but does not negate the incident classification.

Canadian probe finds ChatGPT maker OpenAI violated privacy laws

2026-05-06
Daily Sabah
Why's our monitor labelling this an incident or hazard?
The investigation explicitly links the development and use of the AI system ChatGPT to violations of privacy laws, which are legal protections of fundamental rights. The harms include over-collection of personal data without valid consent, lack of transparency, and obstacles for individuals to access or correct their data. These are direct harms to individuals' privacy rights caused by the AI system's development and use. The event also mentions OpenAI's remedial actions but confirms the complaint is well-founded, indicating realized harm. Hence, this is an AI Incident due to violations of human rights and legal obligations caused by the AI system.

Report on OpenAI expected from federal, provincial privacy watchdogs today

2026-05-06
Winnipeg Free Press
Why's our monitor labelling this an incident or hazard?
The investigation concerns the use of an AI system (OpenAI's ChatGPT) and potential violations of privacy rights, which fall under violations of human rights or legal obligations protecting fundamental rights. However, the article only mentions the upcoming report and investigation findings without detailing any confirmed harm or incidents. Therefore, this is a complementary information event providing an update on regulatory scrutiny and oversight related to AI privacy concerns, rather than reporting a new AI Incident or AI Hazard.

Report on OpenAI expected from federal, provincial privacy watchdogs today

2026-05-06
thespec.com
Why's our monitor labelling this an incident or hazard?
The article centers on a privacy investigation and an upcoming report by privacy authorities regarding OpenAI's AI system ChatGPT. There is no explicit mention of realized harm or incidents caused by the AI system, only that a complaint was under investigation. This fits the definition of Complementary Information as it provides context and updates on governance and oversight related to AI, rather than describing a new AI Incident or AI Hazard.

ChatGPT collected 'vast amounts' of Canadian data without consent

2026-05-06
MobileSyrup
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) whose development and use led to violations of privacy laws, a breach of fundamental rights. The investigation found that ChatGPT collected personal data without valid consent and transparency, which is a direct harm to individuals' privacy rights. The AI system's role in these violations is clear and central. Although OpenAI has taken remedial actions, the initial harm has already occurred. The mention of a related lawsuit concerning a mass shooting linked to ChatGPT's use further underscores the AI system's involvement in significant harm. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI did not respect Canadian privacy laws in developing ChatGPT, probe finds

2026-05-06
Sudbury.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose development and training involved data collection practices that breached privacy laws, constituting a violation of legal obligations protecting fundamental rights. This fits the definition of an AI Incident because the AI system's development and use directly led to a breach of applicable law intended to protect fundamental rights (privacy).

Privacy commissioner to release results of investigation into OpenAI's ChatGPT

2026-05-06
Yahoo
Why's our monitor labelling this an incident or hazard?
The article centers on a privacy investigation and regulatory scrutiny of ChatGPT's compliance with privacy laws, which is a governance and societal response to AI use. There is no indication that the AI system's development, use, or malfunction has directly or indirectly caused harm as defined by the framework. The mention of the shooter’s exchanges with ChatGPT and the company's decision not to alert police is background context but does not describe a realized harm or incident caused by the AI system. Therefore, this event is best classified as Complementary Information, as it provides updates on regulatory and governance responses related to AI privacy concerns.

Alberta privacy watchdog aims for tougher rules in wake of OpenAI violations

2026-05-06
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose development and use involved privacy violations, constituting a breach of legal obligations protecting personal information, which is a form of harm under the framework. However, the article reports on the investigation's findings and regulatory recommendations after the fact, with no new harm occurring at the time of reporting and no ongoing incident described. The main focus is on the regulatory and governance response to past violations and the push for stronger rules, which fits the definition of Complementary Information rather than a new AI Incident or AI Hazard.

OpenAI did not respect Canadian privacy laws in developing ChatGPT,...

2026-05-06
National Newswatch
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) and its development process (data collection for training). The investigation found violations of privacy laws, which constitute breaches of obligations intended to protect fundamental rights. Since the AI system's development led to these legal violations, this qualifies as an AI Incident under the framework. There is direct harm in terms of rights violations, even if physical harm is not involved.

Canadian privacy commissioners call for updated laws for the AI era

2026-05-06
The Logic
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (OpenAI's models) and concerns about violations of privacy rights due to data scraping without consent, which constitutes a breach of obligations under applicable law protecting fundamental rights. Since the investigation found that this scraping occurred and led to privacy violations, this qualifies as an AI Incident. The article also discusses responses and potential future legal changes, but the primary focus is on the realized harm from AI system development/use (data scraping).

OpenAI did not respect Canadian privacy laws in developing ChatGPT, probe finds

2026-05-07
National Observer
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) and documents direct violations of privacy laws, which are legal frameworks protecting fundamental rights. The collection and use of sensitive personal data without proper consent or notification, as well as inadequate mechanisms for data correction and deletion, represent breaches of obligations under applicable law intended to protect fundamental rights. The investigation confirms that these harms have occurred, and OpenAI's prior models were trained in ways contravening privacy laws. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations caused by the AI system's development and use.

Alberta privacy watchdog aims for tougher rules in wake of OpenAI violations

2026-05-06
Edmonton Sun
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT, a large language model) whose development and use have led to violations of privacy laws, which are breaches of legal obligations protecting fundamental rights. This fits the definition of an AI Incident under category (c) for violations of human rights or legal obligations. Although the article does not detail specific harms like data breaches or misuse consequences, the failure to meet transparency standards and inability for users to control their data constitute a breach of rights. Therefore, this is classified as an AI Incident.

Federal and provincial privacy watchdogs say OpenAI violated Canadian privacy laws

2026-05-06
BetaKit
Why's our monitor labelling this an incident or hazard?
OpenAI's use of scraped personal data without consent to train its AI models directly breaches Canadian privacy laws, which protect individuals' fundamental rights to privacy. The involvement of an AI system (ChatGPT's training) is explicit, and the harm is a violation of legal obligations and human rights related to data privacy. Although no physical harm is described, the breach of privacy rights is a recognized form of harm under the framework. Therefore, this event qualifies as an AI Incident.

Report on OpenAI expected from federal, provincial privacy watchdogs today

2026-05-06
CHAT News Today
Why's our monitor labelling this an incident or hazard?
The report concerns the use of an AI system (ChatGPT) and its potential violation of privacy rights through data handling practices. Since the investigation is about past or ongoing use of the AI system that may have led to violations of privacy laws, this relates to a breach of obligations under applicable law protecting fundamental rights. The event indicates a completed investigation and report release, implying the issue has materialized rather than being a future risk. Therefore, it qualifies as an AI Incident due to the AI system's use leading to potential or actual violations of privacy rights.

Report on OpenAI expected from federal, provincial privacy watchdogs today

2026-05-06
Medicine Hat News
Why's our monitor labelling this an incident or hazard?
The article discusses an ongoing investigation and an upcoming report by privacy authorities regarding OpenAI's handling of personal data. While it involves AI systems and potential privacy harms, the article does not indicate that harm has already occurred or that the AI system's use has directly or indirectly led to realized harm. Instead, it focuses on the investigation and regulatory scrutiny, which are governance and societal responses to potential or alleged issues. Therefore, this event is best classified as Complementary Information, as it provides context and updates on regulatory oversight related to AI privacy concerns without reporting a new AI Incident or AI Hazard.

Canadian privacy czars call out 'several concerns' with how OpenAI trained ChatGPT

2026-05-06
Radio Canada
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) whose development and use led to violations of Canadian privacy laws, a breach of legal obligations protecting fundamental rights. The investigation confirms that personal data was collected and used without consent, constituting realized harm. The involvement of OpenAI's AI system in causing these harms is direct and central. Although OpenAI has taken remedial steps, the primary event is the privacy violation itself, which is an AI Incident under the framework. The mention of related lawsuits and calls for regulation further contextualize the harm but do not change the classification.

Report on OpenAI expected from federal, provincial privacy watchdogs

2026-05-06
Breaking News, Latest News, US and Canada News, World News, Videos
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and concerns about privacy, which is a human rights issue. However, it only mentions an investigation and an upcoming report without indicating that any harm has occurred yet. Therefore, it does not qualify as an AI Incident. Instead, it provides information about governance and regulatory responses to AI-related privacy concerns, which fits the definition of Complementary Information.

Privacy protection | OpenAI failed to meet its obligations, Canadian commissioners say

2026-05-06
La Presse.ca
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose development and use led to violations of privacy rights, a breach of applicable law protecting fundamental rights. The privacy breaches and risks of discrimination constitute harm to individuals' rights, fulfilling the criteria for an AI Incident. The event is not merely a future risk or a general update but documents realized harm and legal non-compliance. Therefore, it qualifies as an AI Incident.

OpenAI broke laws with ChatGPT

2026-05-06
Le Journal de Montreal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that OpenAI's ChatGPT, an AI system, has been found to have violated privacy laws through excessive data collection. This is a direct legal violation related to the use of an AI system, fitting the definition of an AI Incident under violations of human rights or breach of legal obligations protecting fundamental rights (privacy).

OpenAI broke Canadian laws on users' privacy

2026-05-06
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
OpenAI's ChatGPT is an AI system that collects and uses personal data for training its generative AI models. The privacy commissioners found that this data collection was excessive and violated Canadian privacy laws, which are legal frameworks protecting fundamental rights. This is a direct legal violation linked to the AI system's development and use, meeting the criteria for an AI Incident under violations of human rights or breach of legal obligations. The event reports realized harm (legal violation) rather than potential harm, so it is not a hazard or complementary information.

ChatGPT violated several Canadian laws on the protection of...

2026-05-06
Le Devoir
Why's our monitor labelling this an incident or hazard?
The article explicitly states that ChatGPT, an AI system, was developed and deployed in ways that violated Canadian and provincial privacy laws, including failure to obtain proper consent and inadequate data handling procedures. These violations represent breaches of fundamental rights protected by law, fulfilling the criteria for an AI Incident under the framework. The harms are realized, not just potential, as the investigation confirms legal breaches and risks of discrimination and misinformation caused by the AI's outputs. The involvement of the AI system in these harms is direct and central to the incident.

ChatGPT violated several Canadian privacy laws

2026-05-07
Le Devoir
Why's our monitor labelling this an incident or hazard?
The event explicitly involves ChatGPT, an AI system, whose development and use have directly led to violations of privacy laws in Canada and Quebec, which are legal obligations protecting fundamental rights. The investigation found that ChatGPT collected excessive personal information without clear consent and failed to warn users about factual inaccuracies, leading to risks of harm such as discrimination. These constitute realized harms under the definition of AI Incident (violations of human rights and legal obligations). The remedial actions by OpenAI are responses to the incident, not the primary event. Hence, this is an AI Incident.

OpenAI did not comply with personal data law

2026-05-06
Radio Canada
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) whose development and use led to violations of data protection laws, exposing individuals to privacy harms and discrimination risks. The involvement of AI in collecting and processing personal data without proper consent and safeguards directly caused legal breaches and harm to individuals' rights. Although no sanctions were imposed, the harm and legal violations have materialized, qualifying this as an AI Incident under the framework's criteria for violations of human rights and legal obligations.

OpenAI did not comply with personal data law

2026-05-06
Radio Canada
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) and documents direct violations of data protection laws, which are legal frameworks intended to protect fundamental rights, including privacy. The commissioners found that OpenAI's practices led to harms such as privacy breaches and risks of discrimination, fulfilling the criteria for an AI Incident under the OECD framework. Although no sanctions were imposed, the harm and legal violations have occurred. Therefore, this is an AI Incident due to the realized harm and breach of legal obligations related to AI system use and development.

Toward stricter AI regulation in Alberta?

2026-05-06
Radio Canada
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its use leading to violations of data privacy laws, which are a breach of legal obligations protecting fundamental rights. The investigation's conclusion that personal data was collected without valid consent indicates realized harm related to privacy rights. The discussion centers on regulatory responses to these harms. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and violations of privacy rights.

Results of the joint investigation into OpenAI's ChatGPT regarding the protection of personal information

2026-05-06
Lanauweb
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns its development and use related to personal data handling. The investigation reveals non-compliance with privacy laws, which implicates violations of fundamental rights if unaddressed. However, the article does not report actual harm or incidents caused by the AI system but focuses on regulatory findings and recommendations to improve compliance and protect privacy. This fits the definition of Complementary Information, as it details governance responses and ongoing assessment of AI impacts rather than a new AI Incident or AI Hazard.

In the news today: ChatGPT privacy, OPP funeral, Alert Ready test, Bon Cop, Bad Cop

2026-05-06
thepeterboroughexaminer.com
Why's our monitor labelling this an incident or hazard?
The article mentions an investigation and upcoming report by privacy watchdogs on OpenAI's ChatGPT regarding privacy issues, which is a governance response to AI-related concerns. There is no description of an AI incident causing harm or an AI hazard posing plausible future harm. The other news items are unrelated to AI. Hence, the article fits the definition of Complementary Information, providing supporting context on AI governance and privacy oversight without reporting a new AI Incident or AI Hazard.

In the news today: ChatGPT privacy, OPP funeral, Alert Ready test, Bon Cop, Bad Cop

2026-05-06
Lethbridge News Now
Why's our monitor labelling this an incident or hazard?
The investigation into privacy concerns involving AI technology indicates a focus on potential issues but does not describe any actual harm or incident caused by an AI system. Therefore, this is not an AI Incident or AI Hazard but rather complementary information about ongoing oversight and regulatory attention to AI privacy issues. The rest of the news items do not involve AI systems or related harms.

In the news today: ChatGPT privacy, OPP funeral, Alert Ready test, Bon Cop, Bad Cop

2026-05-06
Medicine Hat News
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by AI systems, nor does it report on any plausible future harm directly resulting from AI system development or use. Instead, it focuses on the planned release of a privacy watchdog report, which is a societal and governance response to AI privacy issues. The other news items are unrelated to AI. Therefore, the article fits the category of Complementary Information as it provides context and updates on AI governance and oversight without reporting a new AI Incident or AI Hazard.

In the news today: ChatGPT privacy, OPP funeral, Alert Ready test, Bon Cop, Bad Cop

2026-05-06
Brandon Sun
Why's our monitor labelling this an incident or hazard?
The article centers on a privacy watchdog report about OpenAI's practices, which is a societal and governance response to AI privacy issues, fitting the definition of Complementary Information. There is no description of an AI system causing direct or indirect harm, nor a plausible future harm event. The other news items do not involve AI systems or AI-related harms. Therefore, the event is best classified as Complementary Information.