ChatGPT Generates Defamatory Falsehoods, Prompting GDPR Complaints


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Privacy group noyb, led by Max Schrems, has filed a complaint against OpenAI with the Norwegian Data Protection Authority after ChatGPT generated false, defamatory information about a Norwegian citizen, falsely claiming that he had murdered two of his sons and been sentenced to 21 years in prison. The incident raises serious GDPR and reputational concerns and underscores ethical and legal challenges around accuracy and accountability in AI-generated data.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves ChatGPT, an AI system, which generated false and harmful content about an individual, directly causing reputational harm and prompting a legal complaint. This fits the definition of an AI Incident, as the AI system's use has directly led to harm: a violation of personal rights and the dissemination of false information. The complaint also highlights issues of data protection and transparency, reinforcing the harm caused. The event is therefore classified as an AI Incident.[AI generated]
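
The rationale above applies what amounts to a simple decision rule: an event in which an AI system's use has directly led to realized harm is an AI Incident; credible but not-yet-realized harm is an AI Hazard; anything else is complementary information. The sketch below renders that rule in Python purely for illustration; the Event and triage names are hypothetical, and this is not the monitor's actual implementation.

    # Hypothetical sketch of the incident/hazard/complementary-information
    # triage described above; not the monitor's real code.
    from dataclasses import dataclass

    @dataclass
    class Event:
        ai_system_involved: bool  # an AI system's use is central to the event
        harm_realized: bool       # harm has already materialized
        harm_plausible: bool      # harm is credible but not yet realized

    def triage(event: Event) -> str:
        if not event.ai_system_involved:
            return "Complementary information"
        if event.harm_realized:
            return "AI incident"
        if event.harm_plausible:
            return "AI hazard"
        return "Complementary information"

    # The defamation case above: AI system involved, harm realized.
    print(triage(Event(ai_system_involved=True, harm_realized=True,
                       harm_plausible=False)))  # -> "AI incident"
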
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Consumer services; Government, security, and defence

Affected stakeholders
General public

Harm types
Reputational; Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard


"ChatGPT mi accusa di aver ucciso i miei figli, dice che sono stato condannato a 21 anni di carcere ma è tutto falso": la denuncia choc contro OpenAI - Il Fatto Quotidiano

2025-03-24
Il Fatto Quotidiano
Why's our monitor labelling this an incident or hazard?
The event involves ChatGPT, an AI system, which generated false and harmful content about an individual, directly causing reputational harm and prompting a legal complaint. This fits the definition of an AI Incident, as the AI system's use has directly led to harm: a violation of personal rights and the dissemination of false information. The complaint also highlights issues of data protection and transparency, reinforcing the harm caused. The event is therefore classified as an AI Incident.

"ChatGPT ha detto che ho ucciso i miei figli": la sorpresa di Arve davanti allo schermo

2025-03-21
Stile e Trend Fanpage
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) generating false and harmful content about a real person, which has caused reputational harm and prompted a legal complaint. The AI's inaccurate output directly harmed the individual's rights and privacy, fitting the definition of an AI Incident as a violation of human rights and a breach of legal protections. The complaint and the harm are realized, not merely potential, so this is neither a hazard nor complementary information. The event is therefore classified as an AI Incident.

He is innocent, but according to ChatGPT he killed his two sons. The story of the man who brought a defamation complaint against OpenAI

2025-03-21
Open
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) produced false information that directly harmed the individual's reputation, a violation of personal rights. The harm has already occurred, as the individual has taken legal action against OpenAI for defamation. This therefore qualifies as an AI Incident: the AI's use has directly led to harm in the form of a violation of rights.

ChatGPT accused of defamation in Europe

2025-03-20
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (ChatGPT) whose use has directly led to harm in the form of defamation and the violation of personal data rights. The harm is realized, not merely potential: the false information was generated and caused reputational damage, prompting legal action. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of fundamental rights and legal obligations, with significant consequences including potential fines and regulatory impact.

Complaint from a privacy group after ChatGPT invented a "defamatory" story about a child murder

2025-03-20
euractiv.it
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI language model, generated a false story about a user involving a serious crime, mixing real personal data with fabricated content. This constitutes a violation of personal rights and privacy, which is a harm under the framework. The complaint to the data protection authority and the discussion of GDPR non-compliance further support that the AI system's outputs have caused harm. The event is not merely a potential risk but an actual incident of harm caused by the AI's use, thus classifying it as an AI Incident.

ChatGPT hallucinations: noyb files a complaint against OpenAI

2025-03-20
Punto Informatico
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI system, produced a false and defamatory statement about a Norwegian citizen, which is a direct harm to the individual's privacy and reputation, violating GDPR. The complaint filed by noyb highlights this harm caused by the AI's hallucination. Since the harm has already occurred and is linked directly to the AI system's output, this qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT unjustly accused a man of killing his own children

2025-03-21
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use directly caused harm to an individual by falsely accusing him of filicide, which is defamatory and damaging to his reputation. This is a clear example of an AI Incident because the AI's hallucination led to actual harm (defamation and violation of data protection laws). The legal complaint and the update to the AI system are responses to this incident but do not negate the fact that harm occurred. Therefore, this is classified as an AI Incident.

Many users sue ChatGPT over wrong answers, even false accusations of murder

2025-03-21
Vietnam+ (VietnamPlus)
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (ChatGPT) generating false and defamatory content about real individuals, which has led to reputational harm and legal complaints. This fits the definition of an AI Incident because the AI's use has directly led to violations of rights and harm to persons. The harm is realized, not merely potential, as complaints and legal actions are underway. The AI system's malfunction in generating false information is central to the event.

OpenAI faces a complaint over false information on ChatGPT

2025-03-21
bnews.vn
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI system, has produced false and defamatory content about an individual, which is a direct violation of data protection regulations requiring accuracy of personal data. The complaint to the Norwegian Data Protection Authority and the involvement of the privacy advocacy group Noyb confirm that harm has occurred in the form of misinformation and reputational damage. The AI system's role in generating this misinformation is pivotal, fulfilling the criteria for an AI Incident involving violations of rights and harm to individuals.

ChatGPT 'slanders' a man by claiming he killed his two sons - BBC News Tiếng Việt

2025-03-22
BBC
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) generating false information about a person, which directly harmed his reputation and caused distress. The harm is realized and significant, involving violation of personal rights and defamation. The AI system's hallucination is the direct cause of this harm. The complaint to data protection authorities and the discussion of the AI's role in producing false data confirm the AI system's involvement in causing harm. Hence, this is an AI Incident rather than a hazard or complementary information.

ChatGPT fabricates information; OpenAI faces a privacy complaint

2025-03-20
cafef
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) that generated false and damaging personal information about a person, causing harm to their reputation and violating data protection laws (GDPR). This is a direct harm linked to the AI system's outputs and its failure to provide mechanisms for correction, thus breaching legal obligations protecting fundamental rights. The complaint and regulatory actions indicate realized harm, not just potential risk. Therefore, this qualifies as an AI Incident due to violation of rights and legal obligations caused by the AI system's use.

ChatGPT sued for spreading false information

2025-03-23
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) that generated false content about a person, a malfunction of the AI system whose outputs harmed the individual's reputation and potentially other rights. The harm is realized, as the individual has filed a complaint citing the negative impact on his life. This therefore qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Heavy ChatGPT use may leave users feeling lonely

2025-03-24
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use by people, with research indicating that its use can be associated with increased loneliness and social isolation in some users. This constitutes a potential psychological harm linked to AI use. However, the article does not describe a concrete event of harm occurring to specific individuals or groups, but rather a study revealing possible negative effects. Therefore, this fits best as Complementary Information, as it provides important contextual and research insights into AI's societal impacts without reporting a discrete AI Incident or an imminent AI Hazard.

AI "âm chiếm" trường học: Nhiều học sinh ngày càng lười tư duy bài tập về nhà

2025-03-24
Kenh14.vn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT, Claude AI, Gemini) being used by students to complete homework and study tasks. The use of AI has led to students becoming overly reliant on AI-generated answers, resulting in reduced critical thinking and creativity, which is a form of harm to individuals (students) and communities (educational community). Additionally, concerns about uncredited AI-generated content raise intellectual property rights issues. These harms are realized and directly linked to AI use, fitting the definition of an AI Incident. Although no physical injury or infrastructure disruption is involved, the cognitive and educational harms and rights violations are significant and clearly articulated.

Heavy ChatGPT use may lead to feelings of loneliness

2025-03-24
VietNamNet News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose frequent use has been shown through research to directly contribute to increased feelings of loneliness and social isolation, which are harms to health. The article describes actual harm occurring as a result of the AI system's use, not just potential harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.