Senior Journalist Suspended for Publishing AI-Generated Fake Quotes

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Peter Vandermeersch, a senior journalist at Mediahuis, was suspended after admitting to publishing newsletters containing AI-generated fake quotes. He relied on language models such as ChatGPT and Perplexity without proper verification, spreading misinformation and breaching journalistic standards. The incident affected outlets in Ireland and the Netherlands.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (language models such as ChatGPT, Perplexity, and Google's NotebookLM) was explicitly used in content creation. The AI's hallucinations led the journalist to publish false quotes, spreading misinformation that harms the public's right to truthful information and its trust in media. This constitutes a violation of rights and harm to communities. The harm has already occurred, as articles with fabricated quotes were published and later removed. This therefore qualifies as an AI Incident because of the direct link between AI use and realized harm.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Business; General public

Harm types
Reputational; Public interest

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

Mediahuis suspends top journalist after admission of using false AI material

2026-03-19
The Irish Times
Why's our monitor labelling this an incident or hazard?
An AI system (language models such as ChatGPT, Perplexity, and Google's NotebookLM) was explicitly used in content creation. The AI's hallucinations led the journalist to publish false quotes, spreading misinformation that harms the public's right to truthful information and its trust in media. This constitutes a violation of rights and harm to communities. The harm has already occurred, as articles with fabricated quotes were published and later removed. This therefore qualifies as an AI Incident because of the direct link between AI use and realized harm.

Mediahuis suspends senior journalist over AI-generated quotes

2026-03-20
The Guardian
Why's our monitor labelling this an incident or hazard?
An AI system (language models like ChatGPT, Perplexity, and Google's NotebookLM) was explicitly used to generate false quotes that were published without proper human oversight. This misuse directly led to harm in the form of misinformation and damage to journalistic credibility, which affects communities and violates ethical standards. The event describes realized harm caused by the AI system's outputs and the journalist's failure to verify them, fitting the definition of an AI Incident.

Mediahuis suspends former Irish boss over use of AI

2026-03-19
RTE.ie
Why's our monitor labelling this an incident or hazard?
The involvement of AI language models (ChatGPT, Perplexity, Google's NotebookLM) in generating fabricated quotes that were published and falsely attributed to individuals directly led to misinformation and a breach of journalistic integrity. This constitutes a violation of obligations intended to protect the right to truthful information and can harm communities by spreading falsehoods. The harm has already occurred, as the fabricated quotes were published and confirmed false by the quoted individuals. The suspension and public admission confirm the seriousness of the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

Mediahuis suspends Peter Vandermeersch after former Irish boss admits misuse of AI in new role

2026-03-19
Irish Independent
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) used in content generation that directly led to the publication of fabricated quotes, causing misinformation and reputational harm. The misuse of AI outputs and the lack of human oversight in verifying them harmed the community's trust in journalism and the individuals who were misquoted. This meets the criteria for an AI Incident because the AI system's use directly led to harm (misinformation and a violation of rights).

Adrian Weckler: What AI 'hallucinations' are and how they can fool us with made-up answers

2026-03-21
Irish Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (ChatGPT) that produced made-up quotes used in published articles, causing misinformation and harm to the integrity of journalism. This is direct harm caused by the AI system's outputs, fitting the definition of an AI Incident due to harm to communities and a violation of rights related to the dissemination of truthful information.

Mediahuis suspends senior journalist over AI-generated quotes in newsletter

2026-03-20
Irish Examiner
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use and misuse of AI systems (language models like ChatGPT) in content creation, leading to the publication of false information. This misuse has harmed the integrity of journalism and readers' trust, which can be considered a violation of rights and harm to communities through misinformation. Since the harm has already occurred and is directly linked to the AI system's outputs and the journalist's reliance on them without adequate oversight, this qualifies as an AI Incident rather than a hazard or complementary information. The suspension of the journalist is a response to this incident, but the main event is the AI-related harm caused by the hallucinated quotes.

Mediahuis suspends senior journalist for using fabricated quotes produced by AI

2026-03-19
TheJournal.ie
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (LLMs) whose outputs were fabricated quotes that were published as factual, causing misinformation. This constitutes a direct harm to communities by spreading false information and undermining trust in media, which aligns with harm to communities under the AI Incident definition. The journalist's reliance on AI without adequate verification and the resulting publication of fabricated content demonstrate misuse and failure in the use of AI systems. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Former NRC editor suspended for using fake AI-generated quotes

2026-03-20
DutchNews.nl
Why's our monitor labelling this an incident or hazard?
The former editor used AI language models to write newsletters containing fake quotes, and a newspaper published an article with an AI-generated photo presented as a real person. Both cases involve AI-generated false content that was disseminated to the public, causing harm through misinformation and deception. The AI systems' outputs directly led to violations of journalistic standards and harm to the community's right to accurate information, fitting the definition of an AI Incident.

GeenStijl: LOL. NRC catches "its own" Peter Vandermeersch using AI-hallucinated quotes

2026-03-19
GeenStijl.nl
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate fabricated quotes, which were published and subsequently exposed, indicating misuse of AI-generated content. This misuse has directly led to harm in the form of misinformation and breach of trust in journalism, which falls under violations of rights and harm to communities. The AI system's role is pivotal as the fabricated quotes were AI-generated. Hence, this is an AI Incident rather than a hazard or complementary information.

Former editor-in-chief Peter Vandermeersch temporarily suspended by Mediahuis: blog posts contained AI-fabricated quotes

2026-03-19
De Standaard
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used in content generation that produced false information (hallucinated quotes). This misinformation was disseminated publicly, causing harm to the integrity of journalism and potentially misleading readers. The harm is realized, not just potential, and the AI system's malfunction (hallucination) is a direct contributing factor. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under harm to communities and breach of obligations intended to protect fundamental rights (here, journalistic ethics and transparency).

Peter Vandermeersch suspended after use of fabricated AI quotes

2026-03-19
Nederlands Dagblad
Why's our monitor labelling this an incident or hazard?
The use of AI language models to generate fabricated quotes constitutes an AI system malfunction leading to harm in the form of misinformation and reputational damage. The event directly involves AI use and its erroneous outputs causing harm, fitting the definition of an AI Incident. Although the harm is non-physical, it affects rights related to truthful information and journalistic standards, which aligns with violations of rights under the framework.

Peter Vandermeersch suspended after use of fabricated AI quotes

2026-03-19
Welingelichte Kringen
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use and malfunction of AI systems (language models) that generated fabricated quotes, which were then published and caused misinformation. This misinformation harms the community's access to truthful information and undermines journalistic integrity, fitting the definition of harm to communities and violation of rights. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs and the resulting consequences (suspension, public trust issues).

Mediahuis suspends Vandermeersch after use of AI with fabricated quotes in newsletters

2026-03-19
FOK!
Why's our monitor labelling this an incident or hazard?
An AI system (language models) was explicitly used to generate content, including fabricated quotes, which were published and later acknowledged as incorrect. This misuse of AI led to harm in the form of misinformation and reputational damage, fulfilling the criteria for harm to communities and violation of rights (ethical standards in journalism). The event is not merely a potential risk but a realized harm caused by AI use, thus it is an AI Incident.

Peter Vandermeersch suspended after use of fabricated AI quotes

2026-03-19
RTL.nl
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI language models (AI systems) in content creation. The AI-generated fabricated quotes caused misinformation and a breach of journalistic integrity, which is a violation of ethical standards and can be considered harm to communities or rights (e.g., right to truthful information). The journalist's suspension is a direct consequence of this harm. Thus, the AI system's use directly led to harm, qualifying this as an AI Incident rather than a hazard or complementary information.