Ars Technica Fires Reporter After AI-Generated Fabricated Quotes Published


Ars Technica fired senior AI reporter Benj Edwards after an article he co-authored included fabricated quotes generated by an AI tool and attributed to a real person. The incident led to the article's retraction and a public apology, and raised concerns about accountability and editorial standards in AI-assisted journalism.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes a clear case where an AI system (an experimental Claude Code-based AI tool and ChatGPT) was used to generate content, producing fabricated quotes attributed to a real person. This caused reputational harm and a breach of editorial standards, leading to the article's retraction and the reporter's termination. The AI system's malfunction (hallucination) directly contributed to the harm. Although the harm is non-physical, reputational and ethical harms fall under violations of rights and harm to communities under the framework. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers; Other

Harm types
Reputational; Economic/Property

Severity
AI incident

Business function:
Other

AI system task:
Content generation

In other databases

Articles about this incident or hazard


Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes

2026-03-03
Futurism

Ars Technica Fires Reporter Over AI-Generated Quotes

2026-03-03
TheWrap
Why's our monitor labelling this an incident or hazard?
The AI system (a language model) was used in the development of the article and produced fabricated quotes (hallucinations). This led to misinformation being published, which is a harm to the community's trust and to information integrity. Since the harm has already occurred (publication of false information and subsequent retraction), this qualifies as an AI Incident. The harm is indirect but clearly linked to the AI system's malfunction and the reporter's reliance on it without proper verification. The event does not fit the criteria for AI Hazard or Complementary Information, as the harm is realized and significant. It is not unrelated because AI was central to the incident.

When AI Tools Yield Bad Journalism, Who Is Held Accountable?

2026-03-04
Jezebel
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI tools in journalism that generated fabricated quotes and false information, which were published and later retracted. This caused harm to the credibility of the media outlet and misinformation to the public, which is a harm to communities. The AI system's malfunction (hallucination) directly led to this harm. The firing of the reporter and the retraction indicate that the harm has materialized. Hence, this event meets the criteria for an AI Incident.