
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Ars Technica fired senior AI reporter Benj Edwards after an article he co-authored included fabricated quotes that were generated by an AI tool and attributed to a real person. The incident led to the article's retraction and a public apology, and raised concerns about accountability and editorial standards in AI-assisted journalism.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event describes a clear case in which AI systems (an experimental Claude Code-based tool and ChatGPT) were used to generate content, producing fabricated quotes attributed to a real person. This caused reputational harm and a breach of editorial standards, leading to the article's retraction and the reporter's termination. The AI systems' malfunction (hallucination) directly contributed to the harm. Although the harm is non-physical, reputational and ethical harms fall under violations of rights and harm to communities under the framework. Hence, this is classified as an AI incident rather than a hazard or complementary information.[AI generated]