Major Publications Retract AI-Generated Articles by Fake Journalist


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

At least six reputable outlets, including Wired and Business Insider, published and later retracted articles attributed to 'Margaux Blanchard,' a fictitious AI-generated persona. The episode spread misinformation and eroded trust in journalism, underscoring the risk of AI-generated content being passed off as authentic reporting.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes multiple news organizations publishing articles generated by an AI system under a fake author identity, which were later removed after the AI-generated nature was discovered. The AI system's outputs caused harm by spreading false information and misleading readers, which fits the definition of an AI Incident due to harm to communities and violation of journalistic and ethical standards. The involvement of AI in generating fabricated content that was published and then retracted confirms direct harm caused by AI use. Therefore, this event qualifies as an AI Incident.[AI generated]
AI principles
Accountability · Transparency & explainability · Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Business · General public

Harm types
Reputational · Public interest

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


Wired and Business Insider remove articles by AI-generated 'freelancer'

2025-08-21
The Guardian
Why's our monitor labelling this an incident or hazard?
The event describes multiple news organizations publishing articles generated by an AI system under a fake author identity, which were later removed after the AI-generated nature was discovered. The AI system's outputs caused harm by spreading false information and misleading readers, which fits the definition of an AI Incident due to harm to communities and violation of journalistic and ethical standards. The involvement of AI in generating fabricated content that was published and then retracted confirms direct harm caused by AI use. Therefore, this event qualifies as an AI Incident.

Publications Including Wired, Business Insider Take Down Apparently Fake Articles by AI 'Freelance Writer'

2025-08-21
Yahoo
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (likely a large language model like ChatGPT) to generate fabricated articles that were published in multiple news outlets. The AI-generated content included fake quotes and unverifiable sources, leading to misinformation being spread to the public. This is a clear case where the AI system's use directly caused harm to communities by disseminating false information, which fits the definition of an AI Incident. The articles were removed after the deception was uncovered, but the harm had already occurred. Therefore, this is not merely a hazard or complementary information but an AI Incident.

Wired and Business Insider remove articles by AI-generated 'freelancer'

2025-08-21
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to generate fabricated news articles that were published by reputable news outlets. The AI-generated content led to misinformation being spread, which harms the public's trust and the integrity of information, thus harming communities. The harm has already occurred as the articles were published and later removed upon discovery. This fits the definition of an AI Incident because the AI system's use directly led to harm (misinformation and breach of editorial standards).

Wired and Business Insider Accidentally Published AI-Generated Slop Articles by Seemingly Fake Journalist

2025-08-21
Futurism
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating fabricated articles that were published and later retracted due to their AI origin and falsehoods. The harm is realized as misinformation spread and trust in media is damaged, which fits the definition of an AI Incident under harm to communities and violation of rights (right to truthful information). The AI system's use was central to the incident, as the articles were AI-generated and passed off as human-written, leading to the harm. This is not merely a potential hazard or complementary information but a concrete incident of AI misuse causing harm.

'Wired' And 'Business Insider' Take Down AI-Generated Articles

2025-08-22
MediaPost
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to generate articles that were published under a false identity, leading to retractions and editorial corrections. While this involves AI misuse and raises concerns about misinformation and editorial standards, the article does not report any direct or indirect harm such as health injury, rights violations, or community harm. The focus is on the editorial process and the response to AI-generated content, which fits the definition of Complementary Information as it provides context and updates on managing AI-related challenges rather than reporting a new AI Incident or Hazard.

Wired and Business Insider remove articles by AI-generated 'freelancer'

2025-08-21
AOL.com
Why's our monitor labelling this an incident or hazard?
The event describes multiple news organizations publishing articles written by an AI-generated 'freelancer' that were later found to be fabricated and removed. The AI system was used to produce the false content, which directly led to harm in the form of misinformation and deception of readers. This meets the criteria for an AI Incident because the AI system's use directly caused harm to communities through the spread of false narratives. The removal of articles and editorial responses confirm the harm was realized, not just potential.

The case of Margaux Blanchard: Publishers fall for AI-written articles

2025-08-22
Mumbrella
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to generate fabricated articles under a fake author identity, which were published by reputable media outlets. This caused misinformation and deception, harming the trustworthiness of journalism and misleading the public, which is a form of harm to communities. The AI system's outputs directly caused this harm. The event is not merely a potential risk or a complementary update but a realized harm caused by AI-generated content. Hence, it fits the definition of an AI Incident.

Wired, Business Insider delete phony articles allegedly written by AI...

2025-08-22
New York Post
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system to generate fabricated news articles that were published by multiple outlets, causing misinformation and deception. The AI system's outputs directly led to harm by spreading false narratives and fabricated sources, which impacts communities and violates ethical and legal standards. The publications' removal of the articles and enhanced verification protocols are responses to this harm. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated fabricated content.

Big-name publications red-faced after publishing AI-made fake news

2025-08-25
dpa International
Why's our monitor labelling this an incident or hazard?
The articles attributed to an AI-generated 'freelancer' contained fabricated content, such as made-up towns and businesses, and were published before having to be removed. This demonstrates that the AI system's use directly led to harm in the form of misinformation dissemination, which affects communities and public trust. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated fake news.

An AI scandal: newspapers delete articles after discovering their content was fabricated

2025-08-31
صحيفة عكاظ
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fabricated content that was published as factual news, which constitutes harm to communities by spreading misinformation. The AI's use directly led to the publication of false articles, which were later retracted. This fits the definition of an AI Incident because the AI system's use directly caused harm through misinformation dissemination, a form of harm to communities and violation of rights to accurate information.

Sharp criticism of major publications after they ran fake material produced by AI

2025-08-31
سكاي نيوز عربية
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fabricated journalistic content that was published and later retracted due to falsehoods. The AI system's outputs directly led to the dissemination of misinformation, which harms communities by misleading the public and damaging trust in media institutions. The harm is realized, not just potential, as the articles were published and then removed after detection of fabrication. Hence, it meets the criteria for an AI Incident as the AI system's use directly led to harm to communities through misinformation.

Global newspapers publish fake articles produced by AI

2025-08-31
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake news articles containing fabricated details, which were published by reputable media outlets before being retracted. The AI system's outputs directly caused misinformation to be disseminated, harming the public and media trust, which fits the definition of an AI Incident under harm to communities. The harm is realized, not just potential, as the articles were published and later removed due to AI-generated fabrications.

A media scandal: AI lures major newspapers into the trap of bogus articles

2025-08-31
24.ae
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate journalistic content that contained fabricated and false information, which was published by reputable media outlets. This led to reputational harm and misinformation, which qualifies as harm to communities. The AI system's use directly caused this harm, making this an AI Incident. The article details realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information or unrelated news because the core issue is the harm caused by AI-generated false content.

'Business Insider' deletes articles written with AI

2025-08-31
موقع عرب 48
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate journalistic articles that contained fabricated and false information, which were published and then had to be deleted after the deception was uncovered. The AI system's outputs directly led to misinformation being disseminated, harming the credibility of the news outlets and misleading the public. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities through misinformation. The deletion and apologies are responses to the incident but do not negate the fact that harm occurred.

'They contained glaring errors': news organizations delete reports after uncovering the 'scandal of their being written with AI'

2025-08-31
قناه السومرية العراقية
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for generating journalistic content, which directly led to the publication of fabricated and erroneous information. This misinformation harms the credibility of the media institutions and misleads the public, constituting harm to communities. The deletion of articles is a response to the harm already caused. Hence, the AI system's use has directly led to harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Criticism of major publications that ran fake material produced by AI

2025-08-31
albiladpress.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating false content that was published and later retracted due to fabrication and misinformation. The harm is realized as readers were exposed to false information, which can damage public trust and misinform communities. The AI system's role is pivotal as it directly produced the fabricated content. Hence, this qualifies as an AI Incident due to the direct link between AI-generated content and harm to communities and rights violations.

A setback for journalist-downsizing policies: 'AI' embarrasses global magazines after they laid off their editors

2025-08-31
كتابات
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to write fabricated articles containing false information that was published by reputable magazines and later had to be removed. This constitutes direct harm to communities through misinformation and breaches journalistic integrity, which is a violation of rights related to truthful information. The AI system's use in generating these false articles is central to the harm caused, meeting the criteria for an AI Incident.

A setback for newsrooms: AI embarrasses global magazines after they laid off their editors

2025-08-31
Shafaq News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI was used to write articles containing fabricated details, which were published by reputable magazines and later had to be deleted after the falsehoods were discovered. This shows direct use of AI leading to harm in the form of misinformation and damage to the credibility of news organizations, which affects communities and public trust. Therefore, this qualifies as an AI Incident under the definition of harm to communities caused by AI-generated misinformation that has occurred.

A magazine deletes AI-written articles after laying off staff

2025-08-31
أخبار العصر
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate content that directly led to the dissemination of false information, misleading readers and causing reputational harm to the publications. This constitutes a violation of trust and harms communities by spreading misinformation. The event involves the use and malfunction (inaccurate outputs) of AI systems, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as false articles were published and then retracted.

Global magazines delete articles written by AI

2025-09-01
almodon
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate journalistic content that contained fabricated and false information, which was published and then deleted after discovery. The AI system's use directly led to harm in the form of misinformation and deception affecting readers and the credibility of the publications. This fits the definition of an AI Incident because the AI's outputs caused harm to communities by spreading falsehoods and undermining trust, even if physical harm did not occur. The deletion and apology are responses to the incident, but the harm had already materialized.

A fake-article scandal throws major media organizations into disarray over AI

2025-09-01
akhbarona.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to generate fabricated articles that were published by reputable media organizations. The AI-generated misinformation caused harm by misleading readers and damaging the credibility of these institutions, which constitutes harm to communities and a violation of trust. The harm is realized, not just potential, as articles had to be retracted and apologies issued. This fits the definition of an AI Incident where the use of AI systems directly led to significant harm.