CNET Downgraded by Wikipedia After AI-Generated Content Scandal

CNET published dozens of AI-generated articles containing errors and plagiarism, drawing widespread criticism and prompting a downgrade of its reliability rating on Wikipedia. The incident, which resulted in reputational harm, misinformation, and a loss of trust in the publication, highlights the risks of using generative AI in journalism.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate content containing errors and plagiarism, which violates intellectual property rights and undermines the reliability of information sources, harming communities that rely on accurate information. Since the AI-generated content caused realized harm (misinformation, plagiarism), this qualifies as an AI Incident under violations of intellectual property rights and harm to communities.[AI generated]
AI principles
Accountability; Transparency & explainability; Robustness & digital security; Safety; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Business; General public

Harm types
Reputational; Public interest

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

Wikipedia no longer considers CNET a "generally reliable" source after "AI" scandal

2024-03-01
OSNews

AI-generated articles have no place on the web. Wikipedia and Google consider complacent publications generally unreliable in seismic shift.

2024-03-01
Windows Central
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate articles that contained serious grammar issues, plagiarism, and inaccurate information. This directly harmed the publication's reputation and cost it its "generally reliable" rating on Wikipedia, which constitutes harm to communities and a violation of the right to access reliable information. The AI system's outputs caused these harms, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's use is central to the incident.

CNET's AI-generated news experiment leads to Wikipedia downgrading its reliability rating

2024-03-01
TechSpot
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to produce news articles that contained errors and plagiarized content, which led to a downgrade of CNET's reliability rating on Wikipedia. This demonstrates direct harm caused by the AI system's outputs, including misinformation and intellectual property violations. The harm is realized and ongoing, in the form of reputational damage and the spread of misinformation. Therefore, this event meets the criteria for an AI Incident due to violations of intellectual property rights and harm to communities through misinformation.

Wikipedia downgrades CNET's reliability rating after AI-generated articles

2024-02-29
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated articles published by CNET were riddled with plagiarism and mistakes, leading to a downgrade of the site's reliability rating by Wikipedia editors. This indicates that the AI system's use in content creation caused harm by disseminating inaccurate information, which is a form of harm to communities and a violation of trust. The reputational damage and misinformation are realized harms linked directly to the AI system's outputs. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Wikipedia No Longer Considers CNET a "Generally Reliable" Source After AI Scandal

2024-02-29
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated articles were published with errors and plagiarism, undermining CNET's reliability as a source. This misinformation and loss of trust constitute harm to communities and violate editorial and intellectual property standards. The AI system's deployment directly caused these harms, fulfilling the criteria for an AI Incident. Although the harm is non-physical, it is significant and clearly articulated, involving misinformation and reputational damage. The event is not merely a potential risk or a complementary update but a realized incident of harm caused by AI use.

Prominent Tech News Outlet Faces Backlash Over Use of AI-Generated Articles

2024-03-01
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create articles that contained factual inaccuracies and plagiarism, which harmed the credibility and reliability of the news outlet. This constitutes a violation of intellectual property rights and harms the community's trust in information sources, fitting the definition of an AI Incident. The harm is realized, not just potential, and the AI system's use is pivotal to the incident. The event does not primarily describe a plausible future harm or a governance response, so it is not an AI Hazard or Complementary Information. Nor is it unrelated, since AI-generated content is central to the controversy and the harm.