CNET Lays Off Staff After AI-Generated Articles Cause Errors and Plagiarism Scandal


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

CNET, owned by Red Ventures, laid off about 10% of its staff after using AI to generate articles that were found to contain factual errors and plagiarism. The incident led to public backlash, leadership changes, and raised concerns about misinformation and journalistic integrity due to AI-generated content. [AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system was used to generate articles that contained errors, leading to misinformation and editorial harm. This constitutes harm to communities through the dissemination of inaccurate information, which fits the definition of an AI Incident. The event involves the use of an AI system whose outputs directly led to harm (misinformation and editorial errors). Although the layoffs are unrelated, the core issue is the harm caused by the AI-generated content. Therefore, this event qualifies as an AI Incident. [AI generated]
AI principles
Accountability · Transparency & explainability · Robustness & digital security · Safety · Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers · General public · Business

Harm types
Economic/Property · Reputational · Public interest

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


CNET is laying off 10% of its staff, weeks after reports of it using AI to write articles -- but it says the 2 things aren't linked

2023-03-03
Business Insider
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate articles that contained errors, leading to misinformation and editorial harm. This constitutes harm to communities through the dissemination of inaccurate information, which fits the definition of an AI Incident. The event involves the use of an AI system whose outputs directly led to harm (misinformation and editorial errors). Although the layoffs are unrelated, the core issue is the harm caused by the AI-generated content. Therefore, this event qualifies as an AI Incident.

CNET is doing big layoffs just weeks after AI-generated stories came to light

2023-03-03
Culver City Observer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems to generate news articles, which caused factual errors and public backlash, indicating harm related to misinformation and editorial integrity. This constitutes harm to communities and a breach of journalistic ethics, which falls under violations of rights and harm to communities. The layoffs and strategic shifts are consequences of this AI use. Therefore, this qualifies as an AI Incident due to the realized harm from AI-generated misinformation and ethical breaches in journalism.

CNET begins big layoffs weeks after AI-generated stories came to light: Report

2023-03-03
mint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated articles and the resulting organizational changes at CNET, including layoffs and leadership shifts. However, it does not describe any direct or indirect harm caused by the AI system, such as misinformation causing community harm, rights violations, or physical injury. The layoffs are a business consequence rather than a harm caused by AI. The mention of plans to redeploy AI tools is prospective but lacks detail on plausible harm. This event therefore fits the definition of Complementary Information: it provides an update on AI use and organizational responses without reporting a new incident or hazard.

CNET Hits Staff With Layoffs After Disastrous Pivot to AI Journalism

2023-03-02
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate articles that contained errors and plagiarism, which constitutes harm to intellectual property rights and harm to communities through misinformation. The AI system's outputs directly led to these harms, fulfilling the criteria for an AI Incident. The layoffs are a downstream consequence of these harms. Although the article does not detail physical harm or legal rulings, the harms to intellectual property and journalistic integrity are significant and clearly articulated. Hence, the event is best classified as an AI Incident.

CNET Says It's a Total Coincidence It's Laying Off Humans After Publishing AI-Generated Articles

2023-03-03
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI-generated articles were published, which is an AI system's use. The layoffs of human writers following the deployment of AI content indicate indirect harm to employment, a form of harm to groups of people. Additionally, the mention of plagiarism in AI articles suggests violations of intellectual property rights. These harms have already occurred, making this an AI Incident rather than a hazard or complementary information.

CNET's Post-AI Layoffs Apparently Gutted 50 Percent of Its News and Video Staff

2023-03-03
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system for content generation that was intended to assist journalists but apparently led to large-scale layoffs and diminished editorial capacity. This reduction in human oversight and expertise can lead to lower-quality or inaccurate information being published, which harms the community's right to reliable information. The harm is indirect but clearly linked to the AI system's deployment and its impact on staffing and content quality. Therefore, this qualifies as an AI Incident due to a violation of the community's informational rights and harm to communities through degraded news quality.

CNET Lays Off 10% of Staff Just Weeks After Launching Articles Written by AI

2023-03-03
TheWrap
Why's our monitor labelling this an incident or hazard?
The article mentions AI-generated content and staff layoffs but does not describe any harm resulting from the AI system's development, use, or malfunction. The layoffs are a business decision linked to AI adoption but do not constitute harm to individuals or communities as defined. There is no plausible future harm described either. Therefore, this is general AI-related news about adoption and organizational change, which fits the category of Complementary Information rather than an Incident or Hazard.

Charlotte investment firm cuts jobs at CNET news site after AI-produced stories | WRAL TechWire

2023-03-03
WRAL TechWire
Why's our monitor labelling this an incident or hazard?
While AI systems were used to produce news articles, the article does not report any direct or indirect harm resulting from the AI-generated content, such as misinformation, rights violations, or other harms defined in the framework. The layoffs and leadership changes are a business response to the use of AI, not an AI Incident or Hazard. Therefore, this is Complementary Information providing context on societal and organizational responses to AI use in media.

How a brand like CNET is using AI to replace writers and doing layoffs? - TechnoSports

2023-03-03
technosports.co.in
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated articles being published without disclosure, leading to factual inaccuracies and public complaints. This use of AI directly affected the quality and reliability of information, harming the community's right to accurate information and violating journalistic ethics. The layoffs and editorial shifts are direct consequences of this AI use. The AI system's role is pivotal in causing these harms, meeting the criteria for an AI Incident under violations of rights and harm to communities.