ChatGPT Search Hallucinates and Misattributes News Sources

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A study by Columbia’s Tow Center found that ChatGPT Search misattributes, fabricates, and plagiarizes news citations in over one-third of cases, erroneously sourcing quotes both from publications that block its crawler via robots.txt and from partner publications, and offering confidently wrong answers. The tool’s hallucinations and sourcing errors pose misinformation risks and potential intellectual property infringements.[AI generated]
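The robots.txt opt-out the study refers to is the standard Robots Exclusion Protocol: a publisher lists a crawler's user-agent and disallows paths. A minimal sketch of how a compliant crawler would check such a rule, using Python's standard library (`GPTBot` is the user-agent name OpenAI documents for its crawler; the rule and URL below are illustrative):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt rule blocking OpenAI's documented crawler user-agent.
robots_txt = """\
User-agent: GPTBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler consults can_fetch() before retrieving a page.
print(rp.can_fetch("GPTBot", "https://example.com/article"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The study reports that ChatGPT Search nonetheless attributed quotes from publications that had opted out this way, which is what makes the sourcing errors notable.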

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (ChatGPT search) that generates outputs influencing how users receive news information. The inaccuracies and fabrications in attribution, including plagiarism, directly harm the integrity of news dissemination and violate intellectual property rights. These harms are realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident due to direct harm to communities (misinformation, erosion of trust) and violation of intellectual property rights.[AI generated]
AI principles
Accountability; Privacy & data governance; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Consumer services

Affected stakeholders
Consumers; Business; General public

Harm types
Reputational; Economic/Property; Public interest

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard

ChatGPT's search results for news are 'unpredictable' and frequently inaccurate

2024-12-03
The Verge
Why's our monitor labelling this an incident or hazard?
The article discusses performance issues in an AI system (ChatGPT search), highlighting frequent inaccuracies and misattributions. While these inaccuracies can contribute to misinformation, the article does not document actual harm occurring (e.g., injury, rights violations, or community harm): the AI system is in use, but the harm is potential rather than realized. The article also includes OpenAI's response about improving the system, which aligns with providing complementary information about AI system performance and governance. Hence, it does not meet the criteria for an AI Incident or AI Hazard but fits Complementary Information.

ChatGPT Is Absolutely Butchering Reporting From Its "News Partners"

2024-12-02
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT search) that generates outputs influencing how users receive news information. The inaccuracies and fabrications in attribution, including plagiarism, directly harm the integrity of news dissemination and violate intellectual property rights. These harms are realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident due to direct harm to communities (misinformation, erosion of trust) and violation of intellectual property rights.

ChatGPT Misattributes Article Sources, Study Finds

2024-12-02
MediaPost
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system generating outputs that misattribute news sources, leading to misinformation and potential harm to publishers' intellectual property rights and reputations. The study documents that ChatGPT often provides incorrect source attributions rather than acknowledging uncertainty, which directly harms the publishers by misrepresenting their content. This harm is realized and directly linked to the AI system's outputs, fitting the definition of an AI Incident involving violation of intellectual property rights and harm to communities (publishers).

Study Shows ChatGPT Struggles With Accurate News Citations

2024-11-30
Techopedia.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) whose use leads to misrepresentation and inaccurate citations of publisher content. While this could potentially harm publishers' intellectual property rights or mislead users, the article does not document any actual harm or legal rulings confirming such harm has occurred. The focus is on the study's findings and OpenAI's response to improve the system. Therefore, this event is best classified as Complementary Information, as it provides important context and updates about AI system performance and related concerns without describing a concrete AI Incident or an imminent AI Hazard.

Even when OpenAI has deals with publishers to use/cite content, its ChatGPT bot screws up the citations

2024-11-30
Democratic Underground
Why's our monitor labelling this an incident or hazard?
ChatGPT is a generative AI system that produces content including citations. The study documents numerous instances where ChatGPT inaccurately cites publisher content, sometimes fabricating citations. This misrepresentation directly harms publishers by violating their intellectual property rights and potentially misleading users, which fits the definition of an AI Incident under violations of intellectual property rights and harm to communities. The harm is realized, not just potential, as the inaccurate citations are actively produced and disseminated by the AI system.

Study of ChatGPT citations makes dismal reading for publishers - RocketNews

2024-11-29
RocketNews
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system generating content and citations. The study reveals that its use leads to inaccurate or fabricated citations, which constitutes a violation of intellectual property rights and harms publishers' interests. Since the AI system's outputs have directly led to misrepresentation and potential harm to publishers' rights, this qualifies as an AI Incident under the framework, specifically under harm category (c) regarding violations of intellectual property rights.

Study of ChatGPT citations makes dismal reading for publishers | TechCrunch

2024-11-29
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves ChatGPT, a generative AI system, whose use in producing citations has directly led to harm by misrepresenting publishers' content, causing reputational damage and potentially encouraging plagiarism. The harm is realized and documented by the study, fulfilling the criteria for an AI Incident. The AI system's malfunction or limitations in citation accuracy have directly contributed to these harms, including violations of intellectual property rights and harm to communities through misinformation. Hence, the classification as AI Incident is appropriate.

ChatGPT Search Results Cannot Be Trusted

2024-12-04
The How-To Geek
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT search) whose outputs have directly caused harm by misattributing sources, fabricating information, and plagiarizing copyrighted content. These actions constitute violations of intellectual property rights and harm to communities through misinformation. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to harm (violation of rights and harm to communities).

Researchers call ChatGPT Search answers 'confidently wrong'

2024-12-03
Digital Trends
Why's our monitor labelling this an incident or hazard?
ChatGPT Search is an AI system that generates search answers. The study documents its frequent incorrect responses and misattributions, which directly harm publishers' reputations and intellectual property interests. The AI's confident but wrong answers mislead users, causing reputational and informational harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use and malfunction.

ChatGPT search can't find the real news, even with a publisher holding its hand

2024-12-05
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use in news search and summarization, with clear issues in accuracy and hallucination. However, the article does not report any realized harm such as injury, rights violations, or significant community harm. It rather highlights potential risks and challenges in trust and reliability, which are important but do not constitute a direct or indirect AI Incident or a plausible AI Hazard as defined. The article serves as a critical evaluation and contextual information about AI's current performance and its impact on journalism, fitting the definition of Complementary Information.

The ChatGPT Search Engine Is Still Very "green": These Are The Problems It Presents - Bullfrag

2024-12-04
Bullfrag
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system involved in providing information and citations. The study highlights its failures in accuracy and source attribution, which can indirectly cause harm by misleading users and contributing to misinformation, a recognized harm to communities. However, the article does not report actual realized harm but warns of potential risks. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm through misinformation dissemination.

ChatGPT Search Can't Find The Real News Even With A Publisher's Hand - Ny Breaking News

2024-12-05
NY Breaking News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT search) whose use and malfunction (hallucination and misattribution) directly cause harm by spreading inaccurate information and undermining trust in journalism. This fits the definition of an AI Incident because the AI system's outputs have directly led to harm to communities (misinformation) and potentially violate rights related to accurate information and intellectual property (due to plagiarism concerns). The article describes realized harm rather than just potential risk, so it is not merely a hazard or complementary information.