Inaccurate AI News Summaries Spark Misinformation Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The BBC found that leading AI assistants, including ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity, produced inaccurate news summaries. Over half of the answers had significant errors such as factual distortions, misquoted statistics, and altered content from BBC reports, raising concerns about misinformation and public trust.[AI generated]

Why's our monitor labelling this an incident or hazard?

AI‐powered summarisation systems were directly used to generate news summaries and were found to produce incorrect facts, altered quotations, and blurred fact/opinion distinctions. Even though this is framed as a broad study rather than a single isolated event, it documents realized harms—misinformation and distortion of trusted sources—caused by deployed AI systems. Therefore, it qualifies as an AI Incident.[AI generated]
AI principles
Transparency & explainability; Robustness & digital security; Safety; Accountability; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; General public

Harm types
Reputational; Public interest; Psychological

Severity
AI incident

Business function
Citizen/customer service

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard

BBC: 'Significant Inaccuracies' in AI Generated News Reporting

2025-02-11
Newsmax
Why's our monitor labelling this an incident or hazard?
This article reports on the results of a study examining AI assistants’ performance and the systemic issue of inaccurate news summarization. No specific harm event is described, nor is a new incident of damage or rights violation documented. Instead, it provides contextual findings on the accuracy risks of AI news tools and urges cooperative responses—qualifying it as Complementary Information.

'Significant inaccuracies' found in AI-generated news summaries: BBC

2025-02-11
The Hill
Why's our monitor labelling this an incident or hazard?
AI‐powered summarisation systems were directly used to generate news summaries and were found to produce incorrect facts, altered quotations, and blurred fact/opinion distinctions. Even though this is framed as a broad study rather than a single isolated event, it documents realized harms—misinformation and distortion of trusted sources—caused by deployed AI systems. Therefore, it qualifies as an AI Incident.

'Companies are playing with fire': BBC says AI news summaries contain serious errors and distortions | 聯合新聞網

2025-02-11
聯合新聞網
Why's our monitor labelling this an incident or hazard?
The article describes testing of AI summarization tools and highlights their significant errors and distortions, warning of the potential for those errors to cause real-world damage. No actual harm has yet been reported, but the demonstrated unreliability constitutes a credible risk. This fits the definition of an AI Hazard—an AI system’s use that could plausibly lead to an incident.

AI chatbots distort and mislead when asked about current affairs, BBC finds

2025-02-11
aol.co.uk
Why's our monitor labelling this an incident or hazard?
The event describes multiple instances where AI systems in active use produced distorted and incorrect information about real‐world events, directly causing misinformation harm. This meets the definition of an AI Incident—realized harm (misinformation and erosion of public trust) resulting from AI outputs.

AI chatbots distort and mislead when asked about current affairs, BBC finds

2025-02-11
the Guardian
Why's our monitor labelling this an incident or hazard?
The event describes real, realized harms—AI systems generating erroneous summaries, fabricating quotes, and misrepresenting facts about news topics—thereby causing misinformation (harm to communities and public discourse). This meets the definition of an AI Incident, as the AI systems’ use has directly led to demonstrable harm.

Deborah Turness - AI Distortion is new threat to trusted information

2025-02-12
BBC
Why's our monitor labelling this an incident or hazard?
This piece is primarily about releasing a new study on AI ‘distortion’—a research finding with broad implications—and calls for industry and policy responses. While it documents real inaccuracies (hallucinations) by AI systems, it does not focus on a singular, concrete harmful incident or immediate safety hazard; rather it gives contextual evidence and seeks solutions. Therefore it fits the definition of Complementary Information.

Half of the answers generated by these AIs are reportedly false or biased

2025-02-13
Frandroid
Why's our monitor labelling this an incident or hazard?
This is a broad analysis and contextual report on AI model performance rather than a description of a specific harmful incident or a novel future hazard. It provides research findings and expert commentary to improve understanding of AI limitations, fitting the definition of Complementary Information.

Leading AI Chatbots Like Copilot, ChatGPT, And Gemini Provide Misleading And Fake News Summary; Study Reveals

2025-02-14
Mashable India
Why's our monitor labelling this an incident or hazard?
The AI systems directly produced false or misleading summaries of real news articles, fabricating quotes, misrepresenting facts, and omitting critical context. This misinformation constitutes harm to news consumers and communities by spreading fake or distorted information. Because these harms are realized and directly attributable to the AI outputs, this qualifies as an AI Incident.

AI chatbots 'unable to accurately summarise news'

2025-02-11
Jersey Evening Post
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI chatbots such as ChatGPT, Copilot, Gemini, and Perplexity AI) and discusses their inaccurate outputs in news summarization. Although no direct harm has been documented, the concern that misinformation could undermine trust and cause real-world harm fits the definition of an AI Hazard: the development and use of these systems could plausibly lead to harm such as misinformation and societal disruption. Because the article focuses on potential risks and calls for collaborative responses rather than reporting an actual harm event, it is classified as an AI Hazard rather than an AI Incident or Complementary Information.

AI is bad at news, BBC finds

2025-02-11
Android Police
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like Gemini, ChatGPT, Copilot, and Perplexity) generating news summaries that contain inaccuracies and falsehoods. However, the article does not describe any direct or indirect harm resulting from these inaccuracies, such as injury, rights violations, or significant community harm. Instead, it documents the known limitations and errors of AI-generated content and discusses mitigation strategies like human oversight. Therefore, this is complementary information providing context and assessment of AI system performance and responses, rather than reporting a new AI Incident or AI Hazard.

AI Chatbots Fail to Accurately Represent News, BBC Finds

2025-02-11
Digit
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) generating news content that contains significant inaccuracies and false claims. These inaccuracies have been verified by expert journalists and include factual errors and misquotations. The dissemination of such misinformation can harm communities by misleading the public on important issues like health, politics, and global conflicts. Therefore, the AI systems' use has directly led to harm as defined under the framework, qualifying this event as an AI Incident.

AI chatbots distort and mislead when asked about current affairs, BBC finds

2025-02-11
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use has directly led to harm in the form of misinformation and distortion of facts, which harms communities by undermining trust in information and potentially violating the public's right to accurate information. The harm is realized and documented through the study's findings. Therefore, this qualifies as an AI Incident due to harm to communities through misinformation and distortion caused by AI system outputs.

AI chatbots unable to accurately summarise news, BBC finds

2025-02-11
BBC
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) that generate inaccurate summaries of news content, which is a use of AI. Although no direct harm has been reported yet, the potential for AI-generated misinformation to cause real-world harm is clearly articulated by the BBC CEO. This fits the definition of an AI Hazard, as the development and use of these AI chatbots could plausibly lead to harms such as misinformation-induced social disruption or harm to communities. Since no actual harm has occurred yet, it is not an AI Incident. The article is not merely complementary information because it focuses on the risk and inaccuracies of AI outputs rather than responses or ecosystem updates.

AI chatbots 'unable to accurately summarise news'

2025-02-11
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like ChatGPT, Copilot, Gemini) whose use in summarizing news has directly led to the dissemination of inaccurate and distorted information, a form of misinformation that harms communities by undermining trust in facts and verified news. This constitutes a violation of the right to access accurate information and can cause significant societal harm. Since the harm is occurring (not just potential), this qualifies as an AI Incident. The article focuses on the realized inaccuracies and their consequences rather than just potential risks or responses, so it is not merely Complementary Information or an AI Hazard.

AI Chatbots Have "Significant Inaccuracies" When Summarizing News, BBC Says; Top Exec Deborah Turness Says Tech Firms Are "Playing With Fire"

2025-02-11
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) whose use in summarizing news content led to significant inaccuracies and distortions. These inaccuracies have already occurred and have been documented, indicating realized harm in the form of misinformation. The harm affects communities by spreading false information, which fits the definition of harm to communities under AI Incidents. The AI systems' outputs directly caused this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

AI chatbots unable to accurately summarise news, BBC finds

2025-02-11
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) that have been used to summarize news content and have produced outputs with significant inaccuracies and distortions. This has directly led to misinformation, which is a recognized harm to communities and public discourse. The BBC's concern about potential real-world harm from AI-distorted headlines further supports the presence of harm. Since the harm is already occurring (inaccurate summaries with factual errors), this is an AI Incident rather than a hazard or complementary information. The involvement of AI in generating the harmful outputs is explicit and central to the event.

AI chatbots like ChatGPT, Gemini, Copilot providing inaccurate news summaries, BBC study finds

2025-02-12
MoneyControl
Why's our monitor labelling this an incident or hazard?
The AI systems involved are chatbots generating news summaries, a recognized AI system task. The study found that over 50% of responses contained errors, including factual inaccuracies, which directly lead to misinformation harm to communities. This meets the definition of an AI Incident because the AI systems' use has directly led to harm (misinformation). The article does not merely discuss potential harm; it documents actual inaccuracies and their dissemination, so it is not a hazard or complementary information. Therefore, the event is classified as an AI Incident.

BBC: 'Significant Inaccuracies' From AI News Reporting

2025-02-11
KBOI-AM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT, Copilot, Gemini, Perplexity) generating news summaries that contain significant inaccuracies and distortions. This misrepresentation can be considered harm to communities by spreading misinformation, which is a form of harm to the public's right to accurate information. Since the inaccuracies are already present and the harm (misinformation) is occurring, this qualifies as an AI Incident under the framework, specifically harm to communities through dissemination of inaccurate information.

AI chatbots can't summarize news accurately, says BBC study

2025-02-12
Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots like ChatGPT, Google Gemini) generating inaccurate news summaries. While no actual harm is reported, the BBC study and CEO's comments emphasize the plausible risk that such AI-generated misinformation could cause significant real-world harm. This fits the definition of an AI Hazard, as the AI's use could plausibly lead to harm (misinformation affecting communities). There is no indication of a realized incident or legal/governance response focus, so it is not an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the event.

AI chatbots are distorting news stories, BBC finds

2025-02-11
The Verge
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) generating content that contains factual inaccuracies and distortions. These inaccuracies can lead to misinformation and harm to communities by spreading false or misleading information. Since the harm (distortion of news and misinformation) is occurring as a direct result of the AI systems' outputs, this qualifies as an AI Incident under the definition of harm to communities through misinformation dissemination.

AI Chatbots Are Still Bad at Facts, Says BBC Study

2025-02-12
The How-To Geek
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT, Copilot, Gemini, Perplexity) used to generate news answers, which are found to be frequently inaccurate and misleading. This relates to harm to communities through misinformation. However, the article reports on a study's findings rather than a specific event where harm has directly or indirectly occurred. It discusses potential risks and systemic issues but does not document a concrete AI Incident or an imminent hazard event. The main focus is on the study's evaluation and calls for improved governance and transparency, fitting the definition of Complementary Information rather than an Incident or Hazard.

ChatGPT and Google Gemini are terrible at summarizing news, according to a new study

2025-02-11
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating inaccurate news summaries that contain factual errors and misleading information, which can harm communities by spreading misinformation and eroding trust in news sources. The AI systems' use directly led to these harms through their flawed outputs. Therefore, this qualifies as an AI Incident due to realized harm to communities through misinformation dissemination.

Microsoft Copilot struggles to discern facts from opinions -- posting distorted AI news summaries riddled with inaccuracies: "How long before an AI-distorted headline causes significant real-world harm?"

2025-02-11
Windows Central
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI-powered chatbots) whose use in summarizing news has directly led to the dissemination of inaccurate and distorted information, which constitutes harm to communities by spreading misinformation. The inaccuracies and editorializing by these AI systems have already occurred and are documented by the BBC study, indicating realized harm rather than just potential harm. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing informational harm through distorted news summaries.

This BBC Study Shows How Inaccurate AI News Summaries Actually Are

2025-02-11
Lifehacker
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (OpenAI, Google Gemini, Microsoft Copilot, Perplexity) generating news summaries, a clear AI use case. However, it reports the presence of inaccuracies and the potential for harm rather than an actual incident in which harm occurred. The BBC CEO's statement about AI-distorted headlines causing real-world harm is a caution about plausible future harm, not a description of a realized harm event. This therefore qualifies as an AI Hazard: the systems' inaccuracies could plausibly lead to misinformation-related harm, but no direct or indirect harm has been documented yet. The article's contextual material on legal disputes and the BBC's position is secondary to the study's risk findings, so it is neither a completed incident nor Complementary Information about responses to a past incident.

You Can Ask AI Chatbots to Summarize News Stories -- But They Will Be Wrong

2025-02-12
VICE
Why's our monitor labelling this an incident or hazard?
The AI chatbots involved are explicitly mentioned and are used to summarize news stories. Their outputs have directly led to misinformation and misrepresentation of facts, which constitutes harm to communities by spreading false or misleading information. This fits the definition of an AI Incident because the AI systems' use has directly caused harm through inaccurate information dissemination. The article describes realized harm rather than potential harm, so it is not a hazard or complementary information.

AI Chatbots Have "Significant Inaccuracies" When Summarizing News, BBC Says; Top Exec Deborah Turness Says Tech Firms Are "Playing With Fire"

2025-02-11
Deadline
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI chatbots (AI systems) to summarize news content. The study found that over half of the AI-generated answers contained significant inaccuracies, some introducing false facts. These inaccuracies have already caused harm, such as false news alerts that misinformed the public. This constitutes harm to communities through misinformation and breaches the right to accurate information, a form of harm under the framework. The AI systems' outputs directly led to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

AI chatbots distort the news, BBC finds - see what they get wrong

2025-02-12
ZDNet
Why's our monitor labelling this an incident or hazard?
The AI systems are explicitly involved as chatbots summarizing news content. Their outputs have directly led to the dissemination of inaccurate and distorted information, which harms the public by providing misleading or false news summaries. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to communities through misinformation. The article reports realized harm rather than just potential harm, so it is not an AI Hazard or Complementary Information. It is not unrelated because the event centers on AI system outputs causing harm.

AI News Summaries Contain Significant Errors More Than Half the Time, BBC Study Finds

2025-02-14
mental_floss
Why's our monitor labelling this an incident or hazard?
The article details how AI systems (OpenAI, Google Gemini, Microsoft Copilot, Perplexity) generated news summaries with major factual errors and misrepresentations, which were reviewed and confirmed by expert journalists. The dissemination of false or misleading news content constitutes harm to communities and breaches the right to access accurate information, a fundamental right. The AI systems' outputs directly led to this harm, fulfilling the criteria for an AI Incident. The article also references prior related incidents, reinforcing the pattern of harm caused by AI-generated misinformation.

BBC: Chatbots distort the facts about news

2025-02-12
Computerworld
Why's our monitor labelling this an incident or hazard?
The event involves generative AI systems (ChatGPT, Copilot, Gemini, Perplexity) producing inaccurate outputs, a malfunction or limitation in their use. This leads to misinformation, which can be considered harm to communities through the spread of false or misleading information. Since the harm is realized (incorrect facts are being disseminated), this qualifies as an AI Incident: the systems' failure to generate accurate information directly leads to harm in the form of misinformation.

AI chatbots 'unable to accurately summarise news'

2025-02-11
AOL.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose use has directly led to the dissemination of inaccurate and distorted news summaries, a form of misinformation that harms communities by undermining trust in factual information. This constitutes a violation of the right to access accurate information, a fundamental right, and thus qualifies as harm under the AI Incident definition. The harm is realized as the chatbots have already produced significant factual errors and distortions. Therefore, this is an AI Incident rather than a hazard or complementary information.

AI summaries turn real news into nonsense, BBC finds

2025-02-12
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI assistants) whose use in summarizing news has directly led to the dissemination of inaccurate and misleading information, a form of harm to communities and public trust. The article provides concrete examples of factual inaccuracies and misrepresentations caused by these AI systems. The harm is realized, not just potential, as the AI-generated content is already being consumed by users. This fits the definition of an AI Incident because the AI systems' outputs have directly led to harm through misinformation and distortion of facts.

BBC conducts AI study, suggests chatbots are inaccurately summarizing news

2025-02-12
ReadWrite
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the performance issues of AI chatbots in summarizing news, noting significant inaccuracies and factual errors. While these inaccuracies could potentially lead to misinformation if widely disseminated, the article does not report any actual harm or incidents resulting from these AI outputs. The study is a controlled evaluation, and the BBC's CEO comments on the findings to highlight AI limitations. There is no indication that these AI outputs have caused injury, rights violations, or other harms, nor that they have plausibly led to such harms. The event thus provides valuable context and insight into AI system challenges but does not meet the criteria for an AI Incident or AI Hazard. It is not unrelated, as it involves AI systems and their outputs, but it is primarily informational and evaluative, fitting the definition of Complementary Information.

Report says companies 'playing with fire' as AI chatbots fail when trying to summarize news - SiliconANGLE

2025-02-13
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The AI systems involved are major AI chatbots (ChatGPT, Copilot, Gemini, Perplexity) used to summarize news articles. Their outputs contained significant inaccuracies and factual errors, effectively generating misinformation. This misinformation can harm communities by distorting public knowledge and potentially influencing opinions or decisions based on falsehoods. The harm is realized as the AI systems have been used and their flawed outputs disseminated, meeting the criteria for an AI Incident due to direct harm caused by AI-generated misinformation. The event is not merely a potential risk or a complementary update but documents actual harm caused by AI use.

Leading AI chatbots struggle to generate accurate news summaries

2025-02-12
Android Headlines
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (leading AI chatbots) used to generate news summaries. The inaccuracies and hallucinations in the summaries represent a malfunction or limitation in the AI systems' outputs. While no direct harm has been reported, the CEO of BBC News expresses concern that AI-distorted headlines could cause significant real-world harm in the future. This indicates a plausible risk of harm to communities through misinformation dissemination. Therefore, this event fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to an AI Incident involving harm to communities through misinformation, but no incident has yet occurred.

BBC Study Finds AI Chatbots Struggling With News Accuracy

2025-02-14
MediaNama
Why's our monitor labelling this an incident or hazard?
The AI chatbots involved are explicitly mentioned and their outputs have directly led to misinformation, which constitutes harm to communities (a form of harm under the framework). The lawsuit alleging harms such as encouragement of suicide and self-harm further supports the presence of realized harm caused by AI systems. Therefore, the event qualifies as an AI Incident due to the direct or indirect harm caused by the AI systems' inaccurate or harmful outputs. The discussion of regulatory responses and safety concerns provides context but does not overshadow the realized harms described.

Study: Half of AI Answers About News Contain 'Significant Issues'

2025-02-11
Tech.co
Why's our monitor labelling this an incident or hazard?
The study involves generative AI chatbots, which are AI systems generating content based on input prompts. The inaccuracies and errors in their responses represent a form of harm to communities by spreading misinformation. Since the harm is occurring through the use of AI systems providing news answers with significant factual issues, this qualifies as an AI Incident due to realized harm from AI system outputs affecting information reliability and public understanding.

BBC News finds that AI tools "distort" its journalism into "a confused cocktail" with many errors

2025-02-12
Nieman Lab
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI tools like ChatGPT, Copilot, Gemini, and Perplexity) used to generate content based on BBC articles. The AI systems' use has directly led to factual errors and distortions in information, which is a form of harm to communities by spreading misinformation and undermining trust in verified news. The harm is realized, not just potential, as evidenced by specific incorrect statements and misrepresentations cited. The BBC's concern about real-world harm from AI-distorted headlines further supports the classification as an AI Incident. This is not merely complementary information or a hazard, as the harm is occurring through the AI systems' outputs.

BBC Research Reveals Major Issues with AI-Powered News Summaries - Research Snipers

2025-02-12
Research Snipers
Why's our monitor labelling this an incident or hazard?
The AI systems (generative chatbots) are explicitly involved in producing inaccurate and misleading news summaries, which have been verified by BBC journalists. The inaccuracies include fabricated quotes and factual errors, directly leading to misinformation harm, which affects communities and public trust. This fits the definition of an AI Incident as the AI system's use has directly led to harm (misinformation). The article also discusses the potential for future harm but the realized harm of misinformation is already present, making it an Incident rather than a Hazard. The event is not merely complementary information or unrelated news, as it documents concrete harm caused by AI outputs.

Are AI Summarisers Missing the Point?

2025-02-11
UC Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Microsoft Copilot, Google's Gemini, ChatGPT, Perplexity AI) summarising news content and producing outputs with significant inaccuracies and distortions. These inaccuracies have been evaluated by expert journalists and include factual errors that misinform users. The harm here is the dissemination of false or misleading information, which affects communities and the public's right to accurate information, thus constituting harm to communities and a violation of informational rights. Since the harm is realized and directly linked to the AI systems' outputs, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

AI Models Are Terrible At Relaying or Summarizing News

2025-02-12
WebProNews
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (ChatGPT, Copilot, Gemini, Perplexity AI) used for news summarization and question answering, which fits the definition of AI systems. The study reveals significant errors and hallucinations in AI outputs, indicating malfunction or limitations in use. However, there is no mention of actual harm occurring to individuals, communities, infrastructure, or rights. The concerns are about potential risks and reliability, but no specific harm or plausible future harm event is described. Thus, it does not meet the criteria for AI Incident or AI Hazard. Instead, it provides valuable context and assessment of AI capabilities and risks, fitting the definition of Complementary Information.

BBC releases damning research into AI news accuracy

2025-02-12
Computing
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT, Copilot, Gemini, Perplexity AI) generating news summaries that contain significant factual inaccuracies and distortions. These errors have already occurred and have led to misinformation, which is a form of harm to communities. The BBC's concern about AI-distorted headlines causing real-world harm further supports the classification as an AI Incident. The AI systems' use and malfunction (inaccurate outputs) have directly led to harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Study: Issues with over half the answers from AI assistants

2025-02-11
Advanced-television
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (ChatGPT, Copilot, Gemini, Perplexity) and their use in answering news questions, which is a use case of AI. The study identifies significant factual inaccuracies and misrepresentations, which could plausibly lead to misinformation and harm to communities if relied upon extensively. However, the article does not document any realized harm or incident resulting from these inaccuracies. Therefore, the event fits the definition of an AI Hazard, as it highlights plausible future harm from the use of AI assistants in news contexts. It is not Complementary Information because the article is not updating or responding to a prior incident but presenting new research findings about potential risks. It is not an AI Incident because no actual harm has been reported yet.

AI chatbots unable to accurately summarise news, BBC study finds

2025-02-11
dpa International
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots like ChatGPT, Copilot, Gemini) generating inaccurate news summaries, which is a direct use of AI. The inaccuracies and distortions in the AI-generated content have already been observed and documented, indicating realized harm in the form of misinformation that can undermine public trust and potentially cause real-world harm. The BBC CEO's warnings and Apple's decision to pause AI news summaries further confirm the recognition of actual harm. Therefore, this event meets the criteria for an AI Incident due to the direct link between AI system outputs and harm to communities through misinformation.

BBC research finds 'significant issues' over accuracy of AI responses to news questions | Research live

2025-02-11
Research Live
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (ChatGPT, Copilot, Gemini, Perplexity) used to generate news-related answers, which were found to have significant accuracy and impartiality issues. While these issues could plausibly lead to harm such as misinformation or erosion of trust in news (harm to communities), the article does not document any actual harm or incident resulting from these AI outputs. The focus is on research findings and the implications for future use, making this a case of potential risk rather than realized harm. Therefore, it fits the definition of Complementary Information, as it provides important context and understanding about AI system performance and challenges without reporting a specific AI Incident or AI Hazard event.

AI chatbots 'unable to accurately summarise news'

2025-02-11
Cambridge Independent
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like ChatGPT, Copilot, Gemini) whose use in summarizing news has directly led to the dissemination of inaccurate and distorted information, a form of harm to communities and public trust. This misinformation constitutes a violation of the right to access accurate information and can cause significant societal harm. Since the harm is occurring (inaccurate news summaries being served to users), this qualifies as an AI Incident rather than a hazard or complementary information. The article focuses on the realized harm from AI chatbot outputs, not just potential risks or responses.

BBC finds significant inaccuracies in over 30% of AI-produced news summaries

2025-02-13
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLMs) generating news summaries that contain significant inaccuracies and misrepresentations, which have been verified by expert journalists. These inaccuracies can mislead audiences, constituting harm to communities by spreading misinformation. The AI systems' outputs directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a documented case of AI-generated misinformation causing harm. Hence, the classification as AI Incident is appropriate.

AI assistants: on the news, one answer in five contains factual errors, according to a BBC study

2025-02-11
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses multiple AI systems (ChatGPT, Copilot, Gemini, Perplexity, and others) that generate responses containing factual errors and misleading information about current events. This misinformation can harm communities by spreading false or distorted information, which is a recognized form of harm under the framework (harm to communities). The AI systems' outputs have directly led to this harm, as the errors are present in the AI-generated responses given to users. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, because the harm is realized and ongoing.

Do AI assistants distort the news? The BBC's alarming report

2025-02-13
Clubic.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT, Copilot, Google Gemini, Perplexity) used to summarize news content. The study found that over half of the AI-generated summaries contained significant errors, including factual inaccuracies and misattributions. These errors can lead to misinformation and harm to communities by distorting public understanding of news, which qualifies as harm to communities under the AI Incident definition. Since the harm is realized (errors and misinformation are present in AI outputs), this constitutes an AI Incident rather than a hazard or complementary information.

Half of the answers generative AIs give about the news are unreliable, according to a BBC study

2025-02-11
Franceinfo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI models) and their use in answering news questions, with documented inaccuracies and hallucinations. While these errors could plausibly lead to misinformation harm or erosion of public trust, the article does not report any concrete harm or incident resulting from these AI outputs. The focus is on the study's findings and the implications for future use, making this a credible AI Hazard due to the plausible risk of harm from misinformation, but not an AI Incident since no direct harm is reported. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems and their outputs.

Journalists are saved: when AIs rewrite...

2025-02-14
Futura
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Apple's AI summarization, ChatGPT, Copilot, Gemini, Perplexity) used to generate news summaries. The AI's use has directly led to the dissemination of false and distorted information, which harms communities by spreading misinformation and undermining trust in factual news. This meets the criteria for an AI Incident because the harm (misinformation and its societal impact) is realized and directly linked to the AI systems' outputs. The event is not merely a potential risk or a complementary update but documents actual harm caused by AI misuse or malfunction in news summarization.

AI-produced news summaries often introduce distortions, according to a BBC investigation concluding that 'companies developing generative AI are playing with fire'

2025-02-13
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) used to summarize news content. The AI systems' outputs have directly led to the dissemination of false and misleading information, which harms public trust and misinforms readers, a clear harm to communities and a violation of informational rights. The harm is realized and ongoing, not merely potential. The article details specific examples of falsehoods generated by AI, the impact on news organizations, and the public's trust, fulfilling the criteria for an AI Incident. The event is not merely a hazard or complementary information, as the harm is actual and significant.

The BBC sounds the alarm on AI chatbot errors in news summaries - Siècle Digital

2025-02-13
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like ChatGPT, Gemini, Perplexity AI) generating news summaries with significant factual errors and distortions. These errors have already occurred and have the potential to misinform users, constituting harm to communities and possibly violating rights related to access to truthful information. Since the harm is realized and directly linked to the AI systems' outputs, this qualifies as an AI Incident. The article does not merely warn about potential future harm but documents actual inaccuracies and their consequences, which fits the definition of an AI Incident rather than an AI Hazard or Complementary Information.

Invented quotes, factual errors: why you shouldn't trust AI with the news, according to the BBC - L'Humanité

2025-02-11
L'Humanité
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT, Gemini, Copilot, Perplexity) and their use in generating news-related content, which is an AI system involvement. However, the article focuses on the identification of inaccuracies and potential misinformation risks rather than a concrete AI Incident causing harm or a specific AI Hazard with plausible future harm. The harms discussed are potential or indirect, related to misinformation quality, but no direct or indirect harm (such as injury, rights violation, or disruption) is documented as having occurred. Therefore, this is best classified as Complementary Information, as it provides important context and critique about AI system performance and reliability in the news domain without reporting a new incident or hazard.

ChatGPT is hopeless at summarizing the news, according to a BBC study

2025-02-12
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots using large language models) whose use in summarizing news has led to factual inaccuracies and quality problems. While these issues imply a risk of harm to communities through misinformation, the article does not document actual realized harm or a specific incident causing damage. The concerns are about plausible future harm and the need for mitigation. Therefore, this qualifies as an AI Hazard, as the AI systems' use could plausibly lead to harm, but no concrete incident is reported.

Even when connected to BBC articles, AIs get it wrong more than half the time - Next

2025-02-12
Next
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI assistants) whose use in answering questions based on BBC articles leads to the dissemination of factually incorrect and misleading information. While no specific incident of harm is reported, the potential for significant harm to public trust and information integrity is clearly articulated, indicating a credible risk of harm. Therefore, this event fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to harm through misinformation and distortion of facts. The article also includes elements of Complementary Information by discussing the BBC's response and call for collaboration, but the primary focus is on the identified risk of harm from AI-generated misinformation, making AI Hazard the most appropriate classification.


2025-02-13
developpez.net
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) used to summarize news content. The AI systems' outputs have directly led to harm by producing false or misleading information, which constitutes harm to communities and potentially violates the public's right to accurate information. The harm is realized, not just potential, as false claims about individuals and events have been disseminated. This fits the definition of an AI Incident because the AI systems' use has directly led to significant harm through misinformation and distortion of facts, impacting public trust and potentially causing reputational and social harm.

How the leading AI chatbots distort the news - ZDNET

2025-02-13
ZDNet
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) explicitly mentioned as summarizing news articles. The use of these AI systems has directly led to harm in the form of misinformation, factual inaccuracies, and distorted quotations, which harm communities by misleading the public and undermining the integrity of information. The article documents realized harm, not just potential risk, fulfilling the criteria for an AI Incident. The harm includes violation of informational integrity and harm to communities through distorted news, which fits within the defined harms. Hence, the classification is AI Incident.

Artificial intelligence made a mess of a BBC news experiment | in.gr

2025-02-12
in.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) used to generate news summaries, which is an AI system use case. The inaccuracies and distortions found represent a risk of misinformation that could plausibly lead to harm to communities or public trust if disseminated widely. However, the article does not document any realized harm or incident caused by these AI outputs; it reports on an experiment revealing potential issues and risks. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet occurred according to the article.

Top AI chatbots failed spectacularly at news summaries

2025-02-12
NewsIT
Why's our monitor labelling this an incident or hazard?
The AI systems (ChatGPT, Copilot, Gemini, Perplexity) are explicitly involved as AI chatbots performing summarization tasks. Their use has directly led to the dissemination of inaccurate and fabricated news summaries, which constitutes harm to communities by spreading misinformation. Therefore, this qualifies as an AI Incident under the definition of harm to communities caused by AI system use.

BBC study identifies 'issues' in AI answers about the news

2025-02-13
www.topontiki.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use in providing news information has directly led to the dissemination of inaccurate and misleading content. This constitutes harm to communities by undermining the reliability of information and potentially violating the right to access truthful information. Since the harm is occurring through the AI systems' outputs, this qualifies as an AI Incident under the framework, specifically harm to communities and violation of rights related to information accuracy.

Artificial intelligence made a mess of a BBC news experiment

2025-02-13
ΠΟΛΙΤΗΣ
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT, Copilot, Gemini, Perplexity AI) generating news summaries with significant factual inaccuracies and distortions, outputs that directly cause misinformation harm. Harm to communities through misinformation and misleading news is a recognized form of harm under the framework. The forced suspension of Apple's automatic news summaries over misleading AI-generated headlines further confirms realized harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

BBC experiment: AI chatbots got tangled up in news summaries | Parallaxi Magazine

2025-02-12
Parallaxi Magazine
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose outputs contain significant inaccuracies and fabricated information, which could plausibly lead to harm such as misinformation and erosion of public trust in news (harm to communities). However, the article does not describe any realized harm or incident resulting from these AI outputs, only the potential risk. Therefore, this qualifies as an AI Hazard, as the AI's use in news summarization could plausibly lead to an AI Incident involving misinformation and harm to communities in the future.

This is the new 'Mecca' of fake news

2025-02-13
Karfitsa.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like ChatGPT, Copilot, Gemini, Perplexity) used to summarize news articles. The AI-generated summaries frequently contained false or fabricated information (hallucinations), which is a form of misinformation causing harm to communities by misleading the public. This is a direct harm caused by the AI systems' outputs. The article explicitly states the harm is occurring and raises concerns about potential significant real-world damage. Hence, this qualifies as an AI Incident due to realized harm from AI system use.

Top AI chatbots failed spectacularly at news summaries - Fibernews

2025-02-12
Fibernews - All digital news!
Why's our monitor labelling this an incident or hazard?
The AI systems (ChatGPT, Copilot, Gemini, Perplexity) are explicitly involved as AI chatbots performing summarization tasks. Their use has directly resulted in the dissemination of inaccurate and fabricated news summaries, which constitutes harm to communities through misinformation and distortion of facts. Therefore, this qualifies as an AI Incident under the definition of harm to communities caused by AI system outputs.

Artificial intelligence made a hash of it in an experiment | Alfavita

2025-02-14
Alfavita
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose use has directly led to the dissemination of inaccurate and misleading news content, a form of harm to communities and the information ecosystem. The article documents realized harm (not just potential) caused by AI-generated misinformation, fulfilling the criteria for an AI Incident. The involvement of AI in generating false or distorted news content is explicit, and the harm is clearly articulated as serious consequences for news reliability and public trust. Hence, the classification as AI Incident is appropriate.

BBC tests four AI chatbots: ChatGPT and others repeatedly answer current-affairs questions incorrectly - 香港文匯網

2025-02-11
香港文匯網
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose use has directly led to the dissemination of false and misleading information about current events. This misinformation can harm communities by distorting public understanding and trust in news. The AI systems' malfunction or limitations in providing accurate information have caused this harm. Hence, it meets the criteria for an AI Incident as the AI systems' outputs have directly caused harm through misinformation.

BBC reveals numerous errors and omissions in four mainstream Western AI tools

2025-02-11
明報新聞網 - 每日明報 daily news
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use has directly led to the dissemination of misleading and inaccurate information, which can be considered harm to communities by undermining public trust in factual information. The inaccuracies and misleading content are realized harms, not just potential. Therefore, this qualifies as an AI Incident due to the direct role of AI systems in causing harm through misinformation and erosion of trust in news.

Solidot | Study finds AI news summaries routinely distort the facts

2025-02-14
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI models like ChatGPT, Copilot, Gemini, and others) producing news summaries that contain factual distortions and inaccuracies. While the current harm is primarily informational and the article does not report specific realized incidents of harm, it highlights a credible risk that such misinformation could lead to significant harm to communities or public understanding in the future. Therefore, this situation constitutes an AI Hazard, as the AI systems' outputs could plausibly lead to an AI Incident involving harm through misinformation and distortion of facts.

BBC: Four major AI chatbots cannot accurately understand the news | TRT 中文

2025-02-12
TRT
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI systems (chatbots) and their performance in a specific task (news comprehension). However, it does not report any realized harm or incident caused by these AI systems, nor does it indicate a plausible future harm resulting from their use. Instead, it provides an evaluation of their current limitations, which is informative but does not constitute an incident or hazard. Therefore, this is best classified as Complementary Information, as it contributes to understanding AI capabilities and limitations without describing a specific harm or risk event.

Can AI 'reliably' replace humans in making the news?

2025-02-13
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (ChatGPT, Copilot, Gemini, Perplexity) used in news content generation and their errors causing misinformation, which harms communities by spreading false or distorted news. The harm is realized, not just potential, as evidenced by specific examples of factual errors and fabricated content. This meets the definition of an AI Incident because the AI systems' use has directly led to harm to communities through misinformation dissemination. The article also includes calls for action to address these harms, reinforcing the incident classification rather than a mere hazard or complementary information.

BBC study finds AI chatbots have significant accuracy problems in news summaries

2025-02-12
新浪财经
Why's our monitor labelling this an incident or hazard?
The AI systems (Microsoft Copilot, OpenAI ChatGPT, Google Gemini, Perplexity) are explicitly involved in generating news summaries. The study documents that these AI-generated summaries contain significant factual inaccuracies and fabricated citations, which can mislead users and harm the public's right to accurate information, a form of harm to communities and violation of informational rights. Since the harm (misinformation and factual errors) is occurring as per the study, this qualifies as an AI Incident under the framework, as the AI systems' use has directly led to harm in the form of misinformation and erosion of trust in news content.

AI distorts the facts | BBC tests AI news summaries, finds one in five answers distorts the facts - EJ Tech

2025-02-13
EJ Tech
Why's our monitor labelling this an incident or hazard?
The AI systems (ChatGPT, Perplexity, Microsoft Copilot, Google Gemini) are explicitly involved in generating summaries that contain factual distortions and misinformation, which constitutes harm to communities by spreading false or misleading information. This meets the criteria for an AI Incident under harm category (d) harm to communities. Furthermore, the copyright infringement ruling relates to AI training data use, constituting a violation of intellectual property rights (c). Since the article describes actual harms caused by AI system outputs and their development practices, it qualifies as an AI Incident rather than a hazard or complementary information.

BBC finds AI-generated news summaries have far too many problems

2025-02-15
煎蛋
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for news summarization and shows that their outputs have directly led to the dissemination of false or misleading information, which is a harm to communities and the information ecosystem. The harm is realized, not just potential, as evidenced by the false news headline about a suicide and the high rate of factual errors in AI-generated summaries. This meets the criteria for an AI Incident because the AI systems' use has directly led to harm (misinformation and erosion of trust). The event is not merely a hazard or complementary information, as the harm is ongoing and documented. Hence, the classification is AI Incident.

AI is no good for staying informed: a study reveals that chatbots summarize the news badly

2025-02-12
20 minutos
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots using large language models) that generate news summaries. The study shows these AI systems produce significant inaccuracies and distortions, which could plausibly lead to harm such as misinformation spreading and erosion of public trust in verified information (harm to communities). Since the article does not report actual realized harm but warns about potential significant harm, this fits the definition of an AI Hazard rather than an AI Incident. The article also includes responses from OpenAI about efforts to improve accuracy, but the main focus is on the risk posed by inaccurate AI-generated news summaries.

The BBC fact-checks the news on ChatGPT, Gemini, Copilot and Perplexity: half show 'significant problems'

2025-02-13
La Razón
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI language models (AI systems) to generate news content. The study identifies that these AI systems have produced outputs with significant factual inaccuracies and editorial biases, which constitute harm to communities by spreading misinformation and potentially violating the right to accurate information. Since the harms are realized and directly linked to the AI systems' outputs, this qualifies as an AI Incident under the framework. The article does not merely warn of potential harm but documents actual inaccuracies and editorialization in AI-generated news content.

AI chatbots fall short on accuracy when reporting the news, according to a BBC study

2025-02-13
WWWhat's new
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) providing inaccurate news information, which could plausibly lead to harm through misinformation. However, the article focuses on the study's findings and warnings rather than describing a concrete incident of harm or a near miss. Therefore, it fits the definition of Complementary Information, as it provides important context and assessment about AI's current limitations and risks in news reporting without reporting a specific AI Incident or Hazard.

More than 50% of AI-generated news summaries contain serious flaws - PasionMóvil

2025-02-11
PasionMovil
Why's our monitor labelling this an incident or hazard?
The AI systems involved are chatbots (ChatGPT, Gemini, Copilot, Perplexity) that generate news summaries, clearly qualifying as AI systems. The study shows that over 50% of these summaries contain significant errors, including incorrect facts and altered quotes, which directly leads to misinformation harm to communities. This meets the criteria for an AI Incident because the AI systems' use has directly caused harm through dissemination of inaccurate information. The article does not merely discuss potential future harm or responses but documents realized harm from AI-generated content.

AI chatbots show 'significant inaccuracies' when summarizing the news, says BBC

2025-02-11
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use in summarizing news has directly led to the dissemination of inaccurate and distorted information. This constitutes harm to communities by spreading misinformation, which is a recognized form of harm under the framework. The inaccuracies are materialized and documented, not hypothetical, thus qualifying as an AI Incident rather than a hazard. The event does not focus on responses or governance but on the harm caused by the AI systems' outputs.