AI-Generated Fake News Sites Spread Misinformation Globally

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Hundreds of AI-powered websites are rapidly producing and spreading fake news stories, including a viral false report about the Israeli Prime Minister's psychiatrist. These generative AI tools enable large-scale, low-cost fabrication of misinformation, fueling propaganda and undermining public trust, especially during sensitive geopolitical events and election periods.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating false news content that is actively disseminated, causing misinformation and social harm during a sensitive geopolitical conflict. The AI-generated content has directly contributed to the spread of false information, which harms communities by distorting public understanding and potentially exacerbating tensions. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated misinformation.[AI generated]
AI principles
Accountability
Transparency & explainability
Respect of human rights
Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public
Government

Harm types
Reputational
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Can We Trust AI-Generated News? A Closer Look at the Israeli Prime Minister's Psychiatrist Story | Cryptopolitan

2024-03-11
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false news content that is actively disseminated, causing misinformation and social harm during a sensitive geopolitical conflict. The AI-generated content has directly contributed to the spread of false information, which harms communities by distorting public understanding and potentially exacerbating tensions. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated misinformation.
Proliferating news Sites Spew AI-generated Fake Stories - UrduPoint

2024-03-11
UrduPoint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI tools to create fake news content that is being actively spread online. This misinformation constitutes harm to communities by undermining trust in information and potentially affecting democratic processes. Since the AI-generated content is currently being disseminated and causing harm, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to communities through misinformation.
Proliferating 'news' sites spew AI-generated fake stories

2024-03-11
Legit.ng - Nigeria news.
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fake news content that is widely disseminated and believed, causing harm to communities through misinformation and political manipulation. The article documents realized harm from AI-generated false narratives influencing public perception and political debates, meeting the criteria for an AI Incident. The AI system's use is central to the harm, not merely a potential risk, so this is not a hazard or complementary information but an incident.
Proliferating 'news' sites spew AI-generated fake stories

2024-03-11
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI tools are being used to create fake news stories that are widely disseminated and believed, causing misinformation and propaganda that harm communities and political processes. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (harm category d). The harm is realized and ongoing, not merely potential, as false narratives have influenced public discourse and policy debates. Hence, this is an AI Incident rather than a hazard or complementary information.
Proliferating 'news' sites spew AI-generated fake stories

2024-03-11
CNA
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating and disseminating false news content that has already caused harm by misleading the public and influencing social and political discourse. The AI-generated fake stories have been widely circulated, including on social media and television, demonstrating direct harm to communities through misinformation. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to communities by spreading false information and propaganda.
Proliferating 'news' sites spew AI-generated fake stories

2024-03-11
The Japan Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake news stories being actively spread by hundreds of websites, which is a direct use of AI systems causing harm to communities through misinformation and propaganda. This meets the criteria for an AI Incident as the harm is realized and ongoing, not just a potential risk. The AI system's use in fabricating and disseminating false information is central to the harm described.
Internet flooded with AI-powered 'news' sites that push propaganda, false stories: Report

2024-03-11
WION
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI tools are used to produce fake news content that is widely disseminated and believed, causing misinformation and propaganda harms. This constitutes harm to communities and societal trust, fulfilling the criteria for an AI Incident. The AI system's use in generating and spreading false information directly leads to these harms. Therefore, the event is classified as an AI Incident rather than a hazard or complementary information.
Proliferating 'news' sites spew AI-generated fake stories

2024-03-11
Brattleboro Reformer
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI tools are being used to create fake news stories that are widely disseminated and believed, causing misinformation and political manipulation. This constitutes harm to communities and societal disruption, fitting the definition of an AI Incident. The AI system's use is central to the harm, as it enables rapid, large-scale fabrication of false content that is difficult to distinguish from real news, directly contributing to the harm described.
Proliferating 'news' sites spew AI-generated fake stories

2024-03-13
Robo Daily
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI tools to create fake news content that is difficult to distinguish from authentic information. The widespread dissemination of this misinformation is causing harm to communities by fueling false narratives and political propaganda, which fits the definition of an AI Incident under harm category (d) - harm to communities. The AI systems' use in fabricating and spreading these stories is a direct contributing factor to the harm described. Therefore, this event qualifies as an AI Incident.
Sites are relying on AI to generate fake articles

2024-03-11
Radio Canada
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems to create false news articles that have been published and spread widely, including false claims about political leaders and fabricated events. This dissemination of AI-generated misinformation has directly led to harm to communities by misleading the public and political discourse, fulfilling the criteria for an AI Incident under harm to communities. The AI system's use in generating and spreading false information is central to the harm described.
Numerous sites are using AI to spread fake news

2024-03-11
20minutes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI generative systems used to create fake news articles that have been widely disseminated and believed, causing misinformation and social harm. The AI systems' outputs directly contribute to the harm by producing false content that influences public perception and political debate. The harm is realized and ongoing, not merely potential, thus qualifying as an AI Incident rather than a hazard or complementary information.
Researchers warn about news generated by artificial intelligence

2024-03-11
BFMTV
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems generating false news articles that have been published and widely disseminated, causing misinformation and social harm. The involvement of AI in generating these falsehoods is direct and central to the harm. The harms include misleading the public, influencing political discussions, and spreading false narratives, which qualify as harm to communities and violations of rights. Since the harm is occurring and the AI systems are the cause, this is an AI Incident rather than a hazard or complementary information.
Researchers alarmed by sites using AI to generate false information

2024-03-11
Le Journal de Montreal
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as generative AI used to produce false news content. The use of these AI systems has directly led to harm to communities through the spread of misinformation and disinformation, which is a recognized form of harm under the framework. The article details actual incidents of false information generated and disseminated by AI, including political misinformation and fabricated news stories that have been widely shared and believed. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI systems' outputs.
Artificial intelligence: hundreds of sites rely on it to generate news, sometimes knowingly false

2024-03-11
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems to create false or misleading news content that has been widely disseminated and believed by the public and political figures, causing harm to communities and violating rights related to truthful information. The AI systems are directly involved in generating the misinformation, which has led to real harm, meeting the criteria for an AI Incident. The harm includes misinformation affecting political discourse and public understanding, which is a form of harm to communities and a violation of rights. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Hundreds of sites rely on AI to generate news, sometimes knowingly false

2024-03-11
Les Affaires
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate news content, some of which is deliberately false, leading to misinformation spreading among the public and decision-makers. This constitutes harm to communities through the dissemination of false information, fulfilling the criteria for an AI Incident. The AI's role is pivotal as it automates and scales the production of misleading content, directly causing the harm described. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Hundreds of sites rely on AI to generate news, sometimes knowingly false

2024-03-11
Var-Matin
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems to create false news articles that have been widely disseminated and believed, causing misinformation and harm to communities. The AI systems are directly involved in generating the harmful content, and the harm (disinformation, misleading the public, political manipulation) is realized and ongoing. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities through misinformation.