Grok AI Spreads Misinformation by Mistaking Jokes for Real News


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Grok, the AI system on X (formerly Twitter) owned by Elon Musk, repeatedly generated and promoted fake news stories by misinterpreting joke posts as factual events, including false reports about a New York earthquake response and the solar eclipse. The fabricated stories spread misinformation among users and highlighted Grok's difficulty distinguishing satire from real news.[AI generated]

Why's our monitor labelling this an incident or hazard?

Grok is an AI system whose generated content has directly led to the spread of false news stories, a form of harm to communities through misinformation and potential public confusion or panic. The article details actual instances in which Grok produced inaccurate and misleading news, meeting the criteria for an AI Incident because the harm from the AI's outputs has been realized. The harm is indirect but clear: the AI's outputs misinform the public, which is a recognized form of harm under the framework.[AI generated]
AI principles
Accountability; Safety; Robustness & digital security; Transparency & explainability; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Reputational; Public interest; Psychological

Severity
AI incident

Business function
Citizen/customer service

AI system task
Content generation; Interaction support/chatbots; Organisation/recommenders

Articles about this incident or hazard


Grok AI Creates Bizarre Fake News About the Solar Eclipse Thanks to Jokes on X

2024-04-08
Gizmodo

Elon Musk's Grok AI Struggles Again, Turns Jokes About Solar Eclipse Into Fake News

2024-04-09
TimesNow
Why's our monitor labelling this an incident or hazard?
Grok is an AI system used on a social media platform to generate content. The fake news is a direct output of the system's use, and the resulting misinformation harms communities by spreading false narratives. Because the article states that Grok has already generated fake news, the harm is realized, qualifying this as an AI Incident under the framework's definition of harm to communities.

X's Grok AI Thinks Jokes Are Actual News Stories

2024-04-08
WebProNews
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system used to curate and present news content. Its failure to distinguish jokes from real news has directly led to the dissemination of false information, which constitutes harm to communities by misleading the public. Although no physical harm is reported, the spread of misinformation is a recognized form of harm under the framework. This therefore qualifies as an AI Incident due to the realized harm of misinformation caused by the AI system's malfunction.