Grok AI Spreads False Vandalism Accusation Against NBA Star Klay Thompson


Elon Musk's Grok AI chatbot on X misinterpreted basketball slang and generated a false news story accusing NBA player Klay Thompson of criminal vandalism. The AI's error led to the spread of misinformation and reputational harm, highlighting risks of AI-generated content misrepresenting real individuals.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Grok AI) produced a fabricated story by misinterpreting a metaphorical phrase as a literal event, resulting in false claims that Klay Thompson had vandalized houses. Although the article reports no physical or legal consequences, the malfunction of the AI's language comprehension caused misinformation to spread, which is itself a form of harm to communities through misleading content. The event therefore qualifies as an AI Incident.[AI generated]
AI principles
Accountability, Robustness & digital security, Safety, Transparency & explainability, Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Reputational, Public interest

Severity
AI incident

Business function
Other

AI system task
Interaction support/chatbots, Content generation

In other databases

Articles about this incident or hazard


Why Twitter's Grok AI bizarrely made Klay Thompson's 0-point game into a story about a 'brick-vandalism spree'

2024-04-17
For The Win
Why's our monitor labelling this an incident or hazard?
The AI system (Grok AI) produced a fabricated story by misinterpreting a metaphorical phrase as a literal event, resulting in false claims that Klay Thompson had vandalized houses. Although the article reports no physical or legal consequences, the malfunction of the AI's language comprehension caused misinformation to spread, which is itself a form of harm to communities through misleading content. The event therefore qualifies as an AI Incident.

X's AI bot is so dumb it can't tell the difference between a bad game and vandalism

2024-04-17
engadget
Why's our monitor labelling this an incident or hazard?
The AI system Grok produced a fabricated story falsely accusing a public figure of criminal behavior, which was widely shared and caused public confusion. This is a clear example of an AI malfunction leading to misinformation, a recognized form of harm to communities and public trust. Although no physical harm or legal violation is reported, the AI's role in spreading false information that damages reputations and misleads the public meets the criteria for an AI Incident, as the article explicitly documents the erroneous output and its impact on public perception.

Twitter users have been confusing Elon Musk's Grok AI with fake news and it's all rather amusing

2024-04-19
pcgamer
Why's our monitor labelling this an incident or hazard?
Grok is an AI system used to summarize breaking news on Twitter. It has produced false information by misinterpreting a joke as a factual event, which was then widely reposted, causing misinformation and potential reputational harm to the individual involved. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (through misinformation) and potentially to the reputation of a person, fulfilling the criteria for harm under the framework. The article also references similar issues with other AI chatbots, reinforcing the incident nature of the event.

Elon Musk's Grok keeps making up fake news based on X users' jokes

2024-04-18
Ars Technica
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved, as it generates news summaries and headlines based on platform posts. The false news about Klay Thompson is a direct output of Grok, leading to reputational harm and the dissemination of misinformation. The article details harm caused by the AI system's malfunction, including public misinformation and exposure to potential defamation lawsuits, and notes that the presence of disclaimers does not negate the harm caused. The event meets the criteria for an AI Incident because the AI system's use has directly led to harm to individuals and communities through misinformation and defamation.

Elon Musk's AI Publicly Accuses NBA Player of Criminal Vandalism

2024-04-19
Futurism
Why's our monitor labelling this an incident or hazard?
An AI system (Grok chatbot) is explicitly involved and has produced false claims about a person, which is misinformation. Although no direct harm such as injury or legal violation has been reported, the AI's role in spreading false information publicly and the platform's failure to correct it could plausibly lead to harm to the individual's reputation and broader social harm. This fits the definition of an AI Hazard, as the AI's malfunction (misinterpretation of jokes as facts) could plausibly lead to an AI Incident if such misinformation causes real-world harm. There is no indication that harm has already occurred, so it is not an AI Incident. The event is more than just general AI news or complementary information because it involves a specific AI system's malfunction with potential harm.

X's Grok AI Hallucinated Klay Thompson Vandalism: Not What "Shooting Bricks" Means

2024-04-18
Tech Times
Why's our monitor labelling this an incident or hazard?
The AI system (Grok AI chatbot) generated a false news story about Klay Thompson committing vandalism, which did not happen. This hallucination directly led to misinformation spreading on the social media platform, causing reputational harm to the individual and misleading the public. The AI's malfunction in generating false content is a direct cause of this harm. Therefore, this event meets the criteria for an AI Incident due to harm to communities through misinformation and reputational damage.

X's AI chatbot sparks vandalism rumor after misinterpreting basketball slang

2024-04-18
NewsBytes
Why's our monitor labelling this an incident or hazard?
An AI system (xAI's Grok chatbot) generated and disseminated false information due to misinterpretation of slang, leading to harm to the community by spreading a false rumor that caused social disruption and confusion. This constitutes harm to communities through misinformation, directly linked to the AI system's use. Therefore, this event qualifies as an AI Incident.

Warriors' Klay Thompson accused of 'brick-vandalism spree' by confused AI

2024-04-18
KCRA
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) generated a false news headline based on joke posts, causing misinformation. However, no actual vandalism or harm occurred, and the article clarifies the falsehood. The AI's malfunction is evident but did not lead to injury, rights violations, property harm, or other significant harms. The event is about the AI's error and its societal implications, fitting the definition of Complementary Information rather than an Incident or Hazard.

They simply fooled Elon Musk's artificial intelligence!

2024-04-20
newsbeezer.com
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned and is responsible for generating misleading and false content about a real person, which constitutes harm to the individual's reputation, a form of harm to communities and potentially a violation of rights. The misinformation has already been spread, indicating realized harm rather than just potential. Therefore, this qualifies as an AI Incident due to the direct role of the AI system in causing reputational harm through false information dissemination.

The launch of Grok - artificial intelligence may be premature

2024-04-20
Valley Post
Why's our monitor labelling this an incident or hazard?
Grok is an AI system generating content that directly led to the spread of false and misleading information about a public figure, which is a form of harm to communities and individuals. The AI malfunctioned by misreading context (the basketball term 'brick'), producing the false news, and the misinformation was then amplified, causing real reputational harm. This fits the definition of an AI Incident because the AI system's use directly led to harm through misinformation and defamation.