AI Chatbots Spread Misinformation After Charlie Kirk Assassination

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Following the assassination of US activist Charlie Kirk in Utah, AI chatbots like Perplexity, Grok, and Google's AI disseminated false and contradictory information, including denying the shooting and misidentifying suspects. This fueled confusion and conspiracy theories online, highlighting the harm caused by AI-generated misinformation during breaking news events.[AI generated]

Why's our monitor labelling this an incident or hazard?

AI chatbots are AI systems generating content in response to user queries. Their inaccurate or contradictory responses during a sensitive event have directly contributed to misinformation and confusion, which harms communities by disrupting the information environment. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.[AI generated]
AI principles
Accountability
Transparency & explainability
Safety
Robustness & digital security
Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Reputational
Psychological
Public interest

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots
Content generation

In other databases

Articles about this incident or hazard

False AI 'fact-checks' stir online chaos after Kirk assassination - The Economic Times

2025-09-12
Economic Times
Why's our monitor labelling this an incident or hazard?
AI chatbots are AI systems generating content in response to user queries. Their inaccurate or contradictory responses during a sensitive event have directly contributed to misinformation and confusion, which harms communities by disrupting the information environment. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.
False AI 'fact-checks' stir online chaos after Kirk assassination

2025-09-11
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI chatbots generating false and misleading information about a real violent event, causing confusion on social media. The systems' confident but inaccurate responses directly spread misinformation, harming communities and undermining trust in information sources. This qualifies as an AI Incident because the AI systems' use directly led to harm to communities through misinformation and social disruption.
From 'Still Alive' To 'Satirical': AI Chatbots Fuel Confusion Following Charlie Kirk's Killing

2025-09-12
NDTV
Why's our monitor labelling this an incident or hazard?
The AI chatbots disseminated false information about a violent killing, directly causing misinformation and confusion among the public, a form of harm to communities. Their confidently incorrect outputs spread false narratives and undermined factual understanding during a critical event, meeting the criteria for an AI Incident.
False AI 'fact-checks' stir online chaos after Charlie Kirk assassination

2025-09-12
The Hindu
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots such as Perplexity and Grok) whose outputs directly spread falsehoods about a violent event, harming communities by fueling misinformation and social chaos. The harm is realized rather than potential: false information was actively disseminated, causing confusion and reputational damage to innocent individuals. Therefore, this qualifies as an AI Incident.
Charlie Kirk's death proves AI chatbots aren't built for breaking news

2025-09-11
Mashable
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI chatbots disseminated false or misleading information about a real-world violent event, contributing to misinformation and conspiracy theories. The AI systems' outputs directly influenced public perception and social discourse, which constitutes harm to communities. The involvement of AI in generating and spreading this misinformation is clear and direct, fulfilling the criteria for an AI Incident. Although the harm is non-physical, it is significant and clearly articulated, involving misinformation that disrupts societal understanding and trust during a sensitive event.
Charlie Kirk's death proves AI chatbots aren't built for breaking news

2025-09-11
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The event involves AI chatbots that generated and spread false or misleading information about a real-world violent incident, amplifying conspiracy theories and misinformation during a sensitive breaking news event. The article documents actual misinformation spread by AI chatbots rather than merely warning of potential harm, so this qualifies as an AI Incident due to realized harm to communities.
AI chatbots spread false claims after Charlie Kirk assassination in Utah

2025-09-12
NZ Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI chatbots spreading false claims that have led to reputational harm to an innocent individual and increased misinformation in a volatile social context. The AI systems' outputs have directly contributed to harm by propagating falsehoods and misleading the public, which fits the definition of an AI Incident due to violations of rights and harm to communities. The involvement of AI in generating and spreading misinformation is clear and the harm is realized, not just potential.
False AI 'fact-checks' stir online chaos after Kirk assassination

2025-09-11
KULR-8 Local News
Why's our monitor labelling this an incident or hazard?
The article describes AI chatbots generating false and misleading information about the assassination of Charlie Kirk, including denying the shooting and misidentifying a suspect. This misinformation caused confusion and social disruption, a form of harm to communities. The chatbots' confident but inaccurate responses directly spread falsehoods, making this a realized harm rather than a potential risk, so it qualifies as an AI Incident.
False AI 'fact-checks' stir online chaos after Kirk assassination

2025-09-11
News on the Neck
Why's our monitor labelling this an incident or hazard?
The involvement of AI chatbots generating false or misleading information directly contributes to harm by fueling misinformation and social disruption, which constitutes harm to communities. The AI system's use in this context is leading to realized harm through misinformation dissemination. Therefore, this qualifies as an AI Incident.
Charlie Kirk Killing Sparks Wild Misinformation

2025-09-12
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots and AI-generated videos) actively spreading false information about a real-world violent incident. This misinformation harms communities by sowing confusion and distrust, a recognized form of harm under the framework. Because the misinformation is ongoing and influencing public discourse, the article documents realized harm rather than a potential risk, so it qualifies as an AI Incident.
False AI 'fact-checks' stir online chaos after Kirk assassination

2025-09-12
eNCAnews
Why's our monitor labelling this an incident or hazard?
The AI chatbots (Perplexity and Grok) are AI systems that generated confident but false statements about the assassination, including denying the shooting and misidentifying an innocent person as the shooter. This misinformation has already caused harm by confusing the public and damaging the reputation of an uninvolved individual. The event thus meets the criteria for an AI Incident due to harm to communities through misinformation and reputational harm, directly linked to the AI systems' outputs.
AI Chatbots Amplify Misinfo After Charlie Kirk Assassination

2025-09-12
WebProNews
Why's our monitor labelling this an incident or hazard?
The article details AI chatbots generating and spreading false narratives about a real violent incident, misleading the public and fueling conspiracy theories. The systems' failures in real-time fact-checking directly harmed communities, and the harm is realized and ongoing rather than merely potential, so this qualifies as an AI Incident.