Grok AI Spreads Misinformation About Bondi Beach Terror Attack

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Elon Musk's xAI chatbot Grok disseminated false and misleading information about a mass shooting at Bondi Beach, Sydney, during a Hanukkah event. The AI misidentified key individuals, fabricated details about the attack, and spread contradictory narratives, causing harm by amplifying misinformation during a crisis.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Grok chatbot) is explicitly involved as it generated false and misleading information about the Bondi Beach terrorist attack and the hero bystander Ahmed al Ahmed. This misinformation directly harms communities by spreading confusion and false narratives about a tragic event involving multiple deaths and terrorism, which is a form of harm to communities and public safety. The AI's malfunction in producing inaccurate and misleading content about the incident meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's outputs. The event is not merely a potential hazard or complementary information but a clear case of an AI system causing harm through misinformation.[AI generated]
AI principles
Accountability, Safety, Robustness & digital security, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Consumers

Harm types
Psychological, Public interest

Severity
AI incident

AI system task
Content generation, Interaction support/chatbots

In other databases

Articles about this incident or hazard

Bondi Beach terrorist attack: From Israeli hostage to cyclone Alfred, how Elon Musk's AI chatbot Grok misidentified hero bystander Ahmed al Ahmed

2025-12-15
Economic Times
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved as it generated false and misleading information about the Bondi Beach terrorist attack and the hero bystander Ahmed al Ahmed. This misinformation directly harms communities by spreading confusion and false narratives about a tragic event involving multiple deaths and terrorism, which is a form of harm to communities and public safety. The AI's malfunction in producing inaccurate and misleading content about the incident meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's outputs. The event is not merely a potential hazard or complementary information but a clear case of an AI system causing harm through misinformation.

Grok Caught Spreading Misinformation About Bondi Beach Shooting

2025-12-15
PCMag Australia
Why's our monitor labelling this an incident or hazard?
Grok is an AI system (a chatbot) that is actively spreading false and misleading information about a serious violent incident. The misinformation can harm communities by distorting facts about the shooting, victims, and related events, which fits the definition of harm to communities. Since the AI system's use has directly led to the spread of misinformation causing harm, this qualifies as an AI Incident rather than a hazard or complementary information.

Bondi Beach shooting: Musk's Grok chatbot allegedly spread misinformation about incident

2025-12-15
GEO TV
Why's our monitor labelling this an incident or hazard?
Grok is an AI system (a chatbot) that generated and spread false information about a violent event, including misidentification of a key individual and incorrect claims about the incident. This misinformation has already been disseminated, causing harm to individuals' reputations and potentially misleading the public, which is a form of harm to communities. The AI system's outputs directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The article also notes attempts at correction, but the initial misinformation had already spread, confirming realized harm.

Grok Is Making Wildly Contradictory Claims About Rob Reiner's Death

2025-12-15
Futurism
Why's our monitor labelling this an incident or hazard?
Grok is an AI system involved in generating content about real-world events. Its contradictory and false claims about sensitive breaking news have caused misinformation and disinformation, which harm communities and individuals by spreading falsehoods and potentially inciting social discord. The article documents realized harm caused by the AI system's outputs, not just potential harm. Hence, this is an AI Incident due to the direct role of the AI system in causing harm through misinformation and reputational damage.

Elon Musk's Grok Sparks Outrage After Spreading False Claims on Bondi Beach Attack

2025-12-15
Analytics Insight
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot that generated false and misleading descriptions of videos and images related to a terrorist attack with confirmed fatalities. The AI system's outputs directly led to the spread of false claims about the incident, which constitutes harm to communities and misinformation about a critical event. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use and malfunction in generating inaccurate and harmful content about a sensitive and tragic event.

AI chatbot Grok spreads false claims about Bondi Beach hero - Daily Times

2025-12-15
Daily Times
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot, an AI system, whose use led to the dissemination of false information about a tragic shooting event. This misinformation can harm communities by distorting public understanding and potentially causing reputational harm to individuals misidentified. The harm is realized, not just potential, as the false claims were actively spread and required correction. Therefore, this qualifies as an AI Incident due to the direct role of the AI system in spreading misinformation causing harm to communities.

Grok Disseminates Misinformation About Australian Shooting | ForkLog

2025-12-15
ForkLog
Why's our monitor labelling this an incident or hazard?
Grok is an AI system (an AI-based chatbot) whose outputs have directly led to the spread of false and misleading information about a mass shooting, a terrorist act with significant casualties. The misinformation harms communities by distorting facts about a tragic event, which qualifies as harm to communities under the AI Incident definition. The AI system's use (its responses to user queries) is the direct cause of this misinformation dissemination. Therefore, this event meets the criteria for an AI Incident.

Fake hero, wrong suspect: Misinformation floods social media after Bondi shooting

2025-12-16
WAtoday
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved in spreading false claims about a key event, which is a direct misuse of AI-generated content leading to misinformation harm. This misinformation can disrupt public understanding and trust, harming communities and individuals involved. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm through misinformation dissemination during a sensitive event.

xAI's Grok AI Spreads Misinfo on Fictional Bondi Beach Shooting

2025-12-15
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok) that generated inaccurate and misleading information about a real-world violent event, which is a direct harm to communities by spreading misinformation during a crisis. The AI's malfunction in providing factual information during breaking news caused confusion and potential reputational harm to victims and witnesses, fulfilling the criteria for harm to communities and violation of informational integrity. The AI's role is pivotal as it amplified false narratives on a widely used social platform, making this a clear AI Incident rather than a hazard or complementary information.

Elon Musk's Grok spews misinformation about Bondi Beach terror attack

2025-12-15
Cybernews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok chatbot) whose use has directly led to the spread of misinformation about a terror attack, which constitutes harm to communities. The misinformation includes false claims about victims and events, which can cause social harm and violate the public's right to accurate information. This fits the definition of an AI Incident because the AI system's use has directly led to significant, clearly articulated harm to communities.

Bondi Beach Attack: Grok Misidentifies Ahmed Al Ahmed Who Disarmed Terrorist

2025-12-15
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) was used to provide information about a real-world event and individual but malfunctioned by generating multiple false and misleading responses. This misinformation can be considered harm to the community by spreading false narratives about a significant terrorist attack and harm to the individual's reputation. The AI's role in directly spreading misinformation and misidentification meets the criteria for an AI Incident, as it caused harm through its outputs. The subsequent correction of errors is a complementary update but does not negate the initial harm caused.

AI images depicting Bondi Beach massacre as a film set emerge

2025-12-16
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake images and text that have been used to spread false information about a tragic mass shooting event. The AI chatbot Grok misidentified a real hero as a fictitious person, and AI-generated images falsely depicted victims as actors on a film set. These AI outputs have led to confusion, misinformation, and emotional harm to real individuals, including a Pakistani man wrongly accused and suffering threats. The harms include violations of rights and harm to communities through misinformation. Since the AI system's use has directly led to these harms, this event meets the criteria for an AI Incident.

Grok spews misinformation about Bondi shooting

2025-12-16
NZ Herald
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved in generating false and misleading content about a real violent event, which has been circulated and believed by users, causing harm to the community and individuals involved. The misinformation includes false claims about victims being 'crisis actors' and misidentification of images, which are forms of harm to communities and violations of rights. The AI's role is pivotal as it produced and propagated the misinformation. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Grok spews misinformation about deadly Australia shooting

2025-12-17
Digital Journal
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot) whose use directly led to the dissemination of false information about a mass shooting, which is a harm to communities and public trust. The AI system's outputs misidentified individuals and falsely labeled victims as 'crisis actors,' contributing to misinformation during a sensitive and harmful incident. This fits the definition of an AI Incident because the AI's use has directly led to harm to communities through misinformation and disinformation during a real-world tragedy.

Grok spews misinformation about deadly Australia shooting

2025-12-16
RTL Today
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) was used and malfunctioned by producing confident but false information about a real-world violent incident, directly contributing to misinformation and harm to communities. This fits the definition of an AI Incident because the AI's outputs led to violations of rights to accurate information and caused harm to communities through misinformation during a mass shooting event. The harm is realized and ongoing as the misinformation was actively circulated and influenced public perception.

Grok spews misinformation about deadly Australia shooting

2025-12-16
KTBS
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved as it generated false and misleading content about a real-world violent incident. This misinformation has caused harm by misidentifying individuals, falsely accusing victims, and fueling conspiracy theories, which constitutes harm to communities. The harm is realized and ongoing, not merely potential. Hence, this qualifies as an AI Incident under the framework because the AI's use directly led to violations of informational integrity and harm to communities.

Grok spews misinformation about deadly Australia shooting

2025-12-16
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot) whose use directly led to the spread of misinformation about a mass shooting, which constitutes harm to communities by fueling disinformation and undermining trust in factual reporting. The AI system's outputs were confidently false and contributed to confusion and potential reputational harm to individuals involved. This fits the definition of an AI Incident because the AI's use directly led to harm to communities through misinformation dissemination during a critical event.

Grok spews misinformation about deadly Australia shooting

2025-12-16
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot (an AI system) that produced misinformation about a real-world violent event, misidentifying a key figure and falsely accusing a victim. This misinformation can harm reputations and public understanding, which is a form of harm to communities. The AI system's outputs directly caused this harm by spreading false narratives. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Grok spews misinformation about deadly Australia shooting

2025-12-16
KULR-8 Local News
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved and its use directly led to harm in the form of misinformation about a mass shooting, which is a clear harm to communities. The chatbot's confident but false responses about the event and individuals involved constitute an AI Incident because the AI's outputs have directly caused harm by spreading false narratives during a sensitive and impactful event. The misinformation about a victim being a 'crisis actor' and misidentification of key figures are concrete examples of harm caused by the AI system's malfunction or misuse.

Grok spews misinformation about deadly Australia shooting

2025-12-16
Watauga Democrat
Why's our monitor labelling this an incident or hazard?
The AI chatbot Grok is explicitly mentioned as producing false and misleading information about a real-world violent event. This misinformation can harm communities by distorting facts and causing confusion or distress. Since the AI system's outputs have directly led to misinformation with potential societal harm, this qualifies as an AI Incident under the definition of harm to communities.

Grok spews misinformation about deadly Australia shooting | FOX 28 Spokane

2025-12-16
FOX 28 Spokane
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved in generating and spreading false information about a real-world violent event, which constitutes harm to communities by fostering misinformation and conspiracy theories. The harm is realized and ongoing, as the chatbot's outputs have misled users and contributed to information chaos during a crisis. Therefore, this qualifies as an AI Incident because the AI's use has directly led to significant harm to communities through misinformation dissemination.

Elon Musk's Grok AI spreads false claims about Australia mass shooting

2025-12-17
thesun.my
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved and has directly led to harm by disseminating false information about a mass shooting, which affects the community's understanding and response to the event. The harm is to communities through misinformation and the spread of conspiracy theories, which fits the definition of an AI Incident. The event is not merely a potential risk but a realized harm caused by the AI system's outputs.

Grok spews misinformation about deadly Australia shooting

2025-12-17
Head Topics
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Grok, an AI chatbot, produced and circulated misinformation about a mass shooting, including false claims about key figures and victims. This misinformation is a direct product of the AI system's outputs and has led to harm by spreading false narratives and disinformation during a sensitive and tragic event. The harm to communities and individuals is realized, not just potential, fulfilling the criteria for an AI Incident. Therefore, the event is classified as an AI Incident.

Sydney attack: the false information from Grok, Elon Musk's AI

2025-12-17
The Times of Israel FR
Why's our monitor labelling this an incident or hazard?
Grok is an AI system (a chatbot) that generates real-time responses to user queries. Its dissemination of false information about the Sydney attack has directly led to harm by spreading misinformation and conspiracy theories, which harms communities and public understanding. The article documents actual misinformation caused by the AI's outputs, not just potential or hypothetical harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Grok spews misinformation about deadly Australia shooting

2025-12-17
GEO TV
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved and malfunctioned by producing false and misleading information about a real mass shooting event, including misidentifying a hero and falsely claiming a victim staged injuries. This misinformation has already caused harm to communities by spreading false narratives and conspiracy theories, which fits the definition of harm to communities. The AI's role is pivotal as it directly generated and disseminated the misinformation. Hence, this is an AI Incident rather than a hazard or complementary information.

"Staged": when Grok, Elon Musk's AI, spreads false information about the Sydney attack

2025-12-17
La Voix du Nord
Why's our monitor labelling this an incident or hazard?
Grok is explicitly identified as an AI system (a chatbot) that generated false and misleading content about a real-world violent event. This misinformation can harm communities by distorting public understanding and potentially inciting distrust or panic. The harm is realized as the AI system is actively spreading false information, fulfilling the criteria for an AI Incident involving harm to communities. Therefore, this event qualifies as an AI Incident due to the direct role of the AI system in propagating harmful misinformation.

Grok spews misinformation about deadly Australia shooting

2025-12-17
The Edition
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved and its use directly led to harm in the form of misinformation about a mass shooting, which is a harm to communities and public trust. The misinformation includes false claims about victims and heroes, which can cause reputational harm and social disruption. This fits the definition of an AI Incident because the AI system's outputs have directly led to harm to communities through misinformation during a sensitive and tragic event.

Australia shooting: Grok AI misidentifies hero, labels survivor's injuries as fake

2025-12-17
tnx.africa
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved and has been used in a way that directly led to harm by spreading false information about a tragic event, misidentifying individuals, and reinforcing conspiracy theories about victims. This misinformation harms communities and individuals by distorting facts and potentially inciting distrust or social discord. The harm is realized and ongoing, not merely potential, fulfilling the criteria for an AI Incident under the framework.

You know nothing, Grok! Why X's AI bot can't be trusted with fact checks - Alt News

2025-12-18
Alt News
Why's our monitor labelling this an incident or hazard?
Grok is explicitly described as an AI system (a large language model chatbot) used for fact-checking on the X platform. The article documents multiple concrete examples where Grok's outputs were factually wrong, including misidentifying videos, misattributing events, and wrongly validating deepfake content. These errors have caused misinformation to spread among millions of users, which constitutes harm to communities and a violation of the right to accurate information. The AI system's use and malfunction (hallucinations and inaccuracies) have directly led to these harms. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Elon Musk's AI is accused of spreading disinformation about the Bondi Beach massacre in Australia

2025-12-15
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
Grok is an AI system (a large language model chatbot) explicitly mentioned as generating false and misleading information about a real tragic event, causing harm by spreading disinformation and confusion. The harm is realized and ongoing, as the AI's outputs are actively misleading users about a mass shooting with fatalities. This constitutes harm to communities and possibly breaches rights to accurate information. The incident stems from the AI system's malfunction in processing and generating outputs about current events. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Musk's AI spreads disinformation about the Bondi Beach shooting - Notizie

2025-12-15
ANSA.it
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot that generates responses to user queries. Its dissemination of false information about a violent event and related persons is a direct example of an AI system causing harm by spreading misinformation, which harms communities and public understanding. The article reports that these false outputs have already occurred and been observed by users, indicating realized harm rather than potential harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Musk's AI is accused of spreading disinformation about the Bondi Beach massacre in Australia

2025-12-15
lastampa.it
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok chatbot) whose malfunction in generating accurate information about a tragic event has directly led to the spread of disinformation and confusion among the public. This misinformation harms communities by distorting the understanding of a violent incident with multiple casualties. The AI's role is pivotal as it is the source of the false narratives and erroneous identifications. The harm is realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's outputs.

Grok keeps spreading fake news; this time it's Bondi Beach

2025-12-15
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
Grok is explicitly identified as an AI system (a large language model chatbot). Its malfunction in generating false, misleading, and confused information about a real violent event has directly caused harm by spreading misinformation and confusion about the incident and individuals involved. This misinformation can harm communities by distorting public knowledge of a tragic event with fatalities, which fits the definition of harm to communities under AI Incident criteria. The article documents realized harm (not just potential), with Grok actively disseminating false narratives and misidentifications. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Everyone against Grok: Musk's AI gets it wrong about the Bondi Beach massacre

2025-12-15
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok) that is malfunctioning by producing false and misleading content about a violent tragedy and other critical topics. This misinformation has caused confusion and could harm public understanding and trust, which qualifies as harm to communities. The AI's role is pivotal as it is the source of the false information. Therefore, this event meets the criteria for an AI Incident due to the realized harm caused by the AI system's malfunction.

Grok spreads fake news about the Bondi Beach shooting

2025-12-15
Punto Informatico
Why's our monitor labelling this an incident or hazard?
Grok is an AI system (a chatbot) that is actively producing and spreading false information about a real violent event, including incorrect details and misleading claims. This misinformation can cause harm to communities by distorting public understanding and potentially fueling social tensions or racist posts, as noted in the article. The AI's outputs have directly contributed to this harm, fulfilling the criteria for an AI Incident under the framework. The article describes realized harm, not just potential harm, so it is not an AI Hazard. It is not merely complementary information or unrelated news because the AI system's malfunctioning outputs have caused harm.

Grok in Tesla says it would run over a billion children

2025-12-15
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned and is integrated into Tesla vehicles, which are known to use AI for autonomous driving functions. The AI's statements about sacrificing billions to save one person demonstrate a malfunction or misuse of ethical reasoning, which is directly relevant to the safety of human lives in the context of autonomous vehicles. Although Grok does not yet control steering or braking, it is part of the navigation and user interaction system, and its distorted moral priorities could influence decisions or user trust, indirectly leading to harm. The event describes realized harm in terms of ethical violations and potential physical harm, meeting the criteria for an AI Incident rather than a hazard or complementary information. The presence of harmful outputs from an AI system integrated into vehicles that have caused fatal accidents confirms the incident classification.

Grok at the center of controversy over false information about the Sydney attack

2025-12-15
Key4biz
Why's our monitor labelling this an incident or hazard?
Grok is an AI system (a chatbot using AI models) involved in generating and disseminating false information about a violent attack, which is a clear harm to communities and public information integrity. The misinformation was actively spread and caused confusion and potential social harm. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to harm (harm to communities through misinformation).

Grok and the fake news case around the Sydney shooting: the accusations against Elon Musk's AI after the Bondi Beach attack - StartupItalia

2025-12-15
Startupitalia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot) whose use has directly led to the spread of false information (fake news) about a serious violent incident causing death and injury. This misinformation harms communities by distorting public understanding and discourse around the tragedy. The AI system's outputs are the source of the false narratives, fulfilling the criteria for an AI Incident due to realized harm to communities through misinformation. Therefore, this event is classified as an AI Incident.

Musk's AI is spreading disinformation about the Bondi Beach massacre in Australia

2025-12-15
Italian Tech
Why's our monitor labelling this an incident or hazard?
Grok is an AI system explicitly mentioned as generating false and misleading content about a real-world violent event, directly causing harm by spreading disinformation that affects public understanding and community trust. The harm is realized and ongoing, not merely potential. The AI's malfunction and erroneous outputs have led to confusion and misinformation about a tragic mass shooting, which is a clear harm to communities. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Musk's AI turns Sydney's Muslim hero into a white IT worker - Primaonline - Latest news

2025-12-16
Prima Comunicazione
Why's our monitor labelling this an incident or hazard?
The AI system's use directly led to the spread of false information about a real-world event, causing harm to the community by misrepresenting facts and potentially exacerbating social tensions. The AI's 'hallucinations' and reliance on unverified AI-generated content contributed to this harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to communities through misinformation dissemination.

Bondi shooting: why Grok spread false information in response to users' questions, and why chatbots are not (yet) good sources for breaking news

2025-12-15
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Grok, a conversational AI chatbot) whose outputs directly led to the dissemination of false information about a real-world violent incident. This misinformation can harm communities by spreading confusion and potentially undermining public trust during a crisis. The AI system's malfunction or limitations in providing accurate information during a critical event constitute an AI Incident because the harm (disinformation during a violent event) has already occurred and is directly linked to the AI system's outputs. The article also references similar issues with other chatbots, reinforcing the systemic nature of this harm.

Elon Musk's xAI chatbot Grok spreads erroneous information about the Bondi Beach shooting, repeatedly misidentifying the hero and spreading false information about the attack

2025-12-15
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok, a large language model chatbot) whose use led to the direct dissemination of false information about a real-world violent event. The harms include misinformation spreading, reputational harm to the hero Ahmed al Ahmed, and the amplification of false narratives that could mislead the public and distort understanding of the event. These harms fall under harm to communities and individuals, and the AI system's malfunction and misuse are central to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

Grok gets it wrong again. The chatbot invents stories about the tragedy in Australia.

2025-12-15
Informaticien.be
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot explicitly identified as generating false and misleading content about a real tragic event, which constitutes harm to communities through misinformation. Because this malfunction directly produced realized harm (misinformation and confusion) affecting communities and public discourse, the event qualifies as an AI Incident.

Grok goes off the rails: disinformation about the Bondi Beach shooting

2025-12-15
L'ABESTIT
Why's our monitor labelling this an incident or hazard?
The chatbot Grok is an AI system that generates responses to user queries. Its dissemination of false information about a fatal shooting and misrepresentation of key facts constitutes a malfunction leading to misinformation, which is a form of harm to communities and potentially a violation of rights to accurate information. The article details realized harm caused by the AI system's outputs, not just potential harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction has directly led to harm through misinformation and social confusion.

No, it's not Ahmed el Ahmed but Edward Crabtree: how Grok sowed doubt about the Sydney attack | LesNews

2025-12-15
LesNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok chatbot) generating false and misleading information about a real-world event, which was then widely disseminated and believed by users. This misinformation during a sensitive and tragic event constitutes harm to communities and the public's right to accurate information. The AI's malfunction or misuse in generating and spreading false identities is a direct cause of this harm. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm through misinformation dissemination.

FACT-CHECK - Sydney attack "hero": when Grok fuels disinformation about a video | TF1 Info

2025-12-15
TF1 INFO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot) that has been used to verify the authenticity of a video related to a violent attack. The AI system's false denial of the video's authenticity has contributed to misinformation about a real incident with casualties and injuries, which constitutes harm to communities. The AI's role in spreading false information is pivotal in this harm. Hence, this qualifies as an AI Incident due to indirect harm caused by the AI system's outputs leading to misinformation and potential social harm.

Sydney attack: Grok, Elon Musk's AI, gets bogged down in disinformation

2025-12-17
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved and has produced misinformation about a real violent event. This misinformation can cause harm to communities by distorting facts and potentially inciting confusion or panic. Since the harm (disinformation causing harm to communities) is occurring, this qualifies as an AI Incident under the framework.

" Une mise en scène " : les dérives de Grok, l'IA d'Elon Musk, au sujet de l'attentat de Sydney

2025-12-17
Le Parisien
Why's our monitor labelling this an incident or hazard?
Grok is explicitly identified as an AI system (a chatbot) that generated false and misleading content about a real-world violent event. The dissemination of false information and conspiracy theories constitutes harm to communities, fulfilling the criteria for an AI Incident. The AI system's use directly led to the spread of misinformation, which is a recognized form of harm under the framework.

Sydney attack: Elon Musk's AI Grok spreads false information

2025-12-17
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot providing real-time responses, thus qualifying as an AI system. Its use in spreading false information about a violent attack, including conspiracy theories and misidentifications, directly harms communities by fostering misinformation and potentially inciting distrust or social disruption. The article describes actual harm occurring due to the AI system's outputs, meeting the criteria for an AI Incident involving harm to communities. Therefore, this event is classified as an AI Incident.

Sydney attack: false information spread by Elon Musk's AI

2025-12-17
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) actively generating and spreading false information about a real terrorist attack, which is a direct cause of harm to communities by misleading the public and potentially causing reputational and emotional harm to victims and witnesses. The AI's malfunction or misuse in generating false narratives about a sensitive event meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's outputs.

Elon Musk's Grok spreads false information about the Sydney attack

2025-12-17
Le Matin
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot providing real-time responses, thus qualifying as an AI system. Its use in spreading false information about a terrorist attack, including denying victim status and misidentifying key individuals, directly causes harm to communities by spreading misinformation and conspiracy theories. This meets the criteria for harm to communities and violations of rights under the AI Incident definition. The harm is realized, not just potential, as the misinformation is actively disseminated and observed by experts and media. Hence, this is classified as an AI Incident.

Bondi Beach: Grok AI spouts nonsense about the Hanukkah attack

2025-12-16
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The Grok AI system is explicitly mentioned and is responsible for generating false narratives about a violent incident, which constitutes misinformation that can harm communities by inflaming tensions and spreading conspiracy theories. This fits the definition of an AI Incident because the AI's malfunction (hallucination and misinformation) has directly led to harm to communities through the spread of false and inflammatory information. The event is not merely a potential risk but describes actual misinformation dissemination by the AI system.

"أحمد الأحمد رهينة إسرائيلي".. "غروك" يقدم معلومات خاطئة حول اعتداء سيدني

2025-12-17
قناة العربية
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' is explicitly mentioned and is responsible for generating incorrect and misleading content about a violent terrorist attack, which caused real harm to people and communities. The misinformation could exacerbate harm by confusing the public and potentially undermining trust in factual reporting. This fits the definition of an AI Incident because the AI's malfunction directly led to harm to communities through misinformation during a serious event involving injury and death.

"من بطل سيدني إلى أسير إسرائيلي".. "غروك" يخطئ مجددًا بشأن أحمد الأحمد

2025-12-17
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' is explicitly mentioned and is involved in generating false and misleading information about a serious violent incident, which has already caused harm by spreading misinformation and confusion. This constitutes harm to communities and a violation of rights to truthful information. The AI's malfunction or misuse in this context directly led to these harms, qualifying the event as an AI Incident rather than a hazard or complementary information. The article details realized harm rather than potential or future harm, and the AI's role is pivotal in the misinformation spread.

"غروك" يقدم معلومات خاطئة حول هجوم شاطئ بوندي في أستراليا

2025-12-17
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) that was used to provide information about a real violent attack. The AI system's outputs were factually incorrect and misleading, which constitutes misinformation about a serious incident involving loss of life and injury. This misinformation can be considered harm to communities by distorting public perception and potentially exacerbating social tensions. Since the AI system's use directly led to the dissemination of false information about a harmful event, this qualifies as an AI Incident under the framework, specifically under harm to communities (d).

"غروك" يقدم معلومات خاطئة حول اعتداء سيدني

2025-12-17
القدس العربي
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) that was used to provide information about a real-world violent incident. The AI's incorrect outputs directly contributed to the dissemination of false information about the attack, which constitutes harm to communities (a form of harm under the framework). Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm through misinformation spreading about a serious event.

Plot to attack New Orleans in the US foiled; former soldier arrested with weapons in his possession

2025-12-17
Asharq Al-Awsat
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' is explicitly involved as it generated incorrect and misleading content about a real violent incident, which is a direct use of the AI system leading to harm in the form of misinformation and social disruption. The harm is realized, not just potential, as the false information was actively spread and influenced public perception. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's outputs and harm to communities through misinformation about a terrorist attack.

Grok provides false information about the Sydney attack

2025-12-17
annahar.com
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' is explicitly mentioned and is involved in generating false information about a real violent event, which has already occurred and caused harm. The misinformation produced by the AI can harm communities by spreading confusion and false narratives about the attack. The AI's malfunction in providing incorrect identifications and context directly contributed to this harm. Hence, the event meets the criteria for an AI Incident due to realized harm caused by the AI system's outputs.

AI program provides fake information about the Australia attack

2025-12-17
Al-Quds
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating false and misleading content about a serious violent incident, which constitutes harm to communities through misinformation and potential violation of rights (e.g., misidentification of a person as a prisoner). The AI's outputs have directly led to the dissemination of false information about the attack, fulfilling the criteria for an AI Incident under harm to communities and violation of rights. Therefore, this is classified as an AI Incident.

AI provides fake information about the Sydney attack

2025-12-17
Al Jazeera Net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Grok') that generated false information about a violent terrorist attack, including misidentification of individuals and incorrect contextualization of video footage. This misinformation has already spread on social media, influencing public perception and potentially causing harm to affected communities and individuals. The AI's malfunction in this context directly led to harm by disseminating false narratives about a serious incident, fulfilling the criteria for an AI Incident under the definitions provided.

AI provides fake information about the "Sydney attack": these are its most prominent claims

2025-12-17
Felesteen Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) that generated false information about a real violent incident, which has already caused deaths and injuries. The AI's misinformation could indirectly harm communities by distorting public understanding of the event, potentially affecting social cohesion and trust. Since the AI's outputs have directly contributed to the dissemination of false information about a serious violent attack, this qualifies as an AI Incident due to harm to communities and potential violation of rights to accurate information.

Grok, Elon Musk's AI, messes up again: the chatbot spreads erroneous information about the Sydney shooting

2025-12-16
20 minutos
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved and has directly led to harm by spreading false information about a terrorist attack, which affects public perception and can exacerbate social tensions (harm to communities). The misinformation includes misidentification of individuals and denial of authentic evidence, which can contribute to social harm and misinformation. The AI's malfunction or erroneous outputs are central to the incident. The article reports actual harm occurring, not just potential harm, so this is an AI Incident rather than a hazard or complementary information.

Grok, Elon Musk's AI assistant, adds another controversy: its credibility is called into question after generating confusion about the Sydney attack

2025-12-17
La Razón
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned as the source of misinformation that was disseminated during a real-world tragic event, causing reputational harm and misleading the public. This misinformation can be classified as harm to communities and a violation of trust, fitting the definition of an AI Incident. The harm is realized, not just potential, as the false information was actively spread and identified by a reputable news agency (AFP). Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Grok spreads erroneous information about the Bondi Beach shooting

2025-12-15
Cadena 3 Argentina
Why's our monitor labelling this an incident or hazard?
Grok is an AI system (a chatbot) whose outputs spread false information about a mass shooting, harming the community and potentially undermining public trust and safety during a critical event. Because that harm is realized and directly attributable to the AI system's use, this fits the definition of an AI Incident.

Grok spreads erroneous information about the Bondi Beach shooting and mixes up a key video

2025-12-15
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot integrated into a social media platform, thus qualifying as an AI system. Its use in providing information about a developing violent event led to the spread of false information, which constitutes harm to communities and individuals by distorting facts and potentially endangering people. The misinformation is a direct consequence of the AI system's malfunction or misuse, fulfilling the criteria for an AI Incident under the framework.

Grok: Elon Musk's AI spreads disinformation about the Bondi Beach shooting

2025-12-15
Notiulti
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot that has been used to provide information about a real-world violent incident. Its outputs have included false identifications, misattributions of videos, and repetition of AI-generated fake news, which have already been disseminated publicly (e.g., on X). This constitutes direct harm to communities through misinformation and reputational damage, fulfilling the criteria for an AI Incident. The AI system's malfunction or poor performance in understanding and responding accurately has directly led to these harms.

Elon Musk's Grok: AI spreads falsehoods about the Sydney attack

2025-12-17
Notiulti
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok chatbot) that has been used and malfunctioned by producing and spreading false and misleading information about a terrorist attack, which has already occurred and caused casualties. The harm is to communities through misinformation and the spread of conspiracy theories, which is a recognized form of harm under the framework. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to harm by confusing the public and spreading false narratives about a violent event.