Meta AI Misinformation on Trump Assassination Attempt


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta's AI assistant incorrectly claimed that an assassination attempt on Donald Trump did not occur, despite being programmed to avoid the topic. This misinformation, attributed to AI 'hallucinations,' raised concerns about AI reliability and public trust. Meta is addressing the issue, acknowledging the potential harm of such misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

This case involves a clear AI system malfunction—an instance of “hallucination”—that directly led to wrongful content removal and thus a violation of users’ access to information. The AI’s error had real consequences (censorship claims, reputational harm), meeting the criteria for an AI Incident.[AI generated]
AI principles
Robustness & digital security; Safety; Transparency & explainability; Accountability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; General or personal use

Affected stakeholders
General public

Harm types
Reputational; Public interest

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots; Content generation


Articles about this incident or hazard


Donald Trump assassination attempt image: Meta calls it 'hallucination'. Blames it on AI. Really? Details here

2024-07-31
Economic Times
Why's our monitor labelling this an incident or hazard?
This case involves a clear AI system malfunction—an instance of “hallucination”—that directly led to wrongful content removal and thus a violation of users’ access to information. The AI’s error had real consequences (censorship claims, reputational harm), meeting the criteria for an AI Incident.

Meta's AI claimed the Trump assassination attempt didn't happen. The company is blaming 'hallucinations.'

2024-07-31
Yahoo
Why's our monitor labelling this an incident or hazard?
An AI system (Meta’s internal chatbot and content-labelling algorithms) malfunctioned (‘hallucinations’) and directly generated false claims about a real-world event, causing harm via misinformation and undermining public trust. This qualifies as an AI Incident because the AI’s erroneous outputs led to materialized harm.

Trump says Mark Zuckerberg keeps calling him on the phone

2024-08-02
The Verge
Why's our monitor labelling this an incident or hazard?
This item is not reporting a new AI Incident or plausible hazard. It mainly describes a follow-up (Zuckerberg calling to apologize, company blog post attributing the error to hallucinations) to an earlier AI assistant mistake. As such, it provides complementary information on a prior AI-related error and its remediation rather than detailing a fresh incident or hazard.

Meta AI called Trump shooting fake despite being programmed to ignore questions

2024-07-31
Ars Technica
Why's our monitor labelling this an incident or hazard?
An AI system (Meta AI chatbot) actively provided incorrect statements denying a real attempt on Trump's life. The AI’s hallucination directly led to the spread of misinformation, harming public understanding of a serious event. This constitutes an AI Incident because the chatbot’s behavior caused realized harm (misinformation) rather than merely posing a future risk.

Meta Claims Its AI was 'Hallucinating' when It Covered Up Assassination Attempt on Donald Trump

2024-07-31
Breitbart
Why's our monitor labelling this an incident or hazard?
Meta’s AI assistant malfunctioned—hallucinating that the attempted assassination never occurred—and thus directly produced false information that impacts public understanding of a major news story. This is a realized harm to the information ecosystem (a form of societal/community harm) caused by the AI’s outputs. Therefore, it qualifies as an AI Incident.

Meta's AI apparently said the shooting at the Trump rally didn't happen. Here's what ChatGPT and Claude said.

2024-08-01
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) providing inaccurate or incomplete information about a real-world event, which is a known limitation (hallucinations and knowledge cutoffs). While these inaccuracies could potentially misinform users, the article does not report any actual harm or violation resulting from these AI outputs. The companies are actively addressing these issues, and disclaimers are provided. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information about AI system behavior, limitations, and responses to prior issues.

Meta's AI claimed the Trump assassination attempt didn't happen. The company is blaming 'hallucinations.'

2024-07-31
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's AI chatbot) that malfunctioned by generating false outputs about a real-world violent incident. This misinformation can harm communities by spreading false narratives and undermining trust in information sources, which aligns with harm to communities under the AI Incident definition. The AI's role is pivotal as the misinformation originated from its outputs, and the harm is realized through public confusion and political reactions. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Meta blames hallucinations after its AI said Trump rally shooting didn't happen

2024-07-31
The Verge
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's AI assistant) that produced false information about a real violent incident (attempted assassination of Donald Trump). This is a direct malfunction of the AI system leading to misinformation, which constitutes harm to communities by spreading false narratives about important events. Therefore, this qualifies as an AI Incident due to the realized harm of misinformation dissemination caused by the AI's erroneous output.

Meta explains why its AI claimed Trump's assassination attempt didn't happen

2024-07-31
engadget
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's chatbot) is explicitly involved and malfunctioned by hallucinating, producing false information denying a real assassination attempt on Trump. This misinformation can harm communities by spreading false narratives and undermining trust in factual events. The harm is realized as the AI has already generated and disseminated these false claims. Therefore, this qualifies as an AI Incident due to the AI system's malfunction directly leading to harm through misinformation.

'Trump shooting didn't happen': Meta's AI assistant says; company blames hallucinations for incorrect response

2024-07-31
The Hindu
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's AI assistant) that malfunctioned by providing false information about a real incident. This hallucination directly led to misinformation, which can harm public understanding and trust, thus constituting harm to communities. The AI's role in generating the false denial is pivotal. Therefore, this qualifies as an AI Incident due to the realized harm of misinformation dissemination caused by the AI system's malfunction.

Meta's AI claimed the Trump assassination attempt didn't happen. The company is blaming 'hallucinations.'

2024-08-01
Business Insider India
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's chatbot) is explicitly involved and malfunctioned by generating false claims denying a real assassination attempt, which is a serious harm to communities and public discourse. This misinformation constitutes harm under the framework as it misinforms the public about a violent event causing injury and death. The AI's hallucinations directly led to this harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, since the harm (misinformation about a violent event) has already occurred and is linked to the AI's malfunction.

Meta's AI Says Trump Wasn't Shot

2024-07-31
Futurism
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's generative AI assistant) was directly involved in producing false and misleading outputs about a real-world violent event, which is a clear case of misinformation causing harm to communities by distorting public knowledge. The AI's malfunction (hallucinations) and failure to correctly distinguish doctored from real images further contributed to this harm. The event involves the AI system's use and malfunction leading to realized harm, meeting the criteria for an AI Incident under the framework.

Meta AI says Trump's attempted assassination didn't happen. Then, it blames hallucination

2024-07-31
MSPoweruser
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's Llama 3.1) generated false information about a significant political event, which is a direct instance of misinformation dissemination. This misinformation can harm communities by distorting public knowledge and potentially influencing political discourse. The event involves the AI's use and malfunction (hallucination) leading to this harm. Therefore, it meets the criteria for an AI Incident as the AI system's malfunction has directly led to harm through misinformation.

Meta's AI chatbot denies Trump assassination attempt, company blames 'hallucinations'

2024-07-31
NewsBytes
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's chatbot) is explicitly involved and has produced false information denying a real event, which is a form of misinformation causing harm to communities by distorting public knowledge. This meets the criteria for an AI Incident because the AI's outputs have directly led to harm through misinformation. The company's response to address the issue is noted but does not change the classification of the event as an incident.

Meta faces AI accuracy issues as tech industry tackles hallucinations, deepfakes

2024-07-31
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
Meta's AI chatbot, an AI system, produced false information about a significant event, which constitutes an AI Incident because the AI's use directly led to misinformation harm. The hallucination issue caused the chatbot to deny a real event, misleading users and potentially harming public understanding (harm to communities). The incorrect fact-checking label also represents misinformation harm. The article also includes complementary information about industry-wide responses to AI safety and accuracy issues, but the primary focus on Meta's chatbot inaccuracies and their impact qualifies this as an AI Incident.

Meta's AI claimed the Trump assassination attempt didn't happen. The company is blaming 'hallucinations.'

2024-07-31
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
Meta's AI chatbot, an AI system, produced false statements denying a significant violent event, which is a direct misinformation harm affecting public understanding and political discourse. The harm is realized, not just potential, as the misinformation caused public backlash and political tension. The AI system's malfunction (hallucinations) is the direct cause of the misinformation. Hence, this event meets the criteria for an AI Incident due to realized harm to communities and violation of informational rights.

Meta's AI apparently said the shooting at the Trump rally didn't happen. Here's what ChatGPT and Claude said.

2024-08-01
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The AI systems involved are generative AI chatbots (large language models) that were used to answer questions about a real-world violent event. Their inaccurate or misleading responses constitute misinformation, which is a form of harm to communities and public understanding. The misinformation was directly caused by the AI systems' limitations and programming decisions, leading to harm through false or misleading information about a significant event. Therefore, this qualifies as an AI Incident due to the realized harm of misinformation dissemination caused by AI system outputs.

Meta AI was asked about the Donald Trump shooting - it didn't end well

2024-07-31
The Star
Why's our monitor labelling this an incident or hazard?
An AI system (Meta's Llama 3.1 chatbot) was involved and malfunctioned by providing inaccurate information about a real-world event, which is a breaking news incident involving harm (an assassination attempt). The AI's incorrect denial of the event could contribute to misinformation and harm to public understanding, but the article focuses on the AI's malfunction and Meta's response to it. Since the harm (misinformation) is indirect and the AI's malfunction is central, this qualifies as an AI Incident. The company is actively addressing the issue, but the incident of misinformation has already occurred.

Meta AI Faces Scrutiny For Denying Trump Assassination Attempt

2024-07-31
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generated false information about a significant event. This misinformation is a recognized problem (hallucination) in generative AI. However, the article does not report any actual harm resulting from this misinformation—no injury, no disruption, no rights violations, or other harms have been described as occurring due to the AI's error. The main focus is on the error itself and Meta's efforts to address it, which fits the definition of Complementary Information. The event does not meet the criteria for an AI Incident or AI Hazard because no harm has occurred or is plausibly imminent from the AI's hallucination in this context.

Chatbots continue to fail as reliable sources, this time it's Meta's turn

2024-07-31
THE DECODER
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Meta AI and other large language models) and discusses their use and malfunction (hallucinations leading to false information). However, it does not report a specific AI Incident where harm has directly or indirectly occurred, nor does it describe a new plausible future harm event. Instead, it highlights systemic flaws and the risk of misinformation, which is a known issue across the industry. The focus is on explaining these issues and the challenges in addressing them, which fits the definition of Complementary Information as it provides supporting data and context about AI system impacts and responses without describing a new incident or hazard.

Meta apologizes for its AI chatbot falsely claiming there was no Trump shooting

2024-07-31
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's AI chatbot) that provided false information about a real and serious incident, which constitutes misinformation harming public knowledge and potentially societal trust. The AI's incorrect outputs are a malfunction leading to harm (misinformation about a violent event). Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's erroneous outputs.

Meta's AI apparently said the shooting at the Trump rally didn't happen. Here's what ChatGPT and Claude said.

2024-08-01
DNyuz
Why's our monitor labelling this an incident or hazard?
The article details how AI chatbots gave false or misleading answers about a real-world violent event, which is a direct consequence of their design, training data limitations, and programming decisions. This misinformation can harm communities by spreading false narratives or confusion about important events. The AI systems' malfunction (hallucinations and refusal to answer) directly led to this harm. Although no physical injury or legal violation is reported, misinformation causing harm to communities is recognized as an AI Incident under the OECD framework. Hence, the event is classified as an AI Incident.

Meta apologizes for its AI's errors regarding the attack on Trump

2024-08-01
ANSA.it
Why's our monitor labelling this an incident or hazard?
Although it refers to AI inaccuracies that led to misinformation (hallucinations), the piece centers on Meta’s response—apologies, updates, and ongoing improvements—rather than reporting a new incident or forecasting future harm. This constitutes Complementary Information about company remediation.

What the attempted assassination of Trump teaches us about artificial intelligence hallucinations

2024-08-01
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The Meta AI chatbot is explicitly mentioned as the AI system involved. Its use led to the dissemination of false information denying a real violent event, which is a clear harm to communities and informational rights. The harm is realized, not just potential, as users received and believed incorrect answers. Meta's admission and efforts to fix the problem confirm the incident's materialization. This fits the definition of an AI Incident because the AI system's malfunction (hallucinations) directly caused harm by spreading misinformation about a serious event.

Meta has admitted its AI's errors. It had denied the attack on Donald Trump

2024-07-31
DDay.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Meta's AI and chatbot) that malfunctioned by providing inaccurate information and mislabeling images related to a real-world violent event. This misinformation can harm communities by spreading false narratives and undermining trust in information sources, which fits the definition of harm to communities and violation of rights. The AI's role is pivotal as the misinformation stems directly from its outputs. Hence, this is an AI Incident rather than a hazard or complementary information.

Meta's AI has hallucinations too: the Trump case

2024-07-31
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
An AI system (Meta AI) is explicitly involved and has malfunctioned by generating false information denying a real event. This misinformation has been widely disseminated, causing harm to communities by spreading false narratives and potentially interfering with political processes. The AI's role is pivotal in this harm, fulfilling the criteria for an AI Incident. The company's response is noted but does not negate the occurrence of harm.

Meta's AI also has "hallucinations" about the attack on Trump

2024-07-31
Giornalettismo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's AI chatbot) producing inaccurate outputs about a real-world event, which is a malfunction in the AI's use. However, the article does not describe any realized harm such as injury, rights violations, or societal disruption caused by these hallucinations. The harm is potential, as misinformation could plausibly lead to harm in the future, but no such harm is reported or directly linked. Therefore, this situation constitutes an AI Hazard, reflecting a plausible risk of harm due to AI hallucinations, but not an AI Incident since no harm has materialized yet.

Meta AI's hallucinations: it denies the attack on Trump

2024-07-31
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's generative chatbot) is explicitly involved and malfunctioned by producing false statements denying a real violent event. This misinformation can harm communities by spreading false narratives, which is a recognized form of harm under the framework. The article states the harm is occurring (not just potential), and the company is responding to it. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

"Hallucinations are to blame for everything"

2024-07-31
B92
Why's our monitor labelling this an incident or hazard?
No actual harm occurred (e.g., injury, rights violation, infrastructure disruption). The post is an update on Meta’s efforts to mitigate AI hallucinations and reflects broader technical and governance challenges, fitting the definition of Complementary Information.

INSTAGRAM WILL LET YOU CREATE AI CHATBOTS: People can create AI versions of themselves and use them to communicate with others!

2024-07-31
kurir.rs
Why's our monitor labelling this an incident or hazard?
The event describes the launch of an AI system (AI Studio) enabling creation and use of AI chatbots for communication on Instagram. However, the article does not report any realized harm or incident caused by these AI chatbots. It mainly presents the new AI capability, its intended use, and Meta's precautions. There is no indication of direct or indirect harm occurring yet, nor a specific credible risk of imminent harm detailed. Therefore, this is a development in the AI ecosystem providing context and information about AI deployment and governance, fitting the definition of Complementary Information rather than an Incident or Hazard.

Companies must be able to explain and document how their AI systems make decisions

2024-07-28
vijesti.me
Why's our monitor labelling this an incident or hazard?
The content is a broad discussion on AI regulation, ethics, transparency, and international collaboration without reporting any concrete event of harm or plausible imminent harm caused by AI systems. It highlights the need for regulation and responsible AI development but does not describe an AI Incident or AI Hazard. Therefore, it fits the category of Complementary Information as it provides context and insights into AI governance and societal responses rather than reporting a new incident or hazard.

Artificial intelligence complicates the question of plagiarism - how should scientists respond?

2024-07-31
Nauka Telegraf
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI, large language models) and discusses their use and misuse in academic writing, including plagiarism and copyright concerns, which are recognized harms under the framework (violation of intellectual property rights and academic ethics). However, the article does not report a specific incident where harm has occurred due to AI use, nor does it describe a concrete event where AI use has plausibly led to harm. Instead, it outlines ongoing debates, legal proceedings, policy changes, and the broader societal and academic responses to these challenges. This aligns with the definition of Complementary Information, as it provides supporting context and updates on AI-related issues without reporting a new AI Incident or AI Hazard.

Meta acknowledged AI "hallucinations" after the attempt on Trump's life

2024-07-31
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The event involves an AI system malfunction—Meta AI’s hallucinations—directly producing false statements about a real violent incident and mislabeling genuine images. This misinformation constitutes an AI Incident because the AI’s erroneous outputs have already caused confusion and misinformation about the event.

Meta acknowledged artificial intelligence "hallucinations" after the attempt on Trump's life

2024-07-31
УКРІНФОРМ
Why's our monitor labelling this an incident or hazard?
The article describes a malfunction of an AI system (Meta AI) that directly produced false statements about a real-world event and misidentified legitimate imagery, constituting misinformation harm. This is a realized incident of AI-generated misinformation, fitting the definition of an AI Incident.

Meta acknowledged artificial intelligence "hallucinations" after the attempt on Trump's life

2024-07-31
InternetUA
Why's our monitor labelling this an incident or hazard?
An AI system (Meta AI chatbot) is explicitly involved and malfunctioned by providing false denials of a real-world violent event and mislabeling genuine images as fake. This misinformation and content misclassification constitute harm to communities by distorting public knowledge and potentially violating rights to accurate information. The harm has already occurred as users received incorrect information and authentic content was wrongly flagged. Therefore, this qualifies as an AI Incident due to the AI system's malfunction directly leading to harm.

Meta's AI denied the attempt on Trump's life: the company attributed it to hallucinations

2024-07-31
espreso.tv
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's chatbot) was involved in the use phase, providing responses to user queries. Its malfunction manifested as hallucinations—incorrect or misleading answers denying or misrepresenting the assassination attempt on Trump. This misinformation relates directly to a real violent event causing injury, thus constituting harm to communities and individuals' understanding of the event. The AI's role in spreading false information about a violent attack is a direct or indirect cause of harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Meta acknowledged AI "hallucinations" after the attempt on Trump's life

2024-07-31
www.BIN.com.ua Business Information Network
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Meta AI chatbot) that generated false information denying a real assassination attempt and mislabeling a genuine photo as fake. This misinformation can harm public understanding and trust, which is a form of harm to communities. The AI's malfunction (hallucinations) is the direct cause of this harm. Hence, the event meets the criteria for an AI Incident due to realized harm caused by the AI system's outputs.

"Hallucinations": Zuckerberg's AI denies that an assassination attempt was made on Trump

2024-08-01
techno.nv.ua
Why's our monitor labelling this an incident or hazard?
An AI system (Meta's chatbot) is explicitly involved and malfunctioned by generating false denials and refusing to answer questions about a real violent incident. This misinformation can harm communities by spreading confusion and undermining trust in information, which is a recognized form of harm under the framework. The mislabeling of a real image as misinformation further compounds this harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction and its impact on public understanding and discourse.

Meta AI: The assassination attempt on Donald Trump did not happen

2024-07-31
It.dir.bg
Why's our monitor labelling this an incident or hazard?
The AI system (Meta AI) is explicitly involved as it generated false and misleading content denying a real event. This is a malfunction or limitation of the AI system (hallucination). While the article does not document actual harm caused by this misinformation, the potential for harm to communities through misinformation is clear and recognized as an industry-wide challenge. Since the harm is not confirmed as realized but the risk is credible and ongoing, this event fits best as Complementary Information describing the AI system's limitations and the company's response efforts rather than a confirmed AI Incident or a mere hazard.

Meta removes AI chatbots that use the faces of celebrities

2024-08-02
Life.dir.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI chatbots based on generative AI language models, confirming AI system involvement. However, it does not report any realized harm or incident resulting from these chatbots, such as misinformation, rights violations, or other harms. The discontinuation is a corporate decision in response to user concerns and policy issues, which is a governance or strategic response rather than an incident or hazard. Therefore, this event is best classified as Complementary Information, as it provides context and updates on AI system deployment and company strategy without describing a new AI Incident or AI Hazard.

Meta AI: The assassination attempt on Donald Trump did not happen

2024-07-31
dnesplus.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Meta AI) that generated false and manipulative content denying a real-world violent event. This misinformation constitutes harm to communities by spreading false narratives and undermining public trust. The AI's malfunction (hallucination) is the direct cause of this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's outputs have directly led to harm through misinformation dissemination.

Meta chatbot commits an embarrassing faux pas

2024-07-31
computerbild.de
Why's our monitor labelling this an incident or hazard?
The AI system (Meta AI chatbot) is explicitly involved and malfunctioned by producing false statements denying a real violent attack, which is a direct misinformation harm to communities. The harm is realized as the chatbot's false claims have caused public controversy and criticism, indicating the misinformation has been disseminated. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm to communities through misinformation.

AI chatbot denied Trump assassination attempt - Meta blames hallucinations

2024-07-31
Spiegel Online
Why's our monitor labelling this an incident or hazard?
The AI chatbot's hallucination led to the false denial of a significant event, which is misinformation that harms public understanding and trust. The AI system's malfunction directly caused this harm, fulfilling the criteria for an AI Incident. Although Meta took corrective action, the harm had already occurred. The event involves an AI system, its malfunction, and resulting harm through misinformation, fitting the definition of an AI Incident.

Der Tag: Meta's chatbot denied the Trump assassination attempt

2024-07-31
N-tv
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the AI chatbot's false claims (hallucinations) about a real-world event, which is a malfunction in the AI's output. While misinformation can harm communities or public discourse, the article does not state that such harm has materialized. The event highlights a plausible risk of harm from AI-generated misinformation but does not document an incident where harm occurred. Hence, it fits the definition of an AI Hazard, as the AI system's malfunction could plausibly lead to harm (e.g., spreading false information), but no harm is confirmed.

Meta under pressure to explain: chatbot denied the Trump assassination attempt

2024-07-31
newsORF.at
Why's our monitor labelling this an incident or hazard?
The AI chatbot is explicitly mentioned and is an AI system. Its malfunction (hallucinations) caused it to deny a real violent attack, which is a serious misinformation harm affecting public understanding of a harmful event. This misinformation can harm communities by distorting facts about violence and public safety. The harm is realized as the chatbot actively produced false information. Hence, this qualifies as an AI Incident due to the AI system's malfunction leading to harm to communities through misinformation about a violent attack.
Thumbnail Image

"Fictional Event": Meta AI Knows Nothing of the Shots Fired at Trump

2024-08-02
heise online
Why's our monitor labelling this an incident or hazard?
The AI system (the Meta AI chatbot) is explicitly involved and produced false information about a real violent event, a direct malfunction (hallucination). The misinformation about the assassination attempt on a public figure, together with the erroneous fact-check label applied to a genuine photo, spread false narratives and confused the public, causing harm to communities. Because the harm is realized and the AI system's role is pivotal, the event meets the criteria for an AI Incident.
Thumbnail Image

Meta's AI Chatbot Denied the Trump Assassination Attempt

2024-07-31
der Standard
Why's our monitor labelling this an incident or hazard?
The AI system (the Meta AI chatbot) is explicitly involved, as it generated false claims denying a real-world event, which constitutes misinformation. Such misinformation harms communities by spreading false narratives, meeting the harm-to-communities criterion of the AI Incident definition. The event describes realized harm (misinformation dissemination) caused directly by the AI system's outputs, not merely a potential risk, and therefore qualifies as an AI Incident.
Thumbnail Image

Criticism of Facebook Parent Company: Meta Chatbot Denies Assassination Attempt on Donald Trump

2024-07-31
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The Meta AI chatbot is explicitly an AI system involved in generating responses. Its denial of a real, harmful event (the assassination attempt) constitutes misinformation, which harms communities by spreading false narratives. The AI's hallucinations and erroneous fact-check labeling demonstrate malfunction. The harm is realized as the misinformation has been publicly disseminated, causing reputational and informational harm. Thus, the event meets the criteria for an AI Incident, as the AI system's malfunction directly led to harm in the form of misinformation and public confusion.
Thumbnail Image

Chatbot Spreads Misinformation After Trump Assassination Attempt - Meta Responds

2024-07-31
Merkur.de
Why's our monitor labelling this an incident or hazard?
The Meta AI chatbot, an AI system, produced false information denying a violent attack that actually occurred, and misapplied fact-check labels to images, which misinforms the public. This misinformation can harm communities by distorting understanding of a serious event, fulfilling the harm criteria (d) for AI Incidents. The event involves the AI system's malfunction (hallucination) directly causing the harm. The article reports realized harm (misinformation spread and public criticism), not just potential harm, so it is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Meta: Chatbot Denies Assassination Attempt on Donald Trump

2024-07-31
Berliner Zeitung
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's chatbot) is explicitly involved and malfunctioned by denying a real violent attack, causing misinformation and confusion. The attack itself caused physical harm, and the AI's false claims about it contributed to harm to communities by distorting public understanding of the event; the automated mislabeling of the photo is a further example of AI malfunction producing misinformation. These factors meet the criteria for an AI Incident, as the AI system's malfunction directly led to harm through misinformation about a significant violent event.
Thumbnail Image

Donald Trump: Meta's AI Chatbot Denied Assassination Attempt

2024-07-31
manager magazin
Why's our monitor labelling this an incident or hazard?
The AI system (Meta AI chatbot) is explicitly mentioned and was used to answer questions about a real-world event. Its malfunction—providing false denials of the assassination attempt—constitutes misinformation that can harm public understanding and trust, thus harming communities. The harm is realized as the chatbot actively spread false claims. Therefore, this qualifies as an AI Incident due to the AI system's malfunction directly leading to harm through misinformation dissemination.
Thumbnail Image

Meta's Chatbot Denied the Trump Assassination Attempt

2024-07-31
wallstreet:online
Why's our monitor labelling this an incident or hazard?
An AI system (Meta's chatbot and its automated fact-check labeling) is explicitly involved. The chatbot's false denial of a real violent event and the incorrect fact-check label on a genuine photo directly led to misinformation, which harms communities by spreading false narratives and undermining factual discourse. Because the AI's malfunction (hallucinations and mislabeling) directly caused this harm, the event is classified as an AI Incident.
Thumbnail Image

AI Chatbot of the Facebook Parent Company: Meta's Chatbot Denied the Trump Assassination Attempt

2024-07-31
stuttgarter-nachrichten.de
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's chatbot) is explicitly involved and malfunctioned by generating false claims denying a real violent incident, a form of misinformation that harms communities and public discourse. The chatbot's behavior drew criticism and caused reputational damage to Meta, and the misinformation could indirectly erode societal trust and political discourse; the mislabeling of the photo by automated systems further demonstrates malfunction. Because the harms are realized and directly linked to the AI system's outputs, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Meta's Chatbot Denied the Trump Assassination Attempt

2024-07-31
finanzen.at
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's chatbot) is explicitly involved and produced false information denying a real violent event, a direct misinformation harm to communities; the misapplied fact-check labels compound that misinformation. The harm is realized, not merely potential, as the chatbot actively spread falsehoods and the labeling error misled users. This fits the definition of an AI Incident because the AI system's outputs directly led to harm to communities through misinformation and confusion about a significant event.
Thumbnail Image

Meta Stock: Meta Chatbot Denied the Assassination Attempt on Trump

2024-07-31
finanzen.at
Why's our monitor labelling this an incident or hazard?
The AI system (Meta AI chatbot) is explicitly involved and malfunctioned by generating false denials about a real violent event, which is a direct harm to communities through misinformation. The mistaken fact-check labeling also reflects AI malfunction affecting information integrity. These harms are realized, not just potential, meeting the criteria for an AI Incident. The event is not merely a product update or general news but details specific harms caused by the AI's outputs.
Thumbnail Image

Meta AI Denies Trump Assassination Attempt: Company Defends Itself

2024-07-31
futurezone.at
Why's our monitor labelling this an incident or hazard?
The AI system (Meta AI chatbot) is explicitly involved and malfunctioned by generating false statements denying a real event and mislabeling an original photo as manipulated. These malfunctions directly led to misinformation, which harms communities by spreading falsehoods and undermining trust in information. The involvement of AI in content moderation and generation is clear, and the harm is realized, not just potential. Hence, this fits the definition of an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Donald Trump News: Meta's Chatbot Denied the Trump Assassination Attempt

2024-07-31
News.de
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's chatbot) is explicitly involved and malfunctioned by generating false statements denying a real violent incident, misinformation that harms communities by distorting the facts of a politically sensitive event. This meets the criteria for an AI Incident because the AI's malfunction directly led to misinformation and potential social harm; the erroneous fact-check labeling further evidences the malfunction.
Thumbnail Image

Meta's Chatbot Denied the Trump Assassination Attempt

2024-07-31
Nachrichten der Ortenau - Offenburger Tageblatt
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's chatbot) is explicitly involved and malfunctioned by generating false statements denying a real violent attack, which caused harm by spreading misinformation on a politically sensitive event involving injury and death. The mislabeling of the photo by automated AI systems further demonstrates malfunction leading to misinformation. These harms fall under harm to communities and violation of informational integrity, meeting the criteria for an AI Incident. The event is not merely a potential risk or complementary information but a realized harm caused by AI malfunction.
Thumbnail Image

Why Meta AI Concealed the Assassination Attempt on Donald Trump

2024-07-31
Trending Topics
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta AI, running on Llama 3.1) whose use directly led to misinformation and the withholding of information about a significant political event. The AI's malfunction (hallucinations) and content moderation decisions impaired the dissemination of accurate information, which constitutes harm to communities through misinformation and erosion of trust; the mislabeling of genuine photos also relates to misinformation harm. Because the AI system's use directly contributed to these harms, this qualifies as an AI Incident rather than a mere hazard or complementary information.
Thumbnail Image

Chatbot Spreads Misinformation After Trump Assassination Attempt - Meta Responds

2024-07-31
sauerlandkurier.de
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Meta's AI chatbot) whose malfunction (hallucinations) directly led to the dissemination of false information about a significant real-world event, constituting harm to communities through misinformation. The false denial of a violent attack and the mislabeling of images can undermine public trust and distort understanding of important events. Because the misinformation was actively spread and publicly criticized, the harm is realized rather than merely potential, and the event is classified as an AI Incident.