RFK Jr. Campaign's AI Chatbot Spreads Misinformation, Circumvents OpenAI Ban

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Robert F. Kennedy Jr.'s presidential campaign deployed an AI chatbot using OpenAI models via Microsoft's Azure service, bypassing OpenAI's political use ban. The chatbot disseminated vaccine misinformation and conspiracy theories, causing harm through disinformation before being taken offline following media scrutiny.[AI generated]

Why's our monitor labelling this an incident or hazard?

The chatbot is an AI system that was used by the campaign to provide information to supporters. Its outputs included affirmations of conspiracy theories and vaccine misinformation, as well as inaccurate voting information, which constitutes harm to communities through disinformation. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by the AI system's deployment.[AI generated]
AI principles
Accountability
Safety
Human wellbeing
Transparency & explainability
Democracy & human autonomy
Robustness & digital security

Industries
Media, social platforms, and marketing
Healthcare, drugs, and biotechnology
IT infrastructure and hosting
Government, security, and defence

Affected stakeholders
General public

Harm types
Public interest
Reputational

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Interaction support/chatbots
Content generation


Articles about this incident or hazard

Robert F. Kennedy Jr.'s Microsoft-Powered Chatbot Just Disappeared

2024-03-03
Wired
Robert F. Kennedy Jr.'s AI Chatbot Is Borked

2024-03-04
Gizmodo
Why's our monitor labelling this an incident or hazard?
The AI chatbot, built on OpenAI technology, was used in a political campaign to spread misinformation about vaccine safety, which can harm public health and communities. The chatbot's outputs affirmed false claims, directly contributing to misinformation harm. The event describes realized harm from the AI system's use, not merely potential harm. The chatbot's removal after media confrontation is a response to the incident but does not negate it. This is therefore an AI Incident due to the AI system's direct role in disseminating misinformation.
Another US presidential candidate's chatbot has been shut down

2024-03-04
Neowin
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-powered chatbots by a political campaign, which qualifies as AI system involvement. However, it reports no harm caused by the chatbot's use, such as misinformation, manipulation, or violation of rights; the shutdown was a response to policy concerns rather than a harm event. The article also references broader governance efforts (e.g., companies signing agreements to combat deepfake AI in elections), which aligns with complementary information about societal and governance responses to AI. Because the main focus is policy enforcement and the shutdown, without reported harm, this fits best as Complementary Information rather than an Incident or Hazard.
A descendant of Kennedy illegally used an OpenAI AI - Softonic

2024-03-04
Softonic
Why's our monitor labelling this an incident or hazard?
An AI system (a chatbot using OpenAI's ChatGPT and other LLMs) was used by a political campaign in apparent violation of OpenAI's usage policies. However, there is no indication that this use directly or indirectly caused harm such as misinformation, manipulation, or rights violations; the event mainly concerns policy non-compliance and the chatbot's subsequent removal. It therefore does not meet the criteria for an AI Incident or AI Hazard, and instead provides complementary information about governance and policy enforcement related to AI use in political campaigns.
The Microsoft-Powered Chatbot Developed By Robert F. Kennedy Jr. Is Back Online - AI Next

2024-03-05
Latest News on AI, Healthcare & Energy updates in India
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system explicitly described as using large language models (GPT-3.5, GPT-4, Llama, Mistral) to generate responses. Its use directly led to the dissemination of misinformation and conspiracy theories, a form of harm to communities and a violation of the right of access to truthful information. The chatbot's outputs were confirmed to propagate falsehoods about vaccines and conspiracy theories about the CIA and JFK's death. This meets the criteria for an AI Incident because the AI system's use directly caused harm through the spread of misinformation, as clearly documented in the article.