Media Report Exposes Anti-American Bias in Google’s Gemini AI


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In a report by MRC Free Speech America, Google's Gemini AI chatbot delivered 10 notably anti-American, left-wing biased responses when queried on US founding documents, Judeo-Christian principles, and the Fourth of July. These outputs demonstrate realized harm through political misinformation and raise questions about AI objectivity and societal impact.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Google's Gemini chatbot) whose use has produced outputs criticized as biased and harmful for promoting misinformation and politically charged narratives. This can be seen as a violation of rights to fair and unbiased information, potentially harming communities by spreading divisive or misleading content. Since the chatbot's responses have already been generated and publicly criticized, the harm is realized rather than merely potential. This therefore qualifies as an AI Incident: the AI system's use led to harm related to biased information and societal impact.[AI generated]
AI principles
Fairness; Transparency & explainability; Accountability; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing; Government, security, and defence; General or personal use

Affected stakeholders
General public

Harm types
Public interest; Reputational

Severity
AI incident

Business function
Other

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard


Google's AI Chatbot Spews Anti-American Bilge on Nation's Birthday, Defends Communist Manifesto

2024-07-04
Redstate

Exposing Gemini: 10 Responses Showing Google AI's Anti-American Bias

2024-07-03
https://newsbusters.org/
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini chatbot) whose use has directly led to harm in the form of biased, misleading, and politically charged information dissemination. This harms communities by spreading misinformation and potentially violating rights to unbiased information. The article documents specific harmful outputs from the AI system, demonstrating realized rather than potential harm. It therefore meets the criteria for an AI Incident rather than a hazard or complementary information.

Happy Fourth from Big Tech: Google's Gemini AI Chatbot Spews Anti-American Garbage on America's Birthday

2024-07-05
SGT Report
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini chatbot) is clearly involved, generating biased content. However, the article does not report any realized harm such as injury, rights violations, or operational disruption, nor does it present a credible risk of future harm beyond ideological bias. The focus is on the AI's biased outputs and the research critique thereof, which fits the definition of Complementary Information—providing supporting data and context about AI behavior and societal impact without a specific incident or hazard occurring.

Exposing Gemini: 10 Responses Showing Google AI's Anti-American Bias - Conservative Angle

2024-07-03
Brigitte Gabriel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini chatbot) whose use has directly led to outputs that propagate biased and misleading information about American history and values. These outputs can harm communities by spreading misinformation and potentially infringing on rights to fair and unbiased information access. The article provides concrete examples of the AI's biased responses, indicating realized harm rather than mere potential risk. This therefore qualifies as an AI Incident, given the direct link between the AI system's use and the harm caused by biased and misleading content.