Google Gemini AI Accused of Ideological Bias and Misinformation in Content Moderation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google's Gemini AI has been criticized for producing biased and misleading responses, including refusing to denounce communism or label it as 'evil,' downplaying harmful ideologies, and erasing certain racial groups from generated images. These outputs have raised concerns about misinformation, ideological bias, and potential harm to communities through unfair or inaccurate information.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system's use has directly led to harms: violations of rights, through discriminatory and biased outputs affecting communities and individuals, and harm to communities, through misinformation and biased content. The AI's biased responses and its refusal to denounce harmful ideologies or behaviors contribute to social harm and misinformation. This therefore qualifies as an AI Incident, since the system's outputs have directly caused harm to communities and violated rights.[AI generated]
AI principles
Fairness; Respect of human rights; Safety; Robustness & digital security; Transparency & explainability; Accountability; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing; General or personal use

Affected stakeholders
General public; Other

Harm types
Human or fundamental rights; Public interest; Psychological; Reputational

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Content generation; Interaction support/chatbots


Articles about this incident or hazard

Google AI Says Calling Communism "Evil" is "Harmful and Misleading"

2024-02-27
SGT Report
Why's our monitor labelling this an incident or hazard?
The AI system's use has directly led to harms: violations of rights, through discriminatory and biased outputs affecting communities and individuals, and harm to communities, through misinformation and biased content. The AI's biased responses and its refusal to denounce harmful ideologies or behaviors contribute to social harm and misinformation. This therefore qualifies as an AI Incident, since the system's outputs have directly caused harm to communities and violated rights.
Google AI Says Calling Communism "Evil" Is "Harmful And Misleading"

2024-02-25
ZeroHedge
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Google's Gemini AI, a large language model, producing biased and misleading responses that downplay the evils of communism, refuse to denounce pedophilia, and show racial bias. These outputs constitute misinformation and biased content that can harm communities and violate rights to accurate information. The AI system's use and outputs have directly led to these harms, qualifying this as an AI Incident under the framework, specifically as harm to communities and a violation of rights.
Google AI says calling communism "evil" is "harmful and misleading"

2024-02-27
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini large language model) whose outputs contain biased and misleading content, which can harm communities by spreading misinformation and ideological bias. The AI's refusal to label communism as "evil" and its differential treatment of racial pride statements indicate a violation of rights related to fair and accurate information. The article suggests these harms are occurring, especially as the AI is used in educational settings, thus constituting an AI Incident due to realized harm linked to the AI system's use.
Google AI Claims Calling Communism "Evil" Is "Harmful And Misleading" (Video)

2024-02-28
Conservative Firing Line
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini AI) whose outputs demonstrate clear bias and controversial content moderation decisions. While these outputs are harmful in a societal and informational sense, the article does not document any actual incidents of harm (e.g., injury, rights violations, or operational disruption) directly caused by the AI system. The concerns raised are about potential misleading influence and bias, which could plausibly lead to harm, but no concrete harm is reported. Therefore, this situation fits best as an AI Hazard, reflecting plausible future harm from biased AI outputs, rather than an AI Incident or Complementary Information.
Google AI Declares 'Communism Is Our Future'

2024-02-26
The People's Voice
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini AI) whose use has led to outputs that propagate ideological bias, misinformation, and discriminatory content. These outputs can harm communities by influencing beliefs and potentially violating rights to unbiased information and fair treatment. The AI's refusal to denounce harmful ideologies and its biased responses constitute indirect harm through misinformation and social bias. Therefore, this qualifies as an AI Incident due to realized harm to communities and potential violation of rights through biased AI outputs.