Google Gemini AI Faces Backlash Over Political and Racial Bias

The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

Google's Gemini AI chatbot generated biased and historically inaccurate images, sparking public outrage over political and racial bias. The outputs offended users and prompted the company to acknowledge the problem, with CEO Sundar Pichai apologizing for the harm caused by the AI's outputs and promising corrective action.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions Google's AI chatbot Gemini producing biased and offensive outputs, which have led to public offense and company action. The AI system's outputs have directly caused harm by spreading misinformation and biased content, impacting communities and potentially violating rights. The company's acknowledgment and apology confirm the AI system's role in causing harm. Although the article also discusses broader political bias concerns at Google, the central AI-related harm is the biased behavior of the Gemini chatbot, making this an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Fairness; Human wellbeing; Respect of human rights; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Consumer services

Affected stakeholders
Consumers

Harm types
Psychological; Reputational; Human or fundamental rights; Public interest

Severity
AI incident

Business function
Other

AI system task
Content generation; Interaction support/chatbots

In other databases

Articles about this incident or hazard

Google Gemini: AI fiasco reignites concerns of political bias at tech company dating back to Trump's victory

2024-03-01
Fox Business
Google CEO Pledged to Use AI to Combat Trumpism

2024-03-01
Townhall
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (machine learning, AI for drone target tracking) being used to manipulate information and influence elections, which constitutes a violation of rights and harm to communities. The AI's role in algorithmic ranking and censorship is central to the described harms. The involvement of AI in government contracts for military and intelligence purposes further supports the classification. Since these harms are occurring or have occurred, this qualifies as an AI Incident rather than a hazard or complementary information.
Google Gemini: AI fiasco reignites concerns of political bias at tech company dating back to Trump's victory - Conservative Angle

2024-03-01
Brigitte Gabriel
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini chatbot) is explicitly mentioned and involved in generating biased and historically inaccurate images, which is a direct output of the AI system. The harm is realized as the biased outputs offend users and reflect political and racial bias, which can be considered harm to communities and a violation of rights. Google's acknowledgment and apology confirm the issue's materialization. Although the article includes broader political bias concerns at Google, the AI-related harm centers on Gemini's biased image generation. Hence, this is an AI Incident rather than a hazard or complementary information.
Google's AI Has Sparked a Culture War

2024-03-01
Wired
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's Gemini) and discusses its use and outputs. The outputs caused public controversy due to perceived bias, which is a form of harm related to fairness and representation. However, the article does not describe any direct or indirect harm that meets the criteria for an AI Incident, such as violations of human rights or significant harm to communities. Nor does it describe a plausible future harm scenario that would qualify as an AI Hazard. Instead, it details the company's acknowledgment of the problem, public reactions, and ongoing efforts to address bias, which fits the definition of Complementary Information as it relates to societal and governance responses to AI issues.
Artificial Intelligence's 'War of Values' Has Only Just Begun

2024-03-03
Millionaire
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Gemini) whose use led to biased and potentially discriminatory outputs, causing public outrage and reputational damage. The AI system's development and use directly resulted in harm related to bias and discrimination, which falls under violations of rights and harm to communities. The company acknowledged the issue and took remedial action, but the harm had already occurred. Therefore, this is an AI Incident rather than a hazard or complementary information.
Inside the Failure of Gemini, Google's Chatbot That Draws Black Vikings, Female Popes, and Multiethnic Nazis

2024-03-05
Forbes Italia
Why's our monitor labelling this an incident or hazard?
Gemini is an AI system explicitly described as generating images and text. Its outputs have directly caused harm by spreading biased, misleading, or offensive content, which affects communities and public discourse. The harms are realized, not merely potential, as evidenced by public criticism, internal company responses, and suspension of features. The incident involves the AI system's use and malfunction, fulfilling the criteria for an AI Incident under the OECD framework.
AI: Gemini's Overly Politically Correct Blunders - thedotcultura

2024-03-05
thedotcultura
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (Gemini) generating biased and politically incorrect content, which is a direct result of its use and training data. The harms are related to social and reputational damage due to biased outputs, which can be considered violations of rights or harm to communities. However, the article does not report a specific incident where these outputs caused direct or significant harm, nor does it describe a near-miss or credible future risk scenario. Instead, it focuses on the company's acknowledgment, apology, and ongoing efforts to fix the issues. This aligns with the definition of Complementary Information, as it updates on the AI system's challenges and mitigation measures rather than reporting a new AI Incident or AI Hazard.