Google Gemini AI Generates Biased and Offensive Outputs, Prompting Outrage and Feature Suspension

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google's Gemini AI system produced biased and offensive text and images, including racially inaccurate historical depictions, a response equating Elon Musk with Hitler, and a refusal to condemn pedophilia. These outputs caused public outrage, leading Google to suspend Gemini's image generation feature and commit to addressing the issues.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Gemini image generator) is explicitly involved and malfunctioning by producing historically inaccurate images that misrepresent historical facts. This misrepresentation can cause harm to communities by spreading misinformation and distorting historical understanding, which fits within harm to communities or violation of rights. The harm is realized as the AI has generated and disseminated these images. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.[AI generated]
AI principles
Fairness, Safety, Robustness & digital security, Respect of human rights, Accountability, Transparency & explainability, Human wellbeing, Democracy & human autonomy

Industries
Media, social platforms, and marketing; Consumer services; IT infrastructure and hosting

Affected stakeholders
Consumers, General public

Harm types
Reputational, Psychological, Human or fundamental rights, Public interest

Severity
AI incident

Business function:
Citizen/customer service, Research and development

AI system task:
Content generation, Interaction support/chatbots

In other databases

Articles about this incident or hazard

Google pulls Gemini AI after it generated Black Nazis and other historical inaccuracies

2024-02-28
as
Why's our monitor labelling this an incident or hazard?
The AI system (Gemini image generator) is explicitly involved and malfunctioning by producing historically inaccurate images that misrepresent historical facts. This misrepresentation can cause harm to communities by spreading misinformation and distorting historical understanding, which fits within harm to communities or violation of rights. The harm is realized as the AI has generated and disseminated these images. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.
Google admits the biased diversity errors displayed by its AI are "unacceptable"

2024-02-28
infobae
Why's our monitor labelling this an incident or hazard?
The AI system (Gemini chatbot) is explicitly mentioned and involved in generating biased and inaccurate content. The harm is realized as the AI's outputs have offended users and spread misinformation, which can be considered harm to communities and a violation of rights to accurate information. Google's internal acknowledgment and remedial actions confirm the incident's materialization. The event is not merely a potential risk or complementary information but a clear case of AI misuse or malfunction causing harm, thus classifying it as an AI Incident.
Sundar Pichai is very... very embarrassed by Gemini - Digital Trends Español

2024-02-28
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
The AI system (Gemini) is explicitly mentioned as generating biased and offensive images, which constitutes harm to communities and a violation of rights (bias and offensive content). The harm has already occurred, as evidenced by the suspension of the tool and public apology. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm through biased and offensive outputs. The CEO's memo and planned corrective actions are responses to this incident, but the primary event is the harmful outputs themselves.
Google's CEO acknowledges his AI's "completely unacceptable" problems

2024-02-28
EL MUNDO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's Gemini) generating biased and offensive content, which has led to user offense and public criticism. The CEO's acknowledgment confirms the AI system's outputs have caused harm by perpetuating prejudices and biases, which falls under harm to communities and possibly violations of rights. The harm is realized, not just potential, as offensive outputs have been publicly shared and caused reputational damage and user harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
Google admits the biased diversity errors displayed by its AI are

2024-02-29
Listin diario
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's Gemini) that generated biased and historically inaccurate content, which led to public offense and reputational damage. The AI's outputs directly caused harm by spreading misinformation and biased representations, which can be considered harm to communities and a violation of informational rights. The company's response to disable the image generation feature and issue apologies confirms the recognition of harm. Hence, this event meets the criteria for an AI Incident as the AI system's use directly led to harm.
Google CEO Sundar Pichai says his malfunctioning Gemini AI is "unacceptable"

2024-02-28
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The AI system Gemini is explicitly mentioned and is responsible for generating inaccurate and biased content that has offended users and spread misinformation. This constitutes harm to communities and a violation of rights, fulfilling the criteria for an AI Incident. The company's response and acknowledgment of the problem do not negate the fact that harm has already occurred. Hence, the event is classified as an AI Incident.
Google reportedly paid news outlets to unleash an avalanche of AI failures

2024-02-27
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's generative AI platform) used in newsrooms to produce content. While it raises significant concerns about the potential harms of AI-generated news content—such as flooding the internet with low-quality AI content, undermining journalism, and lack of transparency—it does not document a specific incident where harm has occurred or a direct causal link to realized harm. Nor does it describe a near-miss or credible imminent risk that would qualify as an AI Hazard. Instead, it reports on Google's strategy and the media industry's response, which fits the definition of Complementary Information, providing context and societal/governance-related insights into AI's impact on journalism.
Google loses $90 billion over Gemini's errors

2024-02-28
El Universal
Why's our monitor labelling this an incident or hazard?
The AI system (Gemini) is explicitly mentioned and involved in generating problematic image outputs. The harm is primarily reputational and financial (loss of market value), which does not fall under the defined categories of AI Incident harms such as injury, rights violations, or harm to communities. The company is responding to the issue, indicating ongoing mitigation efforts. Therefore, this event is best classified as Complementary Information, as it provides context and updates on the AI system's problematic outputs and the company's response, rather than describing a direct AI Incident or a plausible future hazard.
Google says Gemini's image generator for people will be back online

2024-02-28
El Tiempo
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Gemini) used for generative image creation. The system's outputs previously led to harm in the form of inaccurate and offensive representations, which can be considered harm to communities and a violation of rights related to respectful and accurate representation. Although the harm is indirect and related to the AI's outputs, it has already occurred, prompting Google to pause the feature and work on improvements. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use. The article focuses on the incident and the response, not just a general update or future risk, so it is not merely Complementary Information.
The Gemini debacle over its inclusive images has cost Google dearly: it has lost $90 billion

2024-02-28
Xataka
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Gemini 1.5 image generator and chatbot) malfunctioning by generating inappropriate images (e.g., black people depicted as Nazi soldiers) and nonsensical chatbot responses. These malfunctions caused a direct negative impact on Google's stock market valuation, a significant financial harm. The AI systems' failures led to loss of trust and financial damage, which qualifies as harm to property (company value) and economic interests. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI systems' malfunction and use.
Google will use all of Reddit's content to train its artificial intelligence

2024-02-26
20 minutos
Why's our monitor labelling this an incident or hazard?
The article describes Google's use of Reddit data to train AI systems, which involves AI system development and use. However, it does not describe any actual harm or incident caused by this use, nor does it explicitly warn of plausible future harm. The privacy concerns are general and do not indicate a breach or violation that has occurred. The event is informational about AI training data sourcing and system improvement, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.
Biased diversity shown by Google's AI is "unacceptable"

2024-02-29
El Nacional
Why's our monitor labelling this an incident or hazard?
The AI system Gemini is explicitly mentioned and is responsible for generating biased and historically inaccurate content, which constitutes misinformation and harm to communities by distorting historical facts. This meets the criteria for an AI Incident because the AI's use has directly led to harm (misinformation and bias) and has caused offense and reputational damage. The company's response and mitigation efforts are noted but do not negate the occurrence of harm. Therefore, this event is classified as an AI Incident.
Google admits biased diversity errors displayed by its AI are "unacceptable"

2024-02-28
Gestión
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gemini chatbot) that has produced biased and inaccurate outputs, which have been publicly disseminated and caused harm by spreading misinformation and offending users. This constitutes a violation of the right to accurate information and harms communities by distorting historical facts. The harm is realized and directly linked to the AI system's outputs. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to communities and a breach of informational rights.
Google says the biased diversity errors displayed by its AI are "unacceptable" | Noticias de México | El Imparcial

2024-02-29
EL IMPARCIAL | Noticias de México y el mundo
Why's our monitor labelling this an incident or hazard?
The AI system (Gemini chatbot) is explicitly mentioned and is responsible for generating biased and inaccurate content, which has led to harm in terms of misrepresentation and potential violation of rights related to fairness and non-discrimination. The harm is realized as the biased outputs have caused user discomfort and public backlash, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a clear case of harm caused by the AI's outputs.
Elon Musk is compared to Hitler by Google's Gemini AI, and the magnate reacts

2024-02-27
FayerWayer
Why's our monitor labelling this an incident or hazard?
Google Gemini is an AI system (a chatbot) whose responses have directly caused public outrage and ethical concerns. The AI's refusal to clearly condemn pedophilia and its controversial comparison of Elon Musk to Hitler represent outputs that can harm communities by spreading offensive or dangerous ideas. This constitutes harm to communities and a violation of ethical standards, fitting the definition of an AI Incident. The involvement is through the AI's use and its malfunction or failure to provide appropriate responses. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Google's CEO to his employees: "Gemini's errors are unacceptable"

2024-02-28
Merca2.0 Magazine
Why's our monitor labelling this an incident or hazard?
The AI system (Gemini) is explicitly mentioned and is responsible for generating inaccurate and racially diverse images that misrepresent historical figures and cofounders of Google. These outputs constitute harm to communities by spreading misinformation and potentially violating ethical standards. The harm is realized as the company paused the feature and issued apologies, indicating the problem's materialization. The CEO's memo and company actions confirm the AI system's malfunction and its direct role in causing harm. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Google apologizes and explains how it produced a racist AI

2024-02-26
Merca2.0 Magazine
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Gemini) whose outputs caused harm by generating racially biased content, which is a violation of rights and harms communities. The harm has already occurred as users experienced and reported the biased behavior. The company's response and apology confirm the recognition of the AI system's malfunction and its impact. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to harm.
Google's AI cannot generate images of humans

2024-02-28
La opinion de Murcia
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Google's Gemini) whose use has led to biased and inaccurate image generation, which can be considered a form of harm to communities through misinformation and reinforcement of prejudices. However, the article does not report any concrete incident of harm occurring, such as injury, legal violations, or significant societal disruption. Instead, it focuses on the AI's limitations, user complaints, and Google's mitigation efforts. Therefore, it does not meet the threshold for an AI Incident. The potential for future harm is noted but remains speculative without a specific event or credible risk scenario detailed. The article primarily provides complementary information about the AI system's current challenges and responses.
Google has another "woke" AI problem with Gemini... and it will be hard to fix

2024-02-28
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gemini chatbot) whose outputs have been criticized for ideological bias, which is a recognized issue in AI ethics and fairness. However, the article does not report any direct or indirect harm to individuals, communities, or infrastructure resulting from these biases. Instead, it describes Google's acknowledgment of the problem and attempts to fix it, including pausing image generation features and adjusting model behavior. This fits the definition of Complementary Information, as it provides context and updates on an AI system's challenges and responses without reporting a new incident or hazard causing or plausibly leading to harm.
Google's CEO calls the Gemini AI error "unacceptable" before employees

2024-02-29
Sur Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Gemini AI) that generated biased and offensive images, causing harm to users. The harm is realized as users were offended and the function was disabled to prevent further harm. The CEO's acknowledgment and corrective actions confirm the incident's materialization. The AI system's malfunction (biased outputs) directly led to harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
February 28, 2024

2024-02-28
esdelatino.com
Why's our monitor labelling this an incident or hazard?
Gemini is an AI system generating text and images. The reported issues include biased and offensive outputs, inaccuracies, and racial stereotyping, which have caused harm to users and communities by spreading harmful stereotypes and misinformation. The CEO's acknowledgment confirms the AI's role in causing these harms. Therefore, this event meets the criteria for an AI Incident due to realized harm linked to the AI system's outputs.
Read more

2024-02-29
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The Gemini AI system is explicitly involved as the chatbot and image generator causing racially insensitive outputs and failing to perform expected functions like song recognition. The cultural insensitivity can be considered a form of harm to communities or violation of rights, but the article does not report widespread or severe harm, injury, or legal violations. The malfunction and poor performance cause user frustration and potential reputational harm to Google but do not meet the threshold for an AI Incident as defined. The article primarily reports on current issues and user experience problems without describing a resolved or ongoing harm incident or a credible future hazard. Therefore, it is best classified as Complementary Information, providing context and updates on the AI system's performance and issues.
Google commits to fixing Gemini's problems | Benzinga España

2024-02-28
Benzinga España
Why's our monitor labelling this an incident or hazard?
The article reports on Google's internal recognition of issues with the AI system Gemini generating biased and inaccurate images, which led to disabling a feature and committing to improvements. While harm related to biased or inaccurate AI outputs is implied, the article focuses on the company's response and remediation efforts rather than detailing specific realized harms or incidents causing injury, rights violations, or community harm. Therefore, this is best classified as Complementary Information, as it provides an update on mitigation and governance responses to a previously identified AI problem rather than describing a new AI Incident or AI Hazard.
After "unacceptable" responses... Google moves to fix its AI platform

2024-03-01
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The AI system 'Gemini' generated biased and inaccurate responses and images, which harmed users by providing unacceptable content. This constitutes harm to users (a form of harm to persons) and thus qualifies as an AI Incident. The company's response to fix these issues is ongoing, but the harm has already occurred due to the biased and inaccurate outputs. Therefore, this event is best classified as an AI Incident rather than a hazard or complementary information.
Google works to fix Gemini AI after the company's chief calls some responses "unacceptable" - اليوم السابع

2024-02-28
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (Gemini AI) that has produced harmful outputs such as misinformation (inaccurate historical images) and biased responses, which can be considered harm to users and communities. Since harm has already occurred and the company is responding to it, this qualifies as an AI Incident rather than a hazard or complementary information.
Google works to fix the Gemini AI tool

2024-02-28
بوابة اخبار اليوم
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that Gemini AI produced biased and unacceptable responses, including inaccurate images and content that harmed users. Google's response to fix these issues confirms that harm has occurred. The involvement of an AI system (Gemini AI) and the direct link to harm (biased and offensive outputs) meet the criteria for an AI Incident. The other details about Reddit's data deal are complementary context but do not change the classification.
"Unacceptable responses and biased answers"... Google plans to relaunch Gemini AI after user feedback | المصري اليوم

2024-02-29
AL Masry Al Youm
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Gemini AI) that generated biased and offensive outputs, which caused harm to users in the form of offensive and inaccurate content. This harm to users' experience and potential reputational damage qualifies as harm to communities or individuals. Since the AI system's outputs have directly led to these harms, this event qualifies as an AI Incident. The company's response and planned relaunch are complementary but the main event is the harm caused by the AI system's outputs.
Google fixes unacceptable answers on its AI platform | صحيفة المواطن الالكترونية

2024-02-28
صحيفة المواطن الإلكترونية
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Gemini') whose outputs caused harm in the form of biased and unacceptable content, which can be considered harm to users and communities. The company acknowledged the problem and is actively working to address it. Since the harm has already occurred through the biased and inaccurate outputs, this qualifies as an AI Incident rather than a hazard or complementary information.
Over biased responses... Google works to fix its AI platform

2024-02-29
Aabbir
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Gemini) generating biased and inaccurate outputs that have harmed users, which fits the definition of an AI Incident due to violations of acceptable use and potential harm to users (harm to people or communities). The company's response to fix the issues is ongoing, but the harm has already occurred. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Google is mired in controversy over the responses of its AI chatbot Gemini

2024-02-28
Bloomberg
Why's our monitor labelling this an incident or hazard?
Gemini is an AI system (a chatbot based on advanced AI technology) whose use has directly led to harms such as dissemination of inaccurate and biased information, which can harm communities and violate rights to accurate information. The article details realized harms from the AI's outputs, including public controversy and reputational damage, as well as the company's apology and mitigation efforts. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm through misinformation and bias.
Google is mired in controversy over the responses of its AI chatbot Gemini

2024-02-28
Bloomberg
Why's our monitor labelling this an incident or hazard?
Gemini is an AI system (a chatbot based on advanced AI technology) whose use has directly led to harms including dissemination of inaccurate information, biased content, and reputational damage to Google. The harms include misinformation (harm to communities) and potential violation of user trust, which aligns with harm categories (d) and (c). The article details realized harms from the AI system's outputs, not just potential risks or general commentary. Hence, this is an AI Incident rather than a hazard or complementary information.
Google's chief on Gemini: The responses are unacceptable

2024-02-29
Vesti.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that Gemini, an AI system, generates unacceptable and biased responses that have offended users and displayed bias related to race and historical events. This constitutes harm to communities and potentially a violation of rights (discrimination and bias). The harm is realized as users have been offended and the company has taken action by pausing features and working on improvements. Hence, the event meets the criteria for an AI Incident, as the AI system's use has directly led to harm.
Google's new AI doesn't know whether Hitler is worse than Musk

2024-02-28
epicenter.bg
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini chatbot) is explicitly involved and its use led to harm in the form of spreading inappropriate, offensive, and socially harmful content. This constitutes harm to communities and potentially violates norms related to human rights and dignity. The incident is a direct consequence of the AI system's outputs and its failure to appropriately handle sensitive topics, thus meeting the criteria for an AI Incident. The fact that Google later corrected the problem does not negate the occurrence of harm when the outputs were publicly available and caused concern.
Dubious responses from the Gemini chatbot angered Google users

2024-02-28
Bgonair
Why's our monitor labelling this an incident or hazard?
Gemini is an AI system (a chatbot based on advanced AI technology) whose use has led to controversial and inaccurate outputs, causing user dissatisfaction and public debate. However, the article does not report any direct or indirect harm such as physical injury, legal rights violations, or significant community harm resulting from these outputs. The issues are recognized as challenges inherent to AI language models, with ongoing efforts to mitigate them. This fits the definition of Complementary Information, as it provides context and updates on AI system behavior and responses without describing a specific AI Incident or AI Hazard.
Google's artificial intelligence doesn't know whether Elon Musk is worse than Adolf Hitler - Informiran.net

2024-02-28
Informiran.net
Why's our monitor labelling this an incident or hazard?
The article details specific failures of the AI system Gemini in generating content and answering questions, which are problematic and have social implications. However, the harms described are reputational and ethical concerns rather than direct or indirect harms such as injury, rights violations, or disruption. The company's response to restrict image generation indicates mitigation efforts. Since the event focuses on the AI's problematic outputs and the response rather than a realized harm incident or a plausible future harm, it fits the definition of Complementary Information.