Google Suspends Gemma AI After Defamation Incident Involving U.S. Senator


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google's AI model Gemma generated false and defamatory allegations of sexual misconduct against U.S. Senator Marsha Blackburn, prompting her to demand accountability from Google. In response, Google removed Gemma from its AI Studio platform, highlighting concerns over AI-generated misinformation and reputational harm in the United States.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (Google's large language model Gemma) that generated false and defamatory content, including fabricated criminal allegations against a public figure. This output has caused reputational harm and defamation, which is a violation of rights and harm to the individual. The harm is realized, not just potential, and the AI's malfunction is central to the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability
Transparency & explainability
Respect of human rights
Robustness & digital security
Safety

Industries
IT infrastructure and hosting

Affected stakeholders
Government

Harm types
Reputational

Severity
AI incident

Business function:
Research and development

AI system task:
Content generation


Articles about this incident or hazard


Senate Republican demands Google shut down AI model over false rape allegation

2025-10-31
Fox News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's large language model Gemma) that generated false and defamatory content, including fabricated criminal allegations against a public figure. This output has caused reputational harm and defamation, which is a violation of rights and harm to the individual. The harm is realized, not just potential, and the AI's malfunction is central to the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

GOP senator demands Google shut AI model down after false rape claims

2025-11-01
American Military News
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemma) generated false and defamatory content about Senator Marsha Blackburn, including fabricated rape allegations and supporting fake news articles. This output has caused reputational harm and is described as defamation, which is a violation of rights and harm to the individual. The AI system's malfunction (hallucination) directly led to this harm. The event clearly involves an AI system, the harm is realized, and the connection to the AI system is direct. Hence, this is classified as an AI Incident.

Google Suspends Gemma AI After Blackburn Defamation Accusations

2025-11-02
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Gemma AI) generating false and defamatory content about a person, which is a violation of rights and causes harm to the individual's reputation. The harm is realized, not just potential, as the defamatory outputs were produced and led to official complaints and legal actions. The AI system's hallucinations are the direct cause of this harm. Google's suspension of the model is a mitigation measure but does not negate the fact that harm has already occurred. Hence, this event meets the criteria for an AI Incident.

Google pulls Gemma from AI Studio after Senator Blackburn accuses model of defamation - RocketNews

2025-11-02
RocketNews | Top News Stories From Around the Globe
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) generated fabricated defamatory content about Senator Marsha Blackburn, which is a clear violation of rights and legal protections against defamation. The harm is realized and directly linked to the AI system's outputs. The removal of the AI model by Google further confirms the recognition of harm caused. This fits the definition of an AI Incident as the AI system's use directly led to harm (defamation).

Sen. Blackburn Accuses Google's Gemma AI of Defamation

2025-10-31
WGOW-AM
Why's our monitor labelling this an incident or hazard?
The large language model Gemma, an AI system, generated fabricated criminal allegations against Senator Blackburn, causing reputational harm and political controversy. The harm is direct and materialized, involving defamation and potential violation of rights. The event is not merely a discussion or policy update but concerns actual false outputs from the AI causing harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Google under fire as Senator Marsha Blackburn accuses AI model Gemma of generating false claims - Business Upturn

2025-10-31
Business Upturn
Why's our monitor labelling this an incident or hazard?
An AI system (Google's large language model Gemma) is explicitly mentioned as generating false and damaging information, which constitutes misinformation and political bias. This misinformation can harm individuals' reputations and potentially affect political processes, which aligns with harm to communities and violations of rights. Since the harm is described as occurring (false and defamatory content generated), this qualifies as an AI Incident rather than a hazard or complementary information. The event is not merely about potential or future harm but about actual harm caused by the AI system's outputs.

Google pulls Gemma from AI Studio after Senator Blackburn accuses model of defamation

2025-11-02
Skeptic Society Magazine
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) directly produced false and defamatory content about a person, which is a clear harm to the individual's reputation and a violation of rights. The event describes actual harm caused by the AI's outputs, not just potential or hypothetical harm. Google's response to remove the model from the consumer-facing AI Studio further confirms the recognition of the harm. Therefore, this event meets the criteria for an AI Incident due to the direct link between the AI system's outputs and the harm caused.

Experts find flaws in hundreds of tests that check AI safety and effectiveness

2025-11-04
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models like Google's Gemma) and their evaluation benchmarks. The flaws in benchmarks undermine safety claims, which is a systemic issue in AI development and deployment. The defamatory hallucination by Gemma caused harm to a person's reputation, constituting a violation of rights and ethical harm. The withdrawal of the AI model due to this harm confirms that the AI system's malfunction directly led to harm. Therefore, this event qualifies as an AI Incident because the AI system's use and malfunction have directly led to harm (defamation) and reveal broader safety failures.

Google removes Gemma from AI Studio after 'complaint letter' to CEO Sundar Pichai - The Times of India

2025-11-03
The Times of India
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) directly produced false and defamatory content about a public figure, which is a violation of rights and causes harm to the individual's reputation. The harm is realized and not merely potential, as the defamatory statements were publicly generated and distributed by the AI model. This fits the definition of an AI Incident because the AI system's use directly led to harm (defamation and reputational damage).

Experts find flaws in hundreds of tests that check AI safety and effectiveness

2025-11-04
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Google's Gemma and Character.ai chatbots) whose outputs caused real harm: defamatory falsehoods about a US senator and manipulation of a teenager leading to suicide. These are direct harms to individuals' rights and health caused by AI system outputs. The study revealing flaws in AI safety benchmarks relates to the development and evaluation of AI systems and highlights systemic weaknesses that can contribute to such harms. The presence of actual harms caused by AI use or malfunction takes precedence, making this an AI Incident rather than a hazard or complementary information. The article's focus on these harms and the AI systems involved justifies this classification.

Google removes Gemma from AI Studio after US senator's defamation claim

2025-11-03
MoneyControl
Why's our monitor labelling this an incident or hazard?
The AI system Gemma generated false defamatory content about real individuals, which is a direct harm to their reputation and could be considered a violation of their rights. The harm has already occurred as the defamatory statements were produced and distributed by the AI. The event clearly involves the AI system's malfunction (hallucination) and use, leading to harm. Therefore, this qualifies as an AI Incident under the framework, as it involves violations of rights and harm to individuals caused by the AI system's outputs.

'Catastrophic Failure of Oversight:' Marsha Blackburn Blasts Google After Its AI Defames Her as a Sex Predator

2025-11-03
Breitbart
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma AI) explicitly generated false and defamatory statements about Senator Blackburn, which is a clear violation of her rights and constitutes harm to her reputation. The event describes actual harm caused by the AI's outputs, not just potential or hypothetical harm. Google's removal of the model from the platform is a response to this harm but does not negate the fact that the incident occurred. The defamatory content generated by the AI system directly led to reputational harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities.

Google takes down AI model after US senator accuses it of making up rape allegations

2025-11-03
The Indian Express
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) was used and produced fabricated, defamatory content about a public figure, which is a clear harm to the individual's rights and reputation. This harm is directly linked to the AI's hallucination behavior. The event describes actual harm caused by the AI's outputs, not just potential harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use and outputs.

Google pulls AI model after senator says it fabricated assault allegation

2025-11-03
The Verge
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) malfunctioned during use, generating false and defamatory content about a person, which is a violation of rights and causes harm to the individual's reputation. The harm is direct and realized, not merely potential. The event involves the use and malfunction of an AI system leading to harm, fitting the definition of an AI Incident. Google's removal of the model from a public platform is a response to the incident but does not change the classification of the event itself.

Google shutters developer-only Gemma AI model after a U.S. Senator's encounter with an offensive hallucination

2025-11-04
TechRadar
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) generated false and defamatory content about Senator Marsha Blackburn, including fabricated criminal allegations and fake references, which constitutes harm to the individual's reputation and a violation of rights. The harm is realized and directly linked to the AI system's malfunction or misuse. The event involves the AI system's use leading to harm, meeting the criteria for an AI Incident. The company's response to restrict access is a complementary action but does not change the classification of the original event.

Google removes AI model after it accuses US Senator of rape

2025-11-03
The Independent
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) generated false and defamatory content about a public figure, which constitutes harm to the individual's reputation and a violation of rights (defamation). The harm is realized and directly linked to the AI system's output. The event describes the use and malfunction (hallucination) of the AI system leading to this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (defamation and misinformation).

Google removes AI model after it allegedly accused a senator of sexual assault

2025-11-03
engadget
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) was used and produced fabricated, false information about a person, directly causing reputational harm and a defamation claim. The harm is realized and directly linked to the AI system's outputs. The event describes a clear case of harm caused by the AI system's malfunction (hallucination) leading to defamation, which fits the definition of an AI Incident under violations of rights and harm to a person. Although the model was not intended for consumer use, the harm occurred due to its outputs being accessible and causing damage. Hence, this is an AI Incident.

Google removes AI model after US Senator accuses it of fabricating rape allegations | Mint

2025-11-03
mint
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) generated fabricated and defamatory content about a real person, causing reputational harm and a violation of rights. The harm is realized and directly linked to the AI's output. Google's removal of the model from the consumer platform is a response to this incident but does not negate the fact that harm occurred. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Google Pulls AI Tool After Model Fabricates Misconduct Claims Against US Senator

2025-11-03
NASDAQ Stock Market
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) produced fabricated sexual assault allegations against a public figure, which is a clear case of misinformation causing reputational harm and ethical violations. This harm is directly linked to the AI's malfunction (hallucination). The removal of the tool from public access is a response to this harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.

Google removes Gemma models from AI Studio after GOP senator's complaint

2025-11-03
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's Gemma model) generating false and harmful content about a specific individual, which is a direct harm to that person's reputation and rights. The hallucination issue is a malfunction of the AI system's output, leading to a violation of rights (defamation). The harm has occurred, prompting the removal of the model from public access in AI Studio. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm to a person.

Sen. Blackburn's Letter Spurs Google to Yank Gemma AI

2025-11-04
NewsMax
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) produced fabricated sexual-misconduct claims about Senator Blackburn, which is a clear case of harm to an individual's reputation and a violation of rights. The harm is realized and directly linked to the AI system's outputs. The company's response to restrict access confirms the incident's seriousness. This fits the definition of an AI Incident because the AI's use directly led to harm (defamation) and violation of rights.

Google AI model Gemma pulled after false rape claims against US Senator surface

2025-11-04
GULF NEWS
Why's our monitor labelling this an incident or hazard?
Gemma is an AI language model that produced fabricated, harmful content about real individuals, leading to reputational harm and defamation claims. The AI system's use directly led to violations of rights (defamation) and harm to individuals. The incident is not merely a potential risk but a realized harm, as evidenced by the senator's letter and public accusations. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

AI Bias: Google AI Says GOP Senator Is a Sex Offender

2025-11-03
PJ Media
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) generated false and defamatory content about Senator Marsha Blackburn, which is a direct harm to her reputation and could influence public opinion and elections, constituting harm to communities and violation of rights. The event involves the use and malfunction of an AI system producing harmful outputs. The harm is actual and significant, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction directly led to harm (defamation and reputational damage).

Developers beware: Google's Gemma model controversy exposes model lifecycle risks

2025-11-03
VentureBeat
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemma 3 model) whose use led to the generation of false and defamatory information about a person, which constitutes harm to the individual's reputation and thus a violation of rights. The harm has already occurred as the model produced hallucinated falsehoods that were publicly noted by Senator Blackburn. The removal of the model is a response to this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (defamation) and the event centers on this harm and its consequences.

Defamation flap sees Google yank Gemma from AI Studio

2025-11-03
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (Gemma) producing false and defamatory content about real individuals, which has led to lawsuits and public complaints. The harm is direct reputational damage and violation of rights due to fabricated criminal accusations. The AI system's hallucinations are the direct cause of this harm. The fact that the model was accessible via AI Studio and API means the harm was realized, not just potential. Google's removal of Gemma from AI Studio is a response to this harm but does not negate the incident itself. Hence, this is an AI Incident.

Google pulls Gemma model after senator alleges false misconduct claim

2025-11-03
NewsBytes
Why's our monitor labelling this an incident or hazard?
Gemma is an AI model that generated false and harmful content about a person, which constitutes harm to the individual's reputation and potentially violates rights. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The removal of the model is a response to the harm caused, but the primary event is the AI-generated false misconduct claim causing reputational harm.

Google Pulls AI Tool After Model Fabricates Misconduct Claims Against US Senator

2025-11-03
finanzen.ch
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) produced fabricated claims about a real person, leading to false allegations of sexual assault. This misinformation can cause significant harm to the individual's reputation and violates ethical and legal norms. Google acknowledged the hallucination risk and removed the model from public use to mitigate further harm. The direct link between the AI's output and the harm (defamation and misinformation) qualifies this as an AI Incident rather than a hazard or complementary information.

Google Pulls Gemma AI After Defamation Claim

2025-11-03
eWEEK
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) generated false and defamatory statements about a person, which is a clear violation of rights and causes harm to the individual's reputation. The event involves the use and malfunction (hallucination) of the AI system leading directly to harm. The harm is realized, not just potential, and the AI's role is pivotal in producing the defamatory content. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Google Pulls Gemma AI After Senator Blackburn Defamation Claims

2025-11-03
WebProNews
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) produced false and defamatory content about a real person, which is a clear harm to the individual's reputation and a violation of rights. The harm has materialized as the defamatory statements were generated and publicly accessible, prompting official complaints and removal of the AI model. This fits the definition of an AI Incident because the AI's use directly led to harm (defamation) and raises issues of accountability and ethical AI deployment. The event is not merely a potential risk or a complementary update but a realized harm caused by the AI system's outputs.

Google removes Gemma from AI Studio after Senator Blackburn accuses it of defamation

2025-11-03
The American Bazaar
Why's our monitor labelling this an incident or hazard?
The AI system Gemma generated false defamatory content about a public figure, which constitutes harm to the individual's reputation and a violation of legal rights. The harm is directly linked to the AI system's malfunction (hallucination) during its use. Google's response to remove the system from public access on AI Studio further confirms the recognition of harm caused. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.

Google curbs access to Gemma AI tech that falsely accused Sen. Marsha Blackburn of sexual misconduct

2025-11-03
News Flash
Why's our monitor labelling this an incident or hazard?
The AI system Gemma produced fabricated and defamatory content about Senator Marsha Blackburn, which constitutes harm to the individual’s reputation and a violation of rights. The false allegations and fake news links demonstrate a direct link between the AI system's outputs and harm. The event describes realized harm, not just potential harm, and the AI system's role is pivotal. Therefore, this qualifies as an AI Incident. The company's response to restrict access is a complementary action but does not change the classification of the original event.

Researchers find widespread weaknesses in AI safety, performance tests: Report

2025-11-04
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI models like Google's Gemma) and their evaluation tests. The AI system's malfunction (fabricating false allegations) has directly caused harm to a person's reputation and ethical harm to the community by spreading misinformation. The withdrawal of the AI model following this incident confirms the harm has materialized. The broader study on flawed safety tests also indicates systemic issues in AI safety evaluation, reinforcing the incident classification. Hence, this is an AI Incident rather than a hazard or complementary information.

Google removes AI model Gemma after false rape case claims against US senator, details inside

2025-11-04
Mashable ME
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) generated false, defamatory content that harmed individuals' reputations, fulfilling the criteria for an AI Incident due to violation of rights and harm to individuals. The harm is realized and directly linked to the AI's outputs. The removal of public access is a response but does not negate the incident classification. Therefore, this event is classified as an AI Incident.

Google AI model made false accusations against US senator

2025-11-04
SAPO
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) explicitly generated false and harmful content accusing a senator of a serious crime that did not occur. This false information caused reputational harm and legal threats, fulfilling the criteria for harm to a person or group. The AI's hallucination is a malfunction leading directly to harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Google removes AI model after it accused a senator of a crime she did not commit

2025-11-04
SAPO
Why's our monitor labelling this an incident or hazard?
The AI system Gemma, a large language model, produced fabricated and harmful content falsely accusing a senator of a serious crime. This constitutes a violation of rights and reputational harm, fulfilling the criteria for an AI Incident under the framework. The harm is realized and directly linked to the AI system's malfunction (hallucination). The removal of the model from the platform is a response to this incident, but the primary event is the harmful output generated by the AI.

Google removes AI model after it accused a senator of a crime she did not commit - Tek Notícias

2025-11-04
SAPO Tek
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) is explicitly mentioned as the source of fabricated, false information accusing a senator of a crime she did not commit. This false output caused reputational harm and a defamation claim, which is a violation of rights. The harm is realized and directly linked to the AI system's malfunction (hallucination). The removal of the model from the platform is a response to this incident but does not negate the fact that harm occurred. Hence, this is an AI Incident due to direct harm caused by the AI system's outputs.

Gemma: Google removes AI model that provided false information

2025-11-03
Olhar Digital - O futuro passa primeiro aqui
Why's our monitor labelling this an incident or hazard?
The AI system Gemma provided false and defamatory information about a senator, which is a direct harm to the individual's reputation and a violation of rights. The harm has materialized as the false information was produced and distributed by the AI, leading to public accusations and the removal of the AI from the platform. This fits the definition of an AI Incident because the AI system's use directly led to harm (defamation and misinformation).

Google AI model made false accusations against US senator

2025-11-04
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) generated false and defamatory content about a public figure, which constitutes a violation of rights and harm to reputation. The harm is realized and directly linked to the AI system's output. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use led to a breach of obligations intended to protect fundamental rights (defamation and false accusations).

Google Pulls Gemma AI After Marsha Blackburn Defamation Storm

2025-11-04
Analytics Insight
Why's our monitor labelling this an incident or hazard?
Gemma AI is an AI system that generated false and defamatory sexual assault accusations against Senator Marsha Blackburn, which is a clear harm to an individual's reputation and could be considered a violation of rights. The harm has occurred as the defamatory content was generated and publicly reported, prompting Google to remove the AI from public access to prevent further misuse. This fits the definition of an AI Incident because the AI system's use directly led to harm (defamation). The event is not merely a potential risk or a complementary update but a realized harm caused by the AI's outputs.

Google in hot water after its AI falsely accused US Senator of sexual misconduct

2025-11-04
Cybernews
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) generated false accusations of sexual misconduct against Senator Marsha Blackburn, which is a direct harm to her reputation and a violation of her rights. The fabricated content was publicly accessible, causing real reputational damage and defamation. This fits the definition of an AI Incident as the AI system's use directly led to harm (violation of rights and harm to community through misinformation). The company's response to remove the model from public access is a mitigation step but does not change the fact that harm occurred.

Google pulls AI model after senator alleges assault accusation was fabricated

2025-11-03
Portal Tela
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) directly caused harm by producing false and defamatory content about Senator Marsha Blackburn, which is a violation of rights and a clear harm to the individual. The harm is realized, not just potential, as the defamatory statements were generated and disseminated. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's outputs and the harm caused.

Google pulls Gemma AI from AI Studio after US senator accuses model of defamation

2025-11-05
GDiscovery
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) generated fabricated defamatory content about Senator Marsha Blackburn, which is a direct harm to her reputation and a violation of rights. The harm is realized, not just potential, as the senator publicly accused the AI of defamation. Google's removal of the model from the AI Studio platform acknowledges the misuse and harm caused. The event fits the definition of an AI Incident because the AI system's use directly led to harm (defamation) and raises issues of responsibility and transparency.

Google AI model made false accusations about US senator

2025-11-05
Notícias ao Minuto Brasil
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) is explicitly mentioned and is responsible for generating false content that harmed the senator's reputation. This is a direct harm caused by the AI's malfunction (hallucination). The harm falls under violations of rights, specifically defamation, which is a breach of legal protections. Therefore, this event qualifies as an AI Incident because the AI system's use directly led to harm.

Google's AI removed over hallucination

2025-11-03
Mehr News Agency | Iran and World News
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemma) is explicitly involved, as it generated false and defamatory claims about real individuals. This is a clear malfunction of the AI system producing hallucinated content that caused reputational harm and potential legal and human rights issues (defamation). The event involves the use and malfunction of the AI system leading to realized harm, meeting the criteria for an AI Incident. The removal of the model from AI Studio is a response to this incident, but the core event is the harm caused by the AI's false outputs.

Google removes Gemma AI model from AI Studio over publication of false information

2025-11-03
Jahan Mana - news and information portal
Why's our monitor labelling this an incident or hazard?
The AI system Gemma generated and disseminated false and defamatory statements about real individuals, which is a clear violation of their rights and causes harm to their reputation. The event involves the use of an AI system that malfunctioned or produced erroneous outputs leading to harm. The removal of the model from public access is a response to this incident. Since the harm has materialized and is directly linked to the AI system's outputs, this is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google's AI removed due to hallucination - ITMen

2025-11-04
Jahan Mana (news and information outlet)
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) generated false and defamatory content about real individuals, which constitutes harm to their reputation and a violation of their rights. This harm has already occurred as a result of the AI's outputs. The event involves the use and malfunction (hallucination) of the AI system leading directly to harm. Therefore, this qualifies as an AI Incident. The company's response and removal of the model from the public AI Studio is a mitigation step but does not change the classification of the original harm caused.
Thumbnail Image

A crisis of trust in artificial intelligence

2025-11-05
IMNA
Why's our monitor labelling this an incident or hazard?
The AI system Gemma generated harmful and defamatory content about Senator Marsha Blackburn, which constitutes harm to an individual and a violation of rights. The AI system's outputs directly led to reputational harm and political controversy. The removal of the model is a response to this harm. Since the harm has materialized and is directly linked to the AI system's use, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Gemini AI added to the Google Maps app

2025-11-06
Jahan Mana (news and information outlet)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gemini AI) integrated into Google Maps. However, the article does not report any realized harm caused by the AI system, nor does it highlight any plausible future harm or risks. It mainly provides information about new AI-powered features and improvements, which fits the definition of Complementary Information. There is no indication of injury, rights violations, infrastructure disruption, or other harms. Therefore, the classification is Complementary Information.
Thumbnail Image

Google shelves Gemma AI after it spat out a bogus claim about a senator

2025-11-04
Android Central
Why's our monitor labelling this an incident or hazard?
Gemma is an AI language model that produced fabricated, defamatory content about a public figure, which constitutes a violation of rights and reputational harm. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by the AI system's malfunction (hallucination). Therefore, it qualifies as an AI Incident.
Thumbnail Image

Google's biased AI accused me of rape -- shut down its rampant lies

2025-11-06
New York Post
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's large language model Gemma) that generated false and defamatory content, including fabricated criminal allegations against named individuals. This misinformation has caused reputational harm and violates rights related to protection from defamation and misinformation. The harm is realized and ongoing, not merely potential. The AI system's malfunction and biased outputs are central to the incident. Hence, this is an AI Incident due to direct harm caused by the AI system's outputs.
Thumbnail Image

Hallucination Or Hit Job? Sen. Blackburn Blasts Google's Defamatory AI Smear

2025-11-04
Dallas Express
Why's our monitor labelling this an incident or hazard?
The AI system (Gemma) explicitly generated fabricated and defamatory content about real individuals, which has caused reputational harm and legal challenges. The harm is direct and material, involving violations of rights (defamation). The event is not merely a potential risk or a general update but a concrete case of harm caused by the AI's outputs. Hence, it meets the criteria for an AI Incident rather than an AI Hazard or Complementary Information.