Google AI Push Notification Includes Racial Slur, Prompts Apology

Google sent an AI-generated push notification referencing a BAFTA Film Awards incident, but the alert spelled out a racial slur. The notification spread offensive language and prompted public outrage. Google apologized, removed the alert, and pledged to improve safeguards to prevent similar AI-generated content errors. [AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system generated the news alert containing the offensive content, directly harming communities by spreading a racial slur and provoking public outrage. This fits the definition of an AI Incident because the AI system's use directly led to realized harm. The event is not merely a potential hazard or complementary information but a concrete harm caused by AI system malfunction or misuse. [AI generated]
AI principles
Fairness; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Business; General public

Harm types
Psychological; Reputational; Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

Google Apologizes After AI News Alert About BAFTA Film Awards Debacle Included The N-Word

2026-02-24
Deadline
Why's our monitor labelling this an incident or hazard?
An AI system generated the news alert containing the offensive content, directly harming communities by spreading a racial slur and provoking public outrage. This fits the definition of an AI Incident because the AI system's use directly led to realized harm. The event is not merely a potential hazard or complementary information but a concrete harm caused by AI system malfunction or misuse.

Google Apologizes After AI-Generated News Alert Included The N-Word

2026-02-24
Forbes
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate and send a news alert that included the N-word, a racial slur, which constitutes harm to communities through dissemination of offensive content. The harm has already occurred as the alert was sent to subscribers. Therefore, this qualifies as an AI Incident due to the AI system's role in producing and distributing harmful content.

Google Apologizes After News Alert Spells Out N-Word

2026-02-24
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the offensive notification was not AI-generated but resulted from a system error where safety filters failed to trigger properly. Since AI involvement is denied and the harm (offensive notification) was caused by a non-AI technical error, this does not qualify as an AI Incident. There is also no plausible future harm from AI systems indicated, so it is not an AI Hazard. The event provides context on the limitations of automated content filtering systems and the company's response, fitting the definition of Complementary Information.

Google Apologizes for Sending Out AI-Generated Push Notification That Used the N-Word

2026-02-24
Mediaite
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the push notification content. The AI-generated message contained a racial slur, a violation of human rights that harms communities by spreading offensive language. Google acknowledged the mistake and apologized, indicating that a malfunction of the AI system's content moderation led directly to the harm. Hence, this qualifies as an AI Incident due to realized harm caused by the AI system's output.

Google sent an AI-generated push alert that included a racial slur

2026-02-24
engadget
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the push alert that contained a racial slur, which is a violation of rights and causes harm to communities by spreading offensive and harmful content. The harm has already occurred as the notification was sent and caused public outrage. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's output.

Google apologises for news alert about BAFTAs racial slur

2026-02-25
Mediaweek
Why's our monitor labelling this an incident or hazard?
The key point is that Google explicitly stated the offensive notification was not AI-generated but caused by a system error unrelated to AI. The alert was computer-generated but not by an AI system as defined. The racial slur incident at the BAFTAs involved human and broadcasting errors, not AI malfunction or misuse. The apology and resignation relate to the broader scandal, not to AI system harm. Thus, no AI system caused or contributed to harm here, nor is there plausible future harm from AI. The article mainly provides an update and context on the incident and Google's response, fitting the definition of Complementary Information.

Google Apologises for BAFTAs Alert to 'See More' on Racial Slur

2026-02-26
American Renaissance
Why's our monitor labelling this an incident or hazard?
An AI system can reasonably be inferred here: Google's content system automatically generates push notifications based on content analysis. The failure of its safety features led directly to the dissemination of offensive content, harming communities through exposure to a racial slur and constituting a violation of rights. Since the harm occurred because of the AI system's malfunction in content characterization and notification, this qualifies as an AI Incident.

Google "Deeply Sorry" After Notification For BAFTA Uses N-Word

2026-02-25
NDTV
Why's our monitor labelling this an incident or hazard?
The incident directly involves an AI system's malfunction in content moderation and notification generation, which led to offensive and harmful language being disseminated to users. The AI system's failure to filter out the racial slur violated community standards and potentially harmed affected groups, meeting the criteria for an AI Incident under violations of rights and harm to communities. The harm has already occurred, and the AI system's role in causing it is pivotal.

Google apologises after AI news alert includes racial slur

2026-02-25
MoneyControl
Why's our monitor labelling this an incident or hazard?
The AI system was used in generating the news alert, and its output included a racial slur, which is a violation of human rights and causes harm to communities. The harm has already occurred as the offensive notification was sent to users and widely circulated, leading to public criticism and distress. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's output.

Google Apologizes for Sending the Worst Push Notification You Can Possibly Imagine

2026-02-25
Futurism
Why's our monitor labelling this an incident or hazard?
The incident involves an automated system distributing news notifications that included a racial slur, harming communities by spreading offensive content. Despite Google's claim that AI was not involved, the system is automated and performed tasks indicative of AI-like content processing (recognizing euphemisms and generating notification text). The harm is realized: offensive content was distributed and caused social harm, meeting the criteria for an AI Incident. The event is not merely a product launch, general news, or a potential future harm, but a concrete harm caused by an automated AI-related system.

Google Apologises For Racial N-Word News Alert In BAFTA Controversy Coverage

2026-02-25
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event describes a malfunction in an automated news alert system that led to the dissemination of a racial slur, harming communities and violating rights related to dignity and non-discrimination. Although Google claims the error did not involve AI, the system's automated filtering and notification process is indicative of AI-like content processing. The harm is realized and direct: offensive content was sent to users and caused public backlash. This fits the definition of an AI Incident because the malfunction of an automated system led directly to harm; the incident is not merely a potential hazard or complementary information, but a concrete harm event.

Google apologises after news alert displays racial slur due to safety failure

2026-02-25
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The event describes a failure in Google's AI-powered content safety filters that allowed a racial slur to be displayed in a news alert, causing harm through offensive content dissemination. The AI system's malfunction directly led to this harm, fulfilling the criteria for an AI Incident. Although the harm is non-physical, it affects communities and violates norms against hate speech, which is within the scope of AI Incident harms. The event is not merely a product update or general news, nor is it a potential future harm; the harm has already occurred. Hence, the classification is AI Incident.