Google Uses Gemini AI to Block Billions of Malicious Ads



Google deployed its Gemini AI system to block approximately 8.2 billion online ads in 2023 that violated company policies, including ads created by malicious actors using generative AI. The system intercepted over 99% of harmful ads before they reached users, significantly reducing exposure to deceptive and dangerous content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI system (Gemini) in detecting and blocking malicious ads, including those generated by AI for deceptive purposes. The AI system's role is pivotal in preventing harm by intercepting harmful content before it reaches users, thus directly addressing harm to communities and individuals. Since the AI system's use has directly led to harm prevention and involves real-world impacts, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
Industries
Media, social platforms, and marketing
Digital security

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Event/anomaly detection


Articles about this incident or hazard


AI helped Google block millions of "malicious ads"

2026-04-16
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Gemini) in detecting and blocking malicious ads, including those generated by AI for deceptive purposes. The AI system's role is pivotal in preventing harm by intercepting harmful content before it reaches users, thus directly addressing harm to communities and individuals. Since the AI system's use has directly led to harm prevention and involves real-world impacts, this qualifies as an AI Incident rather than a hazard or complementary information.

AI helped Google block 99% of "malicious ads"

2026-04-17
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Gemini) used to detect and block malicious ads, including those involving deepfakes and deceptive content. The AI system's use directly prevented harm by stopping over 99% of policy-violating ads before they reached users, a form of harm mitigation for communities and individuals. Although the AI system is being used to prevent harm rather than cause it, the framework covers harms related to AI system use, including misuse or failure to comply with legal frameworks. Here the system is functioning as intended, but the event still involves the AI system's use in a context of harm (malicious ads). This fits best as an AI Incident because it involves direct AI system use linked to harm prevention and the management of harmful content, a significant societal impact involving AI.

Google blocked millions of malicious ads with the help of AI

2026-04-16
EL NUEVO SIGLO
Why's our monitor labelling this an incident or hazard?
An AI system (Gemini) is explicitly mentioned as being used to analyze advertising data and block malicious ads, including those created with generative AI. The system's use directly prevents harm by stopping the spread of misleading and potentially harmful advertisements, which can cause harm to communities and individuals. Since the AI system's use is actively mitigating harm rather than causing it, and the article focuses on the deployment and effectiveness of this AI system in preventing harm, this qualifies as Complementary Information rather than an Incident or Hazard.

Google blocks millions of malicious ads with AI

2026-04-17
24 Horas
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Gemini) in the development and use phases to detect and block malicious ads. This use directly prevents harm by stopping users' exposure to harmful content, which would otherwise harm communities and individuals. Since the AI system's role is pivotal in preventing these harms, and the harms are realized or actively prevented, the event involves direct use of AI leading to harm prevention and qualifies as an AI Incident rather than a hazard or complementary information.

Google blocks eight billion ads with help from "Gemini"

2026-04-17
Tribuna Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Gemini) in the development and use phases to detect and block harmful ads, including those generated by malicious actors using generative AI. The AI system's role is pivotal in preventing the dissemination of harmful content which, if allowed, would cause harm to communities and individuals. Since the system is directly involved in mitigating significant harm that is actively being prevented, this qualifies as an AI Incident due to the direct link between AI use and harm prevention in a context where harm is averted but clearly present as a risk.