Google Uses AI to Remove 160 Million Fake Reviews and Block Fraudulent Apps


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google deployed generative AI to detect and remove 160 million fake reviews and block 266 million risky app installations on Google Play. The AI also restricted sensitive data access for over 255,000 apps, preventing fraud, review bombing, and reputational harm to developers across 185 markets.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved in detecting and removing fake reviews and preventing fraudulent app installations. Its use directly mitigated harm by protecting developers' reputations and users from scams, i.e. harms to communities and property (reputation and financial security). Because the AI system's use directly prevented or mitigated harm, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
Industries
Digital security

Severity
AI incident

Business function:
Monitoring and quality control

AI system task:
Event/anomaly detection


Articles about this incident or hazard


Google removed an incredible 160 million fake ratings!

2026-02-21
B92
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI in detecting fake reviews and malicious apps, which are AI systems involved in preventing harms such as reputational damage and fraud. No harm caused by AI malfunction or misuse is reported; instead, the AI is used to mitigate and prevent harm. The main focus is on Google's improvements and successes in AI-driven security and integrity measures, which is an update on the AI ecosystem and responses to AI-related challenges. Hence, it fits the definition of Complementary Information rather than an Incident or Hazard.

Google removed 160 million fake reviews with the help of artificial intelligence

2026-02-22
Avaz.ba
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in detecting and removing fake reviews and preventing fraudulent app installations. Its use directly mitigated harm by protecting developers' reputations and users from scams, i.e. harms to communities and property (reputation and financial security). Because the AI system's use directly prevented or mitigated harm, this qualifies as an AI Incident rather than a hazard or complementary information.

Google removed an incredible 160 million fake ratings

2026-02-21
Nezavisne novine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI in the review process to identify complex patterns of malicious software and fake reviews. The AI system's deployment has directly prevented harm such as reputation damage (harm to communities and developers) and potential security risks from fraudulent apps (harm to users and property). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm mitigation and protection against harms that would otherwise occur.

Internet safety comes first: Google removed 160 million FAKE RATINGS

2026-02-22
Srpskainfo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in the development and deployment of detection models that have directly prevented harms including fraud (banking scams via malicious apps), reputational harm to developers (through review bombing), and potential security risks to billions of devices. Since the AI system's use has directly led to the prevention of these harms, this qualifies as an AI Incident under the framework, as the AI system's involvement is pivotal in addressing and mitigating realized harms.

Google removed an incredible 160 million fake ratings

2026-02-21
Dnevne novine Dan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI in the review process to identify and remove 160 million fake reviews and block malicious apps, directly preventing harms such as reputational damage to developers and potential financial fraud, which fall under harm to communities and property. Because the AI system's use directly led to harm prevention, this event is classified as an AI Incident.

Source.ba: Google removed 160 million fake reviews with the help of artificial intelligence

2026-02-22
Source.ba
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI solutions used to identify and remove fake reviews and prevent fraudulent app installations, which directly protects users and developers from harm. The reputational harm to developers and potential financial harm from bank fraud are real harms that have been mitigated by AI. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to preventing or mitigating harm to people and communities.

Google removed 160 million fake reviews with the help of artificial intelligence

2026-02-22
Portal 072info
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI) is explicitly mentioned as being used to identify complex patterns of malicious software and fake reviews. The AI's use has directly led to the removal of harmful content and prevention of fraudulent app installations, which mitigates harm to users and developers. Since the AI system's use has directly prevented or reduced harm, this qualifies as an AI Incident under the definition of harm to communities and property. The event is not merely a general update or future risk but describes realized harm prevention through AI use.

Google removed 160 million fake reviews with the help of artificial intelligence

2026-02-22
Haber.ba
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in the detection and removal of fake reviews and in blocking risky app installations, which directly prevents harm to communities (app users and developers) and protects privacy rights. The event reports realized harm prevention and mitigation through AI use, not just potential risks. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to preventing or mitigating harms related to misinformation, fraud, and privacy violations.

Google Play used AI to block 1.75 million bad

2026-02-24
Aktuelno
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in the development and operation of security measures to block harmful apps and spam, which is a positive application aimed at preventing harm. There is no direct or indirect harm caused by the AI system reported; instead, the AI helps mitigate risks. Therefore, this is not an AI Incident or AI Hazard. The article provides contextual information about AI's role in security and regulatory challenges, which fits the definition of Complementary Information as it supports understanding of AI's impact and governance without reporting new harm or plausible future harm.

Google 'cleaned up' the Play Store with the help of artificial intelligence; here are the results

2026-02-25
tportal.hr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI models) in the development and operation of security measures for the Google Play Store. However, the article reports on the positive outcomes of these AI applications in preventing harm rather than any harm caused by AI. There is no indication of injury, rights violations, or other harms caused by the AI system itself. Instead, the AI is used to mitigate risks and improve security. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information about AI's role in enhancing security and regulatory context.