Google AI Overviews Directs Users to Scam Phone Numbers, Leading to Financial Losses

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google's AI Overviews feature has repeatedly displayed fraudulent customer support numbers in search results, leading users to call scammers posing as legitimate representatives. Victims have reported financial losses and unauthorized charges after sharing sensitive information, highlighting the risks of unverified AI-generated content in critical search queries. [AI generated]
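The defensive advice these reports converge on — dial only numbers you have verified against the company's official contact page — can be sketched as a small check. This is a minimal illustration under stated assumptions; the helper names and page text are hypothetical and not drawn from any cited article:

```python
import re

def normalize_number(raw: str) -> str:
    """Strip everything except digits so formats like (800) 555-0199
    and 800-555-0199 compare equal."""
    return re.sub(r"\D", "", raw)

def number_matches_official(candidate: str, official_page_text: str) -> bool:
    """Return True only if the digits of `candidate` appear as a phone
    number somewhere in the text of the company's official contact page."""
    target = normalize_number(candidate)
    if len(target) < 7:  # too short to be a plausible phone number
        return False
    # Collect every digit run on the official page and compare normalized forms.
    page_numbers = {
        normalize_number(m)
        for m in re.findall(r"[\d()+\-.\s]{7,}", official_page_text)
    }
    return target in page_numbers
```

A check like this only helps if the page text really comes from the company's own domain; a number surfaced in an AI summary is exactly what it should not be compared against.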

Why's our monitor labelling this an incident or hazard?

The AI system (Google's AI Overviews) is explicitly involved as it generates summaries that include scam phone numbers. The use of these AI-generated summaries has directly led to harm (financial fraud and unauthorized charges), fulfilling the criteria for an AI Incident. The harm is to individuals (harm to persons) through deception and financial loss. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's outputs and realized harm. [AI generated]
AI principles
Accountability, Robustness & digital security, Safety, Transparency & explainability, Respect of human rights

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Economic/Property, Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

Google's AI Overviews showing scam customer service numbers - Gizmochina

2025-08-18
Gizmochina
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved as it generates summaries that include scam phone numbers. The use of these AI-generated summaries has directly led to harm (financial fraud and unauthorized charges), fulfilling the criteria for an AI Incident. The harm is to individuals (harm to persons) through deception and financial loss. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's outputs and realized harm.

Google's AI Search Might Recommend You Call a Scammer

2025-08-18
Lifehacker
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that generates search result summaries including phone numbers. The AI system's outputs have directly led users to call scam numbers, resulting in financial loss, which constitutes harm to persons. This harm is directly linked to the AI system's malfunction or flawed use of data. Therefore, this qualifies as an AI Incident under the definition of harm caused by AI system use.

Think twice about using numbers supplied by Google's AI Overviews

2025-08-18
Android Authority
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate customer support contact information. The AI's incorrect output directly led the user to call a scam number, causing financial harm (unauthorized credit card charges). This fits the definition of an AI Incident as the AI system's use directly led to harm to a person (financial harm).

How scammers are using 'Google AI Overviews' to fool you

2025-08-19
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is directly involved in the use phase, where its outputs (summaries with phone numbers) have led users to contact scammers, resulting in financial harm and potential personal data theft. This constitutes harm to persons (financial loss and privacy breach) caused directly by the AI system's outputs. Therefore, this qualifies as an AI Incident under the definition of harm to persons caused directly or indirectly by the AI system's use.

Google's AI could lead you into scam support numbers on Search

2025-08-18
Digital Trends
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Google's AI system generating or summarizing customer support numbers, some of which are fake and controlled by scammers. This has directly led to financial harm to users who called these numbers and shared sensitive information. The harm includes financial loss and deception, which fits the definition of an AI Incident as the AI system's use has directly led to harm to persons (financial scams). The AI system's malfunction or misuse in generating or surfacing unreliable contact information is central to the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Google's AI overviews can send you straight to scammers. Here's how to stay safe | Mint

2025-08-19
mint
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI overviews) is explicitly mentioned and is responsible for generating misleading contact information that directs users to scammers. This misuse or malfunction of the AI system has directly led to harm to individuals (financial harm and risk of fraud). Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm to people through facilitating scams.

Google's AI overviews showing scam support numbers

2025-08-18
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI overviews) is involved in the use phase by providing support numbers that are used by fraudsters to scam consumers. The harm (financial fraud and personal data compromise) has occurred as a direct consequence of users relying on AI-generated information. This fits the definition of an AI Incident because the AI system's outputs have directly or indirectly led to harm to people (fraud victims).

Google's AI Overviews led users astray, reports say some phone numbers are scams

2025-08-18
Android Central
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that generated outputs (phone numbers) which directly led users to scam calls, causing financial and psychological harm to individuals. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people. The harm is realized, not just potential, and the AI's malfunction or misuse (providing unverified scam numbers) is a contributing factor. Therefore, this is classified as an AI Incident.

Beware! Google AI in Search could connect you to scammers, here's how

2025-08-19
Digit
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-powered search results) whose outputs have directly led to harm (financial loss and exposure to scams) to users. The AI system's malfunction or misuse in generating inaccurate or fraudulent contact information has caused real, realized harm to people, fulfilling the criteria for an AI Incident. The harm is direct and material, involving financial loss and potential fraud, which aligns with harm to persons or groups. Therefore, this event is classified as an AI Incident.

Google AI Overviews List Scam Numbers in Support Searches

2025-08-18
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Google's AI Overviews, powered by large language models, generated and displayed fraudulent customer support phone numbers that users called, resulting in financial losses and coercion. The AI system's use and malfunction (lack of verification) directly led to harm to people, fulfilling the criteria for an AI Incident. The harm is materialized and significant, involving financial scams and exploitation of vulnerable populations. Although Google is working on fixes and has removed some of the fraudulent numbers, the incident as described involves actual harm caused by the AI system's outputs, not just potential or future harm. Thus, the event is best classified as an AI Incident.

Google AI Overviews Directs Users to Scam Support Numbers, Sparking Losses

2025-08-18
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) whose use has directly led to harm (financial losses and privacy breaches) to users by providing scam phone numbers. The harm is realized and significant, involving fraud and deception facilitated by the AI's outputs. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm to people, fulfilling the criteria of injury or harm to persons and harm to communities through fraud.

Google AI Overviews accused of showing scam numbers in search results - Phandroid

2025-08-19
Phandroid - Android News and Reviews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google AI Overviews) generating false information (fake scam phone numbers) that has directly led to harm, including attempted financial fraud. The AI system's outputs are misleading users, causing real-world harm through scams. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons (fraud attempts and potential financial injury).

Scammers are sneaking into Google's AI summaries to steal from you - how to spot them

2025-08-19
ZDNet
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Google's AI-powered search summaries and OpenAI's ChatGPT) whose outputs were manipulated to present fake phone numbers. This manipulation directly caused harm to users who were scammed financially, fulfilling the criteria for an AI Incident. The harm is realized (financial loss), and the AI system's malfunction or misuse (prompt injection leading to fraudulent outputs) is a direct contributing factor. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Scammers Are Gaming Google's AI Overviews With Fake Support Numbers -- Here's How to Stay Safe

2025-08-19
Gizbot
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved as it generates summaries by aggregating web content. The scammers manipulate the data sources that the AI relies on, causing the AI to present fraudulent phone numbers as legitimate. This misuse of the AI system's outputs has directly led to financial harm to users, fulfilling the criteria for an AI Incident under harm to persons. The event involves the use of an AI system and the resulting harm is realized, not just potential, so it qualifies as an AI Incident rather than a hazard or complementary information.

Warning: Google AI Mode In Search May Lead You To Scammers

2025-08-19
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-powered search with AI Overviews and AI Mode) whose outputs have directly led users to contact scammers, resulting in financial harm (loss of money through credit card fraud). The AI system's aggregation and summarization of repeatedly seeded, deceptive sources caused the harm indirectly by amplifying fake contact information. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to persons (financial harm through scams).

Google AI Scam Uses Fake Numbers in Search Summaries

2025-08-20
TechNadu
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (Google's AI-powered search summaries) whose outputs have directly led to financial harm to users by presenting fake customer service numbers. The AI system's malfunction or exploitation (via prompt injection) is a contributing factor to the harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (financial losses and deception).

Scammers have infiltrated Google's AI responses - how to spot them

2025-08-21
ZDNet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of AI systems (Google's AI Overviews, AI Mode, and OpenAI's ChatGPT) in generating or summarizing search results that included fake phone numbers used by scammers. The harm (financial loss due to scams) has already occurred as a direct consequence of users trusting AI-generated contact information. The AI system's outputs were exploited or manipulated, leading to the dissemination of false information that caused harm. Therefore, this event meets the criteria for an AI Incident due to direct harm to people (financial harm) caused by the AI system's use and outputs.

Be very careful with Google's customer service numbers, as the AI is fueling a new scam

2025-08-20
3D Juegos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google AI Overviews) that indirectly caused harm by presenting false information that led to a financial scam. The AI system's malfunction or misuse in displaying unverified phone numbers was a contributing factor to the incident. The harm (financial loss and fraud) is realized, meeting the criteria for an AI Incident. The article also mentions responses by Google and OpenAI, but the primary focus is on the incident itself.

Beware of AI-powered scams: what one traveler learned when seeking help on Google

2025-08-18
WWWhat's new
Why's our monitor labelling this an incident or hazard?
The article details how an AI-powered feature in Google search provided false but plausible contact information, which was then used by scammers to defraud a user of nearly $800. The AI system's role in aggregating and presenting unverified data directly contributed to the harm. This fits the definition of an AI Incident because the AI system's use led to a violation of property rights (financial loss) and harm to the individual. The harm is realized, not just potential, and the AI system's involvement is clear and pivotal in enabling the scam.

"I'm fairly tech-savvy and I fell for this": Google's AI summaries are starting to include scams

2025-08-20
Xataka
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's generative AI summaries) that provided misleading information (a fake phone number) which was relied upon by the user, resulting in financial harm (unauthorized charges). This constitutes direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident under harm to persons (financial harm) and communities (scam impact). The AI system's malfunction or misuse led to realized harm, not just a potential risk, so it is classified as an AI Incident rather than a hazard or complementary information.

Be careful what you tell Google's AI. This man asked for a phone number and ended up being robbed

2025-08-20
Xataka Móvil
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Google's AI Overviews and ChatGPT) that generated or presented fraudulent contact information manipulated by scammers. This led directly to a financial loss for the user, constituting harm to a person. The AI systems' outputs were exploited to deceive the user, making the AI's involvement a direct contributing factor to the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm.

Beware of Google's AI summaries: they can recommend a fake customer service number and steal all your data

2025-08-18
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Google's AI-generated summaries and ChatGPT) that produced misleading information, which directly led to a user being scammed and financially harmed. The AI system's outputs were a contributing factor in the harm, as the user relied on the AI-generated phone number that was fraudulent. This meets the criteria for an AI Incident because the AI system's use indirectly led to injury or harm to a person (financial harm through fraud). The event is not merely a potential risk or a complementary update but a realized harm caused by AI-generated content.

Scams in Google's AI summaries: what they are and how to protect yourself

2025-08-21
Xataka
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google's AI-powered search summaries) whose outputs have directly led to harm in the form of scams and potential financial or security damage to users. The AI system's use is central to the incident, as it generates the fraudulent summaries that users rely on, which scammers exploit. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm (scams and misinformation causing harm to individuals).

Google AI scams: how artificial intelligence enables new fraud

2025-08-21
Iprofesional.com
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated summaries) was used in a way that directly led to harm: a user was scammed out of money after relying on AI-provided contact information that was fraudulent. The AI's method of aggregating unverified data from unreliable sources caused the dissemination of false information, which was exploited by criminals. This fits the definition of an AI Incident because the AI system's use directly led to harm (financial loss and fraud). The article does not merely warn about potential harm but reports an actual incident with realized harm.

Google's AI summaries have also started spreading scams. And even advanced users are falling for them

2025-08-21
Genbeta
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI-generated summaries) whose outputs directly led to financial harm (fraudulent charges) to a user. The AI system's malfunction or misuse (lack of verification of data) caused the dissemination of false contact information, which was exploited by scammers. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (financial loss) and harm to communities (widespread fraud).