AI Voice Cloning Scams Target Elderly, Leading to Financial Losses

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Scammers used AI voice-cloning technology to impersonate loved ones, convincing elderly victims like Ruth Card to urgently withdraw and send money for fake emergencies. The realistic synthetic voices led to significant financial losses and emotional distress, highlighting the growing threat of AI-enabled impersonation fraud.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI voice-generating software used by scammers to mimic voices of loved ones, which directly led to impersonation scams causing financial harm and emotional distress to victims. This fits the definition of an AI Incident because the AI system's use directly led to harm to people (category a).[AI generated]
AI principles
Accountability, Safety, Transparency & explainability, Privacy & data governance, Respect of human rights, Robustness & digital security, Democracy & human autonomy

Industries
Digital security, Financial and insurance services

Affected stakeholders
General public

Harm types
Economic/Property, Psychological

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

AI is helping scammers mimic voices of people's loved ones

2023-03-06
Raw Story
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice-generating software used by scammers to mimic voices of loved ones, which directly led to impersonation scams causing financial harm and emotional distress to victims. This fits the definition of an AI Incident because the AI system's use directly led to harm to people (category a).

Scammers are using AI voice generators to sound like your loved ones. Here's what to watch for

2023-03-08
ZDNet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice generators to impersonate family members in phone scams, which have directly led to financial losses for victims. The AI system's use is central to the harm, as it enables scammers to convincingly mimic voices and deceive people. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons (financial and emotional harm).

Scammers are using voice-generating AI to trick people out of money

2023-03-06
TechSpot
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice-generating systems that can replicate a person's voice from a short audio clip, which scammers use to deceive victims into transferring money. This constitutes direct harm to people (financial loss) caused by the use of AI systems. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to individuals through fraudulent scams.

A couple in Canada were reportedly scammed out of $21,000 after getting a call from an AI-generated voice pretending to be their son

2023-03-06
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-generated voice to impersonate the son of the victims, which directly led to a scam resulting in a $21,000 loss. This is a clear case where the AI system's use caused harm to people, fulfilling the criteria for an AI Incident. The harm is realized and directly linked to the AI system's malicious use in the scam.

Scammers stole thousands from old couple by 'making AI clone of their grandson'

2023-03-06
Daily Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice cloning technology being used to create convincing voice replicas of victims' relatives, which scammers then used to deceive elderly people into transferring money. This is a direct use of AI systems leading to realized harm (financial loss) to individuals, fitting the definition of an AI Incident. The harm is direct and material, involving deception facilitated by AI-generated voice clones.

Billions of iPhone and Android owners warned over cursed 'AI call'

2023-03-06
The Sun
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that clone human voices using AI voice-generating software and text-to-speech tools. The misuse of these AI systems by scammers has directly led to financial harm (loss of money) to people, which qualifies as harm to individuals. Therefore, this constitutes an AI Incident due to the realized harm caused by the malicious use of AI voice cloning technology.

Scammers are using voice-cloning A.I. tools to sound like victims' relatives in desperate need of financial help. It's working.

2023-03-05
Fortune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI voice cloning technology to create convincing fake voices of victims' relatives, which scammers then use to trick victims into sending money. The harm (financial loss) is realized and directly linked to the AI system's use. This fits the definition of an AI Incident as the AI system's use has directly led to harm to people. The article also notes the response by a company restricting free access to voice cloning tools to mitigate misuse, but the primary event is the realized harm from AI misuse.

A couple in Canada were reportedly scammed out of $21,000 after getting a call from an AI-generated voice pretending to be their son

2023-03-06
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-generated voice to impersonate a family member, which directly led to a scam resulting in a $21,000 loss. The AI system's involvement is clear and central to the harm caused. The harm is realized and significant, involving financial loss due to fraudulent use of AI-generated synthetic speech. Therefore, this qualifies as an AI Incident under the definition of harm caused by AI system use.

They thought loved ones were calling for help - it was an AI scam

2023-03-05
Stuff
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology to impersonate victims' relatives, leading to financial scams and emotional harm. The AI system was used maliciously to generate convincing fake voices, which directly caused harm to victims who sent money under false pretenses. This fits the definition of an AI Incident as the AI system's use directly led to harm to persons (financial and emotional). The involvement of AI is clear and central to the harm described, and the harm is realized, not just potential.

Rising scams use AI to mimic voices of loved ones in financial distress

2023-03-06
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice-generating systems being used to mimic voices convincingly, facilitating scams that have caused real financial losses and emotional harm to victims, particularly the elderly. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial and emotional harm). The discussion of regulatory guidance and company responses constitutes complementary information but does not overshadow the primary incident of harm caused by AI misuse. Therefore, the event is best classified as an AI Incident.

They thought loved ones were calling for help. It was an AI scam.

2023-03-05
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice cloning AI) to generate synthetic voices that impersonate individuals, leading to direct financial harm to victims through scams. The AI system's use is central to the harm, as it enables scammers to convincingly mimic voices and deceive victims. The harm is realized (financial loss), and the AI system's role is pivotal in causing this harm. Therefore, this qualifies as an AI Incident under the framework.

How scammers are using AI voice cloning to trick victims into sending money

2023-03-06
Boing Boing
Why's our monitor labelling this an incident or hazard?
The use of AI voice cloning technology to create a fake conversation that deceived the parents into sending money constitutes direct involvement of an AI system in causing harm. The harm here is financial loss to individuals due to malicious use of AI-generated synthetic voice, which fits the definition of an AI Incident as it directly led to harm to persons (financial harm).

Scammers Using Voice Cloning AI to Trick Grandma Into Thinking Grandkid Is in Jail

2023-03-06
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning systems to impersonate family members' voices, which directly led to financial harm to victims through scams. The AI system's use was central to the deception and harm, fulfilling the criteria for an AI Incident. The harm is realized (money lost), and the AI system's role is pivotal in enabling the scam. Hence, this is classified as an AI Incident.

AI Tools Used by Scammers to Pretend to be Victim's Relatives

2023-03-06
Tech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology to create convincing fake voices that scammers used to deceive elderly parents into sending money. This is a direct harm to individuals (financial loss) caused by the malicious use of an AI system. The involvement of AI in the scam is clear and central to the incident, fulfilling the criteria for an AI Incident due to harm to persons through fraudulent activity enabled by AI.

Scammers are using AI voice cloning to imitate your loved ones over the phone

2023-03-07
TweakTown
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-powered voice cloning tools to create convincing fake voices of loved ones, which scammers use to manipulate victims into transferring money. This involves the use of an AI system (voice cloning AI) in the malicious use phase, directly leading to financial harm to victims, which qualifies as harm to persons (a). Therefore, this is an AI Incident due to realized harm caused by the malicious use of an AI system.

Voice-Clone AI Scams -- it's NOT ME on the Phone, Grandma

2023-03-06
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice cloning systems (e.g., ElevenLabs) being used by scammers to impersonate loved ones and trick victims into sending money. This constitutes direct harm to people through financial scams, fulfilling the criteria for an AI Incident. The AI system's use is central to the harm, as it enables more effective impersonation and deception. Although some skepticism about the extent of AI involvement is noted, the overall narrative confirms real victims and harm caused by AI-enabled scams.

They thought loved ones were calling for help. It was an AI scam.

2023-03-06
UnionLeader.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice synthesis technology to clone voices of victims' relatives, which was then used to deceive and defraud them of money. The AI system's involvement is central to the scam's success and the resulting financial and emotional harm. The harm is realized and direct, as victims lost money and suffered distress. This fits the definition of an AI Incident because the AI system's use directly led to harm to persons and communities through fraud and impersonation.

Bad Actors Are Using Voice Impersonation To Dupe Victims

2023-03-06
DailyAlts
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice-generating software that analyzes and replicates a person's voice to create synthetic speech used by criminals to scam victims. The harm is realized as victims have been financially defrauded, which constitutes harm to persons. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm through fraudulent impersonation and financial loss.

How AI voice cloning perfected phone scams

2023-03-07
Bullfrag
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as voice cloning technologies that replicate human voices to deceive victims in phone scams. The scam led to actual financial harm to an elderly victim, fulfilling the harm criteria (a) injury or harm to persons, here financial and emotional harm. The AI system's use was central to the scam's success, making it a direct cause of harm. The article also references the broader trend and risks of such AI misuse, confirming the AI system's pivotal role in the incident.

We frantically withdrew £1,800 in cash after our 'grandson's' pleas for help - but there was a chilling AI twist

2023-03-07
Internewscast
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI technology was used to clone and fake the grandson's voice, which directly caused the couple to lose money. This constitutes an AI Incident because the AI system's use led directly to harm (financial loss) through deception. The involvement of AI in the scam is clear and the harm has materialized, fulfilling the criteria for an AI Incident.

Warning of a scam carried out with artificial intelligence: they clone your parents' voice

2023-03-10
AS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Microsoft Vall-e) that clones voices from short audio clips. This AI system was used in the scam to impersonate family members, causing victims to lose money. The harm is direct financial loss to individuals, which qualifies as harm to persons or communities. Therefore, this is an AI Incident because the AI system's use directly led to realized harm through fraudulent activity.

This is how artificial intelligence is being used to carry out phone scams

2023-03-08
infobae
Why's our monitor labelling this an incident or hazard?
The article clearly states that AI voice synthesis tools are being used by criminals to impersonate relatives and scam victims out of money. This involves the use of an AI system (voice cloning) in the use phase, directly causing financial harm to individuals. The harm is realized and significant, with reported losses of millions of dollars. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use in fraudulent activity.

They thought their loved ones were asking for help: it was an artificial intelligence scam

2023-03-08
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI voice cloning technology was used by scammers to impersonate relatives and convince victims to send money, resulting in actual financial losses. The AI system's use is central to the harm caused, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), involving deception leading to monetary loss, which is a significant harm to people. Therefore, this event is classified as an AI Incident.

Cloning your family's voices: how hackers use artificial intelligence to steal your savings

2023-03-11
El Español
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI voice cloning technology to impersonate family members in phone calls, resulting in victims transferring money to scammers. The AI system's role is pivotal in generating convincing fake voices that deceive victims, causing direct financial harm and emotional distress. The harm is realized, not just potential, with documented cases of losses. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to persons and communities through fraud and deception.

Cloning someone's voice to ask their grandparents for money over the phone: the new con that artificial intelligence makes possible

2023-03-08
Genbeta
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning (deepfake audio) to perpetrate scams that have caused direct financial harm to elderly victims. The AI system's role is pivotal in enabling the impersonation of relatives' voices, which directly led to the harm (financial loss). The article provides concrete examples of such scams occurring, not just potential risks, fulfilling the criteria for an AI Incident.

Artificial intelligence lends itself to everything: how criminals use it to scam people

2023-03-09
iProUP
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning tools (e.g., ElevenLabs) to create fraudulent phone calls that have caused millions of dollars in losses. The AI system's use is central to the scam method, directly causing harm to individuals through deception and theft. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm (financial loss and privacy breaches).

Phone scams with artificial intelligence: identical voices

2023-03-07
Urgente 24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice cloning technology used to create synthetic voices nearly identical to real people, which scammers use to deceive victims into transferring money. The harm is actual and ongoing, with reported financial losses and emotional impact on victims. The AI system's use directly leads to these harms, fulfilling the criteria for an AI Incident. The involvement of AI in the scam is central to the event, not speculative or potential, and the harm is clearly articulated.

This is how artificial intelligence is being used to carry out phone scams

2023-03-10
Venezuela al dia
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning AI) in the commission of telephone scams that have directly led to financial harm to victims. The AI system's use is malicious and instrumental in causing the harm. Therefore, this qualifies as an AI Incident under the definition, as the AI system's use has directly led to harm to persons (financial loss) and communities (harm from fraud).

FTC Warns: Scammers Are Using Voice Clones to Steal Your Money

2023-03-23
The Motley Fool
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of advanced AI to clone voices, which is an AI system. The scammers' use of this AI system directly leads to financial harm to victims through fraudulent calls. This constitutes harm to persons (financial harm) caused directly by the use of an AI system, fitting the definition of an AI Incident.

That panicky call from a relative? It could be a thief using a voice clone, FTC warns

2023-03-22
NPR
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology to create realistic impersonations of family members or authority figures, which scammers use to trick victims into sending money. This constitutes direct harm to people (financial loss), fulfilling the criteria for an AI Incident. The AI system's use is central to the harm occurring, as the scam relies on the AI-generated voice to deceive victims. Therefore, this event qualifies as an AI Incident.

Panicked call from a relative could be scammers cloning their voice, FTC warns

2023-03-23
FOX 35 Orlando
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning to impersonate family members in scam calls, leading to financial harm to victims. This involves the use of an AI system (voice cloning) in a malicious way that directly causes harm (financial loss) to individuals. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Is Your Kid Really in Trouble? Beware Family Emergency Voice-Cloning Scams

2023-03-20
PC Magazine
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI voice-cloning systems to impersonate family members in phone scams, which has directly led to financial harm to victims. The AI system's use is central to the scam's effectiveness, causing harm to individuals (financial loss) and communities (trust erosion). The article provides concrete examples of such scams occurring and resulting in harm, fulfilling the criteria for an AI Incident. The involvement of AI is explicit and the harm is realized, not just potential.

Scammers are using AI voice cloning tools to dupe victims

2023-03-21
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as voice cloning tools used by scammers to impersonate individuals and commit fraud. The harm is direct financial loss to victims, which is a significant harm to persons. The article details how the AI system's use leads to this harm, fulfilling the criteria for an AI Incident. The involvement is through malicious use of AI-generated voice impersonation causing realized harm.

Is Your Kid Really in Trouble? Beware Family Emergency Voice-Cloning Scams

2023-03-20
PCMag UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-powered voice-cloning software to impersonate family members, which is an AI system. The misuse of this AI system has directly led to financial harm to victims, fulfilling the criteria for an AI Incident under harm to persons (financial harm) and communities (victims of scams). The event involves the use of AI systems leading to realized harm, not just potential harm, so it qualifies as an AI Incident rather than a hazard or complementary information.

Cyber criminals are using AI voice cloning tools to dupe victims

2023-03-21
CBS News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice cloning tools being used by criminals to impersonate victims and commit scams, resulting in financial harm to individuals. The AI system's use is central to the harm, as the convincing voice clones enable the scam. This meets the definition of an AI Incident because the AI system's use has directly led to harm to people (financial loss and deception). The article also discusses the development and misuse of these AI tools, confirming the AI system's involvement in causing harm.

Cybercriminals are using AI voice cloning tools to dupe victims

2023-03-22
CBS News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI voice cloning systems (e.g., ElevenLabs, Vall-E) to commit fraud by impersonating individuals' voices to scam money from their relatives. This is a direct use of AI systems causing harm to people through deception and financial loss, which qualifies as harm to communities and individuals. The article details actual scams occurring, not just potential misuse, thus it meets the criteria for an AI Incident rather than a hazard or complementary information.

US government warns billions of Android and iPhone users over bank-emptying AI

2023-03-20
The US Sun
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning programs to mimic voices of family members, which is an AI system. The scam causes direct harm to people by tricking them into sending money, fulfilling the harm criteria (a) injury or harm to persons, here financial harm. The FTC warning indicates that this harm is occurring or has occurred, making this an AI Incident rather than a potential hazard or complementary information.

The FTC Warns of a 'Terrifying' Phone Scam Driven By AI

2023-03-23
Newser
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (voice cloning programs) to commit fraud, leading to direct financial harm to victims. The scam's reliance on AI-generated voice clones to deceive people and cause monetary loss fits the definition of an AI Incident, as the AI system's use has directly led to harm to persons. The article reports realized harm (millions lost) rather than just potential harm, so it is not merely a hazard or complementary information.

Scammers now using AI to improve their family emergency schemes

2023-03-21
ConsumerAffairs
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning AI) to generate realistic imitations of a person's voice, which scammers use to perpetrate fraud. This use of AI directly leads to harm, specifically financial harm to victims of these scams. The article details how the AI system's outputs are exploited maliciously, causing real harm to people. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to individuals (financial loss), fitting the definition of harm to persons or groups of people.

JERRY DAVICH: New twist on old phone scam uses artificial intelligence: 'Don't trust the voice'

2023-03-24
nwi.com
Why's our monitor labelling this an incident or hazard?
The event involves the malicious use of an AI system (voice cloning AI) to impersonate a family member's voice in an attempted financial fraud. This establishes a direct link between the AI system's use and harm to the targeted individual. Since the scam attempt occurred and was reported, this qualifies as an AI Incident under the definition of harm to persons through fraudulent activity enabled by AI misuse.

Scammers use AI to enhance their family emergency schemes

2023-03-20
Consumer Advice
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning) in the scammer's scheme, which directly leads to harm (financial loss and emotional distress) to victims. The AI system's use is malicious and causes realized harm, meeting the criteria for an AI Incident. The article describes actual harm occurring due to AI misuse, not just potential harm or general information, so it is not an AI Hazard or Complementary Information.