AI Voice Cloning Used in Fake Kidnapping Scam Causes Panic in Georgia Family

Scammers used AI-generated voice cloning to impersonate a Georgia woman's daughter, convincing her that her child had been kidnapped and demanding a $50,000 ransom. The realistic AI voice caused severe emotional distress and a near heart attack, highlighting the dangers of AI-enabled impersonation scams.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI to clone a person's voice from a short audio snippet, which was then used to deceive the victim into believing her daughter had been kidnapped. This is a clear example of an AI system's use leading directly to harm (emotional distress and potential financial loss). The AI system's role was pivotal in enabling the scam's convincing impersonation, which caused the harm; this event therefore qualifies as an AI Incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Digital security; Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Psychological; Physical (injury); Economic/Property; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Mother tells of panic after scammers use AI to impersonate daughter

2023-07-20
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone a person's voice from a short audio snippet, which was then used to deceive the victim into believing her daughter had been kidnapped. This is a clear example of an AI system's use leading directly to harm (emotional distress and potential financial loss). The AI system's role was pivotal in enabling the scam's convincing impersonation, which caused the harm; this event therefore qualifies as an AI Incident.

Georgia mother 'almost' suffers 'heart attack' after ransom caller demands $50,000 for daughter

2023-07-19
Fox News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the scam used an AI-generated voice clone to impersonate the daughter, which directly caused emotional harm to the mother. The AI system's use in the scam is a direct factor in the harm experienced. The harm is psychological distress, which falls under injury or harm to a person. Hence, this event meets the criteria for an AI Incident.

Scammers trick mom using AI tech to think daughter was kidnapped,...

2023-07-20
New York Post
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning AI) to impersonate a person's voice, which directly led to emotional distress for the mother and an attempted ransom demand, constituting clear harm to the individual and potential financial harm. The AI system's use was malicious and directly caused the incident. Therefore, this qualifies as an AI Incident under the definition of harm to a person or group of people caused directly or indirectly by the use of an AI system.

Mum's chilling warning to parents over new AI ransom scam as girl impersonated

2023-07-20
Mirror
Why's our monitor labelling this an incident or hazard?
The scam involved the use of AI to generate a convincing voice clone of the victim's daughter, which directly caused emotional harm and an attempted ransom demand. The AI system's role was pivotal in enabling the impersonation and the scam. The harm is realized and not just potential, meeting the criteria for an AI Incident under harm to persons and communities.

Mum's 'sheer panic' after scammers faked daughter's kidnapping

2023-07-21
honey.nine.com.au
Why's our monitor labelling this an incident or hazard?
The scammers used AI voice cloning technology to generate a convincing fake voice of the daughter, which directly caused emotional harm to the mother and enabled an attempted financial scam. Because the AI system's malicious use directly led to realized harm (psychological distress and attempted extortion), the event meets the criteria for an AI Incident rather than a hazard or complementary information.

'It was sheer panic' | Cherokee County mother receives call from scammer impersonating her daughter, asking for $50K

2023-07-18
WXIA-TV 11
Why's our monitor labelling this an incident or hazard?
The scam involved the use of AI to generate a voice clone that impersonated the victim's daughter, directly leading to emotional harm and an attempted financial scam. The AI system's use here is malicious and caused realized harm (panic, emotional distress, and attempted fraud). Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-enabled impersonation scam.

Debbie Moore: Georgia Mother Nearly Suffers a Heart Attack After AI-generated Phone Call Stages Daughter's Kidnap

2023-07-20
Dekh News
Why's our monitor labelling this an incident or hazard?
The incident explicitly involves an AI system used to clone a human voice to perpetrate a scam that caused direct harm to a person (psychological distress and a near heart attack). The AI system's use was malicious and directly led to harm, fitting the definition of an AI Incident under harm to health (a).

Georgia Mother Almost Loses $50k After Scammers Use AI to Mimic Daughter's Voice in Fake Kidnapping Plot [Video]

2023-07-21
Baller Alert
Why's our monitor labelling this an incident or hazard?
The scammers used AI-generated voice synthesis to impersonate the daughter, which directly led to an attempted financial fraud and emotional harm to the mother. Although the scam was ultimately unsuccessful, the AI system's use was pivotal in enabling the deception, creating a plausible risk of significant financial loss and causing real emotional distress. This fits the definition of an AI Incident, as the AI system's use directly led to harm or attempted harm.

Artificial intelligence phone call scam scares family

2023-07-18
https://www.atlantanewsfirst.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the scam involved AI-generated voice technology to impersonate a loved one, which directly caused harm by inducing fear and attempting to extort money. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (psychological distress) and potential financial harm. The scam is a realized harm, not just a potential risk, so it is not an AI Hazard. It is not merely complementary information or unrelated news, as the AI system's malicious use caused direct harm.