AI Voice Replication Used in Fake Kidnapping Scams

Scammers in Washington and Arizona are using AI voice-replication technology to mimic children's voices, deceiving families into believing their children have been kidnapped and then demanding ransoms. The FBI reports a rise in such scams, particularly those targeting non-English-speaking families. Highline Public Schools has alerted parents about these incidents.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (a voice-replication model) was used maliciously to generate realistic audio of victims’ relatives, directly facilitating extortion and causing emotional trauma. The harm has occurred, making this an AI Incident.[AI generated]
AI principles
Accountability; Fairness; Human wellbeing; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability

Industries
Digital security; Media, social platforms, and marketing; Education and training

Affected stakeholders
Children; General public

Harm types
Psychological; Economic/Property; Human or fundamental rights

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard

Parents warned of disturbing kidnapping scheme using kids' voice replicas

2024-10-11
Fox News

Parents warned of disturbing kidnapping scheme using kids' voice replicas

2024-10-11
New York Post
Why's our monitor labelling this an incident or hazard?
The event describes actual, malicious use of AI to generate realistic voice clones for a kidnapping scam that caused victims to send money. This constitutes a realized harm (financial loss, emotional trauma) directly enabled by an AI system’s outputs, meeting the criteria for an AI Incident.

AI Voice Cloning Escalates Kidnapping Scams in the US

2024-10-13
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The article describes multiple realized crimes in which AI-generated audio recordings of family members’ voices were used to extort money and terrorize victims. This is a malicious use of an AI system (voice-cloning) that directly led to harm (financial loss, psychological trauma), so it qualifies as an AI Incident.

Law Enforcement Today

2024-10-14
Law Enforcement Today
Why's our monitor labelling this an incident or hazard?
Scammers employed advanced AI voice replication to deceive families into believing their children had been kidnapped and extort ransom payments. The AI system’s use directly enabled the harm (fraud, psychological trauma), making this an AI Incident.

Alert for parents: Beware of alarming plan where children's voices are replicated for kidnapping.

2024-10-11
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated voice replication was used by scammers to impersonate family members in kidnapping scams, leading victims to transfer money under false pretenses. This is a direct use of an AI system that caused harm (financial loss, emotional trauma) to individuals and communities. The AI's role in executing the scam, together with the realized harm, meets the criteria for an AI Incident rather than a hazard or complementary information.