AI Voice Cloning Scam Defrauds Florida Woman of $15,000

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Scammers used AI to clone the voice of a Florida woman's daughter, convincing her that her daughter was in legal trouble and needed $15,000 for bail. The realistic AI-generated voice led the victim to transfer the money, resulting in significant financial and emotional harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The scammer used AI to generate a near-perfect replica of the daughter's voice, which directly led to the victim being defrauded of $15,000. The AI system's use in cloning the voice was pivotal in causing the harm, fulfilling the criteria for an AI Incident due to direct financial and emotional harm to the victim.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property, Psychological, Human or fundamental rights

Severity
AI incident

AI system task:
Content generation

In other databases

Articles about this incident or hazard

Florida woman scammed out of $15K by AI cloning her daughter's voice:...

2025-07-17
New York Post
Why's our monitor labelling this an incident or hazard?
The scammer used AI to generate a near-perfect replica of the daughter's voice, which directly led to the victim being defrauded of $15,000. The AI system's use in cloning the voice was pivotal in causing the harm, fulfilling the criteria for an AI Incident due to direct financial and emotional harm to the victim.

Woman conned out of $15K after AI clones daughter's voice

2025-07-17
The Independent
Why's our monitor labelling this an incident or hazard?
The scammers used AI to replicate the daughter's voice, which is an AI system performing voice cloning. This AI-enabled deception directly caused financial harm (loss of $15,000) and emotional distress, fulfilling the criteria for an AI Incident under harm to persons and communities. The AI system's use was central to the scam's success, making this a direct AI Incident rather than a hazard or complementary information.

AI Scam Tricks Woman Out of $15K; Tech Expert Shares Insights - Internewscast Journal

2025-07-16
internewscast.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the scam was likely powered by AI capable of mimicking voices, which was used to trick the victim into handing over $15,000. This is a direct harm to the victim's property (financial loss) caused by the malicious use of an AI system. Therefore, it qualifies as an AI Incident under the definition of harm to property caused by AI misuse.

AI scheme cons woman out of $15K, tech consultant weighs in

2025-07-16
WFLA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the scam was likely powered by AI capable of mimicking voices, which was used to impersonate the victim's daughter and convince the mother to send $15,000. This is a direct use of AI leading to harm (financial loss) through malicious use. Therefore, it qualifies as an AI Incident due to the realized harm caused by the AI system's misuse in voice synthesis for fraud.

Dover woman loses $15K after scammers used artificial intelligence to impersonate daughter

2025-07-18
FOX 13 Tampa Bay
Why's our monitor labelling this an incident or hazard?
The article explicitly states that scammers used artificial intelligence to clone the daughter's voice, which was instrumental in deceiving the victim and causing her to lose $15,000. This constitutes direct harm to a person through malicious use of an AI system. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-enabled voice cloning scam.

Distraught woman reveals how scammers conned her out of $15,000

2025-07-18
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The scammers used AI software to imitate the daughter's voice, which directly led to the victim losing $15,000. This constitutes harm to the person (financial loss and emotional trauma). The AI system's use was central to the scam's success, making it an AI Incident under the definition of harm caused by the use of an AI system.

Retired Mother Gets Scammed Out of $15K After Hearing an AI-Generated Voice that Matched Her Daughter's

2025-07-18
The Inquisitr
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating a convincing voice impersonation was pivotal in enabling the scam, directly causing financial harm and emotional trauma to the victim. The event meets the criteria for an AI Incident because the AI system's use directly led to harm to a person (financial loss and emotional distress).

Woman conned out of $15,000 after AI used to clone daughter's voice

2025-07-19
Irish Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the daughter's voice was AI-cloned and used by scammers to impersonate her, leading to the mother being conned out of $15,000. This constitutes direct harm caused by the use of an AI system (voice cloning) in a malicious context, fulfilling the criteria for an AI Incident due to harm to a person (financial loss and emotional distress).

Sharon Brightwell Florida mom duped out of $15K in AI scam

2025-07-19
Scallywag and Vagabond
Why's our monitor labelling this an incident or hazard?
The article explicitly states that artificial intelligence was used to clone the daughter's voice, which was pivotal in convincing the victim to transfer money. This use of AI directly caused financial harm, fulfilling the criteria for an AI Incident under harm to persons (financial harm) and harm to communities (scam impact). The involvement of AI in the scam is central to the incident, not speculative or potential, and the harm has already occurred.

Woman Conned Out of $15K After AI Cloned Her Daughter's Voice in Terrifying Scam: 'I Broke Down'

2025-07-20
PEOPLE.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the scam involved AI-cloned audio of the daughter's voice, which was used to trick the mother into sending money. The AI system's use directly caused financial harm and emotional distress, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's involvement is central to the incident. Therefore, this event is classified as an AI Incident.

'My daughter rang me in tears after horror car crash - then grim lies unravelled' - The Mirror

2025-07-21
Mirror
Why's our monitor labelling this an incident or hazard?
The use of AI voice cloning technology to impersonate a family member and commit fraud directly caused financial harm to the victim. The AI system's use was central to the scam's success, fulfilling the criteria for an AI Incident due to realized harm (financial loss) and violation of rights (fraud).

Mum cruelly conned by her own daughter's distress call

2025-07-21
Perth Now
Why's our monitor labelling this an incident or hazard?
The use of AI to clone the daughter's voice constitutes the involvement of an AI system. The scam directly caused financial harm (a loss of $23,000, as reported by this outlet) and emotional harm to the mother and family. The AI system's use was central to the scam's success, making this an AI Incident, as the harm has already occurred and is directly linked to the AI system's misuse.

Florida woman conned out of $15K after AI clones daughter's voice

2025-07-20
WKRN News 2
Why's our monitor labelling this an incident or hazard?
The article describes an AI-powered voice cloning used in a scam that caused direct financial harm to the victim. The AI system was used maliciously to impersonate the victim's daughter, leading to a fraudulent demand for money and resulting in actual financial loss. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (financial harm).

AI voice cloning scam tricks concerned mom into sending ₹12.5 lakh: What you must watch out for

2025-07-22
HT Tech
Why's our monitor labelling this an incident or hazard?
The scam involved the use of AI voice cloning technology to impersonate a family member, which directly caused the victim to lose a significant amount of money. The AI system's role was pivotal in enabling the fraud, fulfilling the criteria for an AI Incident as it led to harm to a person (financial loss and emotional distress). The event is not merely a potential risk or a general update but a realized harm caused by AI misuse.

Phone fraud in England: they cloned her daughter's voice with AI and scammed her out of $15,000

2025-07-23
BioBioChile
Why's our monitor labelling this an incident or hazard?
The use of AI voice cloning technology to impersonate a person and commit fraud directly caused financial harm to the victim. The AI system's use in generating the fake voice was pivotal to the scam's success, fulfilling the criteria for an AI Incident involving harm to a person (financial harm).

She gets a phone call and believes her daughter has had an accident; by the time she discovers the scam it is too late: an AI cloned her voice

2025-07-23
as
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the scammers used AI to clone the victim's daughter's voice to convincingly impersonate her and deceive the victim into transferring money. This is a direct use of AI technology leading to realized harm (financial loss) to a person. The AI system's involvement is clear and central to the incident, and the harm is actual and significant. Therefore, this event qualifies as an AI Incident.

Her daughter called her begging for help in Florida: it was actually an AI that scammed her out of $15,000

2025-07-21
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The incident describes a fraud where AI-generated cloned voice was used to impersonate a family member, leading to a significant financial loss. The AI system's involvement in generating the cloned voice was pivotal in enabling the scam, which caused direct harm to the victim's property (money). Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in the scam.

They clone her daughter's voice with AI and scam her out of $15,000: the case raising alarm about new technology-enabled fraud

2025-07-21
RPP noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI voice cloning was used to replicate the victim's daughter's voice, which was pivotal in deceiving the victim and causing financial loss. This constitutes direct harm to a person (financial harm) through malicious use of an AI system. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm.

They posed as her daughter using AI and took $15,000 from her in minutes: the scam against a mother in the US

2025-07-21
Noticias de Venezuela y el Mundo - Caraota Digital
Why's our monitor labelling this an incident or hazard?
The use of AI voice cloning technology to impersonate the victim's daughter directly caused financial harm by tricking the victim into handing over $15,000. This is a clear case where the AI system's use led to realized harm (financial loss), fitting the definition of an AI Incident involving violation of property rights and harm to the individual. The AI system's role was pivotal in enabling the scam's success.