Scammers used AI voice cloning to impersonate a Georgia woman's daughter, convincing her that her child had been kidnapped and demanding a $50,000 ransom. The realistic cloned voice caused the woman severe emotional distress, reportedly bringing her close to a heart attack, and highlights the dangers of AI-enabled impersonation scams.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI to clone a person's voice from a short audio snippet, which was then used to deceive the victim into believing her daughter had been kidnapped. The AI system's role was pivotal: the convincing impersonation it enabled led directly to harm, in the form of severe emotional distress and attempted financial extortion. Because the use of an AI system directly caused harm, this event qualifies as an AI Incident.[AI generated]