
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
In Lichtenfels, Germany, scammers used AI-generated voices to impersonate a woman's daughter during phone calls, falsely claiming that the daughter had caused a fatal accident and demanding a €45,000 ransom. The AI-enabled deception targeted both the elderly woman and her son, causing psychological distress and attempted financial harm.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly states that the voices of the victim's relatives were faked using AI during the scam call. This use of AI voice cloning directly enabled the fraud attempt, a form of harm to the individual (financial and psychological). Although the scam was detected and prevented, the malicious use of the AI system led to an incident of attempted harm. This therefore qualifies as an AI Incident under the definition of harm caused by the use of an AI system.[AI generated]