AI-Generated Voice Used in Scam Call to Defraud Elderly Woman in Germany

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

In Lichtenfels, Germany, scammers used an AI-generated voice to impersonate a woman's daughter during phone calls, falsely claiming the daughter had caused a fatal accident and demanding €45,000 as purported bail. The AI-enabled deception targeted both the elderly woman and her son, causing psychological distress and attempting to inflict financial harm. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that the voices of the victim's relatives were faked using AI technology during the scam call. This use of AI voice cloning directly enabled the fraud attempt, which is a form of harm to the individual (financial and psychological). Even though the scam was detected and prevented, the AI system's malicious use led to an incident of attempted harm. Therefore, this qualifies as an AI Incident under the definition of harm caused by the use of an AI system.[AI generated]
AI principles
Accountability, Safety, Transparency & explainability, Robustness & digital security, Human wellbeing, Respect of human rights

Industries
Digital security

Affected stakeholders
General public, Women

Harm types
Psychological, Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Bayern: Betrug mit KI-Stimme? Schockanruf bei Seniorin ("Bavaria: fraud with an AI voice? Shock call to elderly woman")

2025-09-06
N-tv
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the voices of the victim's relatives were faked using AI technology during the scam call. This use of AI voice cloning directly enabled the fraud attempt, which is a form of harm to the individual (financial and psychological). Even though the scam was detected and prevented, the AI system's malicious use led to an incident of attempted harm. Therefore, this qualifies as an AI Incident under the definition of harm caused by the use of an AI system.
Kriminalität: Betrug mit KI-Stimme? Schockanruf bei Seniorin ("Crime: fraud with an AI voice? Shock call to elderly woman")

2025-09-06
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the voices of the relatives were faked using AI technology during the phone call, which was a key factor in the attempted fraud. This constitutes a direct use of an AI system to cause harm (financial fraud attempt) to a person. Even though the harm was averted, the AI system's involvement in the fraudulent scheme is clear and the event qualifies as an AI Incident due to the direct link to an attempted harm involving deception and potential financial loss.
Schockanruf bei Seniorin in Oberfranken: Betrügerin fälscht Stimme der Tochter mit KI ("Shock call to elderly woman in Upper Franconia: fraudster fakes daughter's voice with AI")

2025-09-06
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate synthetic voices of the victim's relatives, which was central to the scam attempt. This use of AI directly led to an attempt to cause financial harm and emotional distress to the victims. Even though the scam was ultimately unsuccessful, the AI system's misuse was a direct factor in the event. Therefore, this qualifies as an AI Incident due to the realized attempt at harm through AI-enabled voice forgery.
Betrug mit KI-Stimme? Schockanruf bei Seniorin ("Fraud with an AI voice? Shock call to elderly woman")

2025-09-06
stern.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to synthesize or imitate the voices of the victim's relatives during the phone call. This AI-generated voice manipulation was used maliciously to induce the victim to pay a large sum of money, constituting an attempt at direct financial harm to the person targeted. Although the scam was ultimately unsuccessful, the AI system's role in the fraud attempt directly caused psychological distress. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in a fraudulent attempt to cause harm.
KI-Stimme von Tochter? Bei Schockanruf 45.000 Euro gefordert ("Daughter's voice faked with AI? €45,000 demanded in shock call")

2025-09-06
Bayerischer Rundfunk
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the scammers used AI to fake the voice of the daughter, which was played during the phone calls to convince the elderly victims. This use of AI directly facilitated a fraud attempt that could cause significant financial and psychological harm to the victims. Although the victims did not fall for the scam, the event involves realized malicious use of AI with direct potential for harm, fitting the definition of an AI Incident.
Schockanruf mit KI-Stimme von Tochter? Betrüger fordern 45.000 Euro "Kaution" ("Shock call with AI-generated voice of daughter? Fraudsters demand €45,000 'bail'")

2025-09-06
TAG24
Why's our monitor labelling this an incident or hazard?
The use of AI to generate the voices of the victims' relatives was central to the scam, enabling the fraudsters to convincingly impersonate them and demand a large sum. This directly led to attempted financial harm (a form of harm to persons) and to psychological distress for the victims. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through fraudulent activity.
Oberfranken: Betrug mit KI-Stimme? Schockanruf bei Seniorin ("Upper Franconia: fraud with an AI voice? Shock call to elderly woman")

2025-09-06
Frankenpost
Why's our monitor labelling this an incident or hazard?
The article implies the use of AI-generated voice technology to impersonate the daughter during the scam call, which directly led to an attempt to defraud and emotionally harm the elderly woman. Since the AI system's use in generating the voice is central to the event and an attempt at harm was realized, this qualifies as an AI Incident under the framework.
Kriminalität: Betrug mit KI-Stimme? Schockanruf bei Seniorin ("Crime: fraud with an AI voice? Shock call to elderly woman")

2025-09-06
Neue Presse Coburg
Why's our monitor labelling this an incident or hazard?
The use of an AI-generated or AI-manipulated voice to impersonate a family member in a scam call constitutes the use of an AI system in a harmful way. This directly led to psychological harm (shock) and financial harm (attempted extortion) to the victim. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in the scam.