AI-Generated Voice Cloning Fuels Surge in Phone Scams and Deepfake Fraud


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminals are increasingly using generative AI systems to clone the voices of loved ones and public figures, enabling realistic phone scams and deepfake videos. These AI-driven deceptions have caused financial loss, emotional distress, and widespread misinformation, highlighting the growing harm from the malicious use of AI voice and video synthesis.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (deepfake voice generation) in the malicious use phase, directly leading to harm to individuals through fraud and emotional distress, which qualifies as harm to persons (a). The article provides concrete examples of such incidents occurring, including a mother receiving a fake call from a deepfaked voice of her daughter asking for ransom. This meets the criteria for an AI Incident because the AI system's use has directly led to realized harm. The article does not merely warn of potential harm or discuss responses, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system is central to the harm described.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy; Human wellbeing

Industries
Digital security; Media, social platforms, and marketing; Financial and insurance services; Consumer services

Affected stakeholders
Consumers; General public

Harm types
Economic/Property; Psychological; Public interest; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Artificial intelligence: Scams using familiar voices are on the rise - "Mom, they kidnapped me" | in.gr

2023-07-15
in.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake voice generation) in the malicious use phase, directly leading to harm to individuals through fraud and emotional distress, which qualifies as harm to persons (a). The article provides concrete examples of such incidents occurring, including a mother receiving a fake call from a deepfaked voice of her daughter asking for ransom. This meets the criteria for an AI Incident because the AI system's use has directly led to realized harm. The article does not merely warn of potential harm or discuss responses, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system is central to the harm described.

Artificial intelligence: Scams using familiar voices - "Mom, they kidnapped me" | Alfavita

2023-07-15
Alfavita
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems capable of generating realistic voice deepfakes, which are used maliciously in phone scams to deceive victims into paying money or panicking. This constitutes direct harm to individuals (emotional harm and financial fraud), fitting the definition of an AI Incident. The article provides concrete examples of such incidents occurring, not just potential risks, confirming the realized harm caused by AI misuse.

Artificial intelligence: Scams using familiar voices are on the rise - "Mom, they kidnapped me"

2023-07-15
ΠΟΛΙΤΗΣ
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake voice generation) used maliciously to impersonate loved ones in phone scams, directly causing harm to people (financial loss, emotional harm). The AI system's use is central to the incident, fulfilling the criteria for an AI Incident due to realized harm (fraud, deception). The article details actual occurrences of such scams, not just potential risks or general information, so it is not a hazard or complementary information but an AI Incident.

Artificial intelligence: the increasingly frequent use of cloned familiar voices for phone scams (and how to protect yourself) - BBC News Mundo

2023-07-14
BBC
Why's our monitor labelling this an incident or hazard?
The event involves AI generative systems (deepfake voice cloning) used maliciously to conduct telephone scams that have directly harmed individuals by deceiving them into paying money under false pretenses. The AI system's use is central to the harm, as the realistic voice cloning increases the scam's effectiveness and victim trust. The article provides concrete examples of such incidents, including a mother receiving a fake call from a cloned voice of her daughter demanding ransom. This meets the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities through fraud and deception.

The increasingly frequent use of familiar voices cloned with artificial intelligence to carry out phone scams (and how to protect yourself)

2023-07-16
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI generative systems to create realistic voice deepfakes used in phone scams, which have directly led to harm by deceiving victims into paying ransoms or money under false pretenses. The AI system's use in cloning voices is central to the incident, causing realized harm (financial loss, emotional distress). Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to people and communities.

The increasingly frequent use of familiar voices cloned with artificial intelligence to carry out phone scams (and how to protect yourself)

2023-07-14
EL IMPARCIAL | News from Mexico and the world
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI generative systems to create realistic voice clones used in telephone scams that have caused actual harm to victims, such as emotional distress and financial loss. The AI system's outputs were directly used to deceive and defraud people, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in enabling the scam. Hence, the event is classified as an AI Incident.

Pay attention! This is how artificial intelligence is being used to scam people

2023-07-17
FayerWayer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems to create realistic fake voices and videos for fraudulent purposes, which have already caused harm through scams and misinformation. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people through deception and fraud. The harm is realized, not just potential, and the AI system's role is pivotal in enabling these scams.

The use of familiar voices cloned with artificial intelligence to carry out phone scams

2023-07-17
El Mostrador
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that clone voices using deep learning algorithms trained on audio data. The malicious use of these AI-generated voice clones in phone scams has directly caused harm to individuals by deceiving them into paying ransoms or falling victim to fraud. The article provides concrete examples of such incidents, confirming realized harm. Hence, this qualifies as an AI Incident due to direct harm caused by the use of AI systems in fraudulent activities.

The increasingly frequent use of familiar voices cloned with artificial intelligence to carry out phone scams (and how to protect yourself)

2023-07-14
El Observador
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI generative systems (deepfake voice cloning) to conduct phone scams that have caused real harm to individuals (financial loss, emotional harm). The AI system's outputs are pivotal in enabling the deception, fulfilling the criteria for an AI Incident under harm to persons and communities. The article reports actual incidents, not just potential risks, so it is not merely a hazard or complementary information.

AI scam calls imitating familiar voices are a growing problem - here's how they work

2023-07-18
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems to create audio deepfakes that mimic familiar voices, which are then used in scam calls to deceive victims. These scams have directly caused harm by tricking people into paying money under false pretenses, fulfilling the criteria of injury or harm to persons and harm to communities. The AI system's use is central to the incident, as the deepfake voice technology enables the scam's effectiveness and increases the risk of victim compliance. Hence, this is an AI Incident due to realized harm caused by AI misuse.

What is a deepfake fraud? How can we stay safe from it?

2023-07-18
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create convincing impersonations of victims' friends or family, which directly caused financial losses to the victims. This meets the definition of an AI Incident because the AI system's use directly led to harm (financial loss) to persons. The article also provides contextual information on the technology and safety measures, but the primary focus is on the realized harm caused by AI deepfake fraud.

AI scam calls imitating familiar voices are a growing problem - here's how they work | Technology

2023-07-18
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create realistic voice deepfakes used in scam calls, which have directly caused harm to victims through deception and financial exploitation. The AI system's development and use are central to the scam's success, fulfilling the criteria for an AI Incident due to realized harm to people and communities. The article also provides concrete examples of such incidents, confirming that harm has occurred rather than being merely potential.

Deepfake AI scam calls: How to protect yourself

2023-07-18
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for audio deepfakes) in scam calls that have directly led to harm (monetary loss and emotional distress). The AI system's misuse is central to the incident, as it enables scammers to convincingly impersonate victims' voices and deceive their family members. This fits the definition of an AI Incident because the AI's use has directly caused harm to people (emotional distress and financial loss).

AI voice scams now a reality - Here's how they work | The Citizen

2023-07-18
The Citizen
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create audio deepfakes that have been used in scam calls to impersonate loved ones and extort money. This directly leads to harm to individuals (financial and emotional harm) and communities (increased misinformation and scams). The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as specific scam incidents are described.

Deep Fake Scam is A Growing Issue: Know How They Operate

2023-07-19
DATAQUEST
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to produce convincing fake audio of individuals' voices, which is then used in scams such as virtual kidnappings to defraud victims. This constitutes direct harm to people through deception and financial loss, fulfilling the criteria for an AI Incident under harm to people and communities. The article details how these AI systems are used maliciously and the resulting harm, not just potential harm or general AI news, so it is classified as an AI Incident.

Deepfake AI Scam Calls Are a Growing Problem - Here's How They Work

2023-07-18
Gadgets 360
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI systems used to create deepfake audio mimicking voices of victims' loved ones, which has directly led to scam calls causing financial and emotional harm. The AI system's role is pivotal in enabling these scams by producing convincing fake voices, leading to realized harm (financial loss, emotional distress). This fits the definition of an AI Incident as the AI system's use has directly led to harm to persons and communities. The article does not merely warn of potential harm but reports actual incidents, confirming realized harm.

Deepfake AI Scam Calls Are a Growing Problem - Here's How They Work - Pehal News

2023-07-18
Pehal News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of generative AI systems to create deepfake audio that impersonates individuals' voices. The misuse of these AI systems has directly caused harm to people through scams involving ransom demands and financial fraud, fulfilling the criteria for harm to persons. The article details actual incidents where these harms have occurred, not just potential risks. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.