AI Voice Cloning Used in Silent Call Phone Scams in France

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A wave of phone scams in France involves scammers using AI-powered voice cloning. After recording victims' voices during silent calls, fraudsters use AI to clone these voices and impersonate victims, deceiving their contacts into transferring money. This malicious use of AI has led to financial and privacy harms.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system that processes recorded voice samples to generate a cloned voice, which is then used by scammers to impersonate victims and request money from their contacts. This use of AI directly leads to harm through fraud and deception, fitting the definition of an AI Incident. The article describes realized harm (the scam) and the AI's pivotal role in enabling it.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security
Financial and insurance services

Affected stakeholders
General public

Harm types
Economic/Property
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

You answer, no one speaks... but your voice is recorded: beware of the silent call scam

2026-05-04
Ouest France
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that processes recorded voice samples to generate a cloned voice, which is then used by scammers to impersonate victims and request money from their contacts. This use of AI directly leads to harm through fraud and deception, fitting the definition of an AI Incident. The article describes realized harm (the scam) and the AI's pivotal role in enabling it.

"Hello, hello?": beware of these AI-boosted phone scams, your identity can be stolen

2026-05-05
actu.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated voices used in phone scams to steal identities and commit fraud. The AI system's use directly leads to harm by enabling scammers to clone voices and deceive victims, causing violations of rights and financial harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities through identity theft and fraud.

An unknown number appears, you pick up but no one answers... Behind these silent calls, a scam on the rise

2026-05-06
lindependant.fr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning, which is explicitly mentioned. The AI system's use directly leads to harm through fraud and identity theft, which are violations of rights and cause financial harm to individuals. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm to people through scams and identity fraud.

Beware of this new phone scam: a simple "hello" is all scammers need to clone your voice and try to fool your loved ones

2026-05-04
midilibre.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone voices from recorded calls and then use these cloned voices to deceive and scam people, which is a direct harm to individuals. The AI system's use in generating the cloned voice is central to the scam's success, making this an AI Incident due to realized harm through malicious use of AI-generated voice cloning for fraud.

Silent calls: how scammers use AI to clone your voice

2026-05-04
timeline
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI specialized in voice cloning to reproduce victims' voices from recorded silent calls. This AI system is used maliciously to deceive victims' contacts into transferring money, constituting financial harm and violation of privacy rights. The harm is realized and directly linked to the AI system's use. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to significant harm to individuals and communities.

A simple "hello" can be enough: what is the silent call scam, booming with artificial intelligence?

2026-05-06
Le Parisien
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice synthesis tools) to exploit recorded voice samples obtained via automated calls. The AI system's use directly leads to identity theft and fraud, which are violations of personal rights and cause harm to individuals. The article describes ongoing harm and concrete misuse, not just potential risk. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Receiving calls with no one on the line? Beware of this growing scam that seeks to steal your voice

2026-05-06
Nice-Matin
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning, which is explicitly mentioned. The AI system's use is central to the scam's operation, enabling realistic voice impersonation that leads to financial and privacy harms. These harms have either occurred or are ongoing as part of the scam. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to people through fraud and identity theft.

Silent phone calls: beware, your voice can be cloned!

2026-05-06
ZDNet
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI voice cloning technology being used in phone scams that have already caused harm, such as financial fraud and emotional manipulation. The AI system's use is central to the harm described, fulfilling the criteria for an AI Incident. The harms include violations of rights (fraud, deception), harm to individuals (emotional distress), and potential property/financial harm. The article does not merely warn about potential future harm but documents ongoing and realized harms due to AI misuse.

Why should you never answer "Hello" to an unknown number?

2026-05-06
Génération-NT
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence to clone voices from recorded samples obtained during silent calls. This cloning enables fraudulent activities such as identity theft and extortion, which constitute harm to persons and communities. The AI system's use is central to the harm described, fulfilling the criteria for an AI Incident. The harm is realized or ongoing, not merely potential, as the article warns about actual scams and their consequences. Hence, the event is best classified as an AI Incident.

Warning: AI is supercharging phone scams and threatening your personal data

2026-05-06
24matins.fr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to clone or mimic human voices for malicious purposes, leading to realized harm such as identity theft and fraud. This constitutes a direct harm to individuals' personal data security and financial safety, fitting the definition of an AI Incident. The article details ongoing harm caused by AI-enabled voice spoofing scams, not just potential future risks or general information, so it is classified as an AI Incident.

Silent calls: beware of this new phone scam that clones your voice

2026-05-06
SFR
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone victims' voices from recorded phone calls, which is a clear example of AI system use leading to harm (identity theft, fraud). The harm to individuals' security and privacy constitutes harm to persons. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm through the scam.

Are silent calls a scam to steal your voice and clone it with AI?

2026-05-07
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used to clone voices, which are then used in scams causing financial and emotional harm to victims. The silent calls serve as a method to facilitate these scams by confirming active numbers, indirectly supporting the AI-enabled fraud. Since the harm (financial fraud and emotional distress) is occurring due to the AI system's use, this qualifies as an AI Incident under the framework, specifically harm to persons and communities through deception and fraud.

Beware if you receive a phone call with no one answering: it is a scam with AI lying in wait

2026-05-07
DH.be
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to copy voice characteristics and the use of automated systems to make calls, which indicates AI system involvement. The harm includes fraud, privacy violations, and potential financial loss, which are direct harms to individuals. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through malicious exploitation of voice cloning and automated calling for scams.

Can a simple "hello" trap you? This stealthy scam worries cybersecurity experts

2026-05-07
La Libre.be
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice cloning AI) to exploit voice samples obtained through automated calls. The AI's role is pivotal in enabling the fraudsters to impersonate victims convincingly, which can directly lead to harm such as financial fraud and privacy violations. Since the harm is occurring or highly likely to occur as described, this qualifies as an AI Incident under the framework, specifically relating to violations of rights and harm to individuals through malicious use of AI.

Warning: this phone scam is gaining ground, here is the...

2026-05-07
Futura
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone voices from brief phone call interactions, which is then used to impersonate victims and commit fraud, such as the 'fraud to the president' scam leading to financial losses. This constitutes direct harm to individuals and organizations through identity theft and financial fraud. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The article also discusses mitigation strategies but the primary focus is on the realized harm caused by AI-enabled voice cloning scams.

Silent calls: everything you need to know about this new scam that could cost you dearly

2026-05-07
Europe 1
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to clone victims' voices from silent calls, which is a direct misuse of AI technology causing harm to individuals by enabling further scams and fraud. The article details how the AI system's use in voice cloning is part of the scam mechanism, leading to realized harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through malicious use.

Unanswered calls: this growing scam may already be recording your voice without your knowledge

2026-05-07
Melty.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-powered automated calling systems to identify active phone numbers and record voice samples, which are then exploited by AI voice cloning technologies to commit fraud. This has resulted in actual financial harm to victims, fulfilling the criteria for an AI Incident. The harm is direct and realized, involving violations of personal security and financial loss. Hence, this event qualifies as an AI Incident rather than a hazard or complementary information.

"I just picked up and said hello": beware, this new silent call scam uses AI to clone your voice

2026-05-07
Econostrum
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to clone voices from recorded samples obtained via silent calls. This AI-enabled voice cloning is then used for identity theft and fraud, which constitutes harm to individuals (privacy violations, financial scams). The AI system's use in cloning voices and enabling impersonation directly leads to these harms, qualifying this event as an AI Incident under the definitions provided.

Warning: for one scam, even a simple "hello" is enough

2026-05-07
Economie Matin
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to clone voices from brief audio samples obtained during silent phone calls. This AI-enabled voice cloning is then exploited to commit fraud, causing direct financial harm to victims. The article reports actual realized harm (financial losses) and describes how the AI system's outputs are pivotal to the scam's success. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to property and individuals.

How these scams work with a simple "hello"

2026-05-07
Les Smartgrids
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI for voice cloning and synthetic voice generation to perpetrate phone scams that have already caused financial losses. The AI system's use in capturing voice samples and generating realistic fake voices directly contributes to the harm (fraud and financial theft). This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial harm and violation of trust). The article also discusses regulatory and preventive measures, but the primary focus is on the realized harm caused by AI-enabled scams.

A single "hello" can get you robbed?... Alert over a new AI voice cloning scam

2026-05-07
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-based voice cloning) to commit fraud by replicating victims' voices from short audio samples. The scam leads to direct harm through impersonation and financial fraud, which fits the definition of an AI Incident as the AI system's use directly leads to harm. The article warns about realized and ongoing scams using this technology, not just potential future risks, confirming the classification as an AI Incident rather than a hazard or complementary information.

"Hello" to an unknown number... beware of a new phishing scam targeting voice cloning

2026-05-07
연합뉴스
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used maliciously to clone voices from unsuspecting victims, leading to direct harm such as identity theft, fraud, and security breaches. The article explicitly states that AI is used to replicate voices from short audio samples, which are then exploited for scams and bypassing security systems. This meets the criteria for an AI Incident because the AI system's use has directly led to realized harm (fraud, identity theft risks) and violations of rights (privacy and security).

You said "hello" and the call went dead... why you should beware of "silent calls"

2026-05-07
First-Class 경제신문 파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI for voice cloning, which is explicitly mentioned. The AI system is used maliciously to clone voices from brief audio samples, enabling phishing scams and fraud that harm individuals financially and violate their rights. The article reports that these harms are occurring or have occurred, not just potential risks. Hence, it meets the criteria for an AI Incident due to direct harm caused by AI-enabled voice cloning in phishing attacks.

A call from an unknown number: even answering is off-limits... they are after your voice to clone it

2026-05-07
Wow TV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice replication based on short voice samples obtained through phone calls. This AI-enabled voice cloning is used maliciously to impersonate victims, leading to identity theft and fraud, which are clear harms to individuals' rights and security. Since the harm is occurring or has occurred (voice cloning and subsequent fraud attempts), this qualifies as an AI Incident under the definitions provided.

Said "hello?" to an unknown number... your voice could be cloned

2026-05-08
아시아경제
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to clone human voices from recorded samples obtained through silent calls. The AI's development and use in this context directly contribute to harms such as fraud, identity theft, and potential breaches of privacy and security. Since these harms are either occurring or highly plausible based on the described scam, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to significant harm to individuals and communities.

"Answering an unknown number can land you in big trouble"... even your "hello" can be cloned by AI and exploited for crime

2026-05-08
아이뉴스24
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning, which is explicitly mentioned. The AI system's use in replicating voices is directly linked to harm, including fraud and deception leading to financial and emotional harm to victims. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to persons through criminal misuse.

"Hello?": the chilling reason you should never speak first when answering an unknown number

2026-05-08
문화일보
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deep voice cloning technology) in the malicious use phase, where AI-generated voice replicas are used to commit fraud and financial scams. The harm (financial loss due to voice phishing) is realized and ongoing, as evidenced by reported damages and expert warnings. The AI system's role is pivotal in enabling the scam by replicating voices convincingly, which directly leads to harm to individuals (financial harm) and communities (wider societal impact). Hence, this qualifies as an AI Incident.

One word, "hello?", and then the line goes dead... AI is stealing your voice

2026-05-09
서울신문
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning, which is explicitly mentioned. The AI system's use (voice cloning) is directly linked to ongoing harm in the form of voice phishing scams causing financial losses and emotional harm to victims. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial and psychological harm) and violations of rights (fraud and deception).