AI Voice Cloning Used in Fraudulent Scams in Turkey


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Kayseri University cybersecurity expert Dr. Ali Gezer warned that fraudsters in Turkey are using AI-based neural networks to clone voices and commit scams. By capturing voice samples from social media or phone calls, attackers impersonate victims to deceive others into transferring money or sharing sensitive information.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of AI systems (neural network-based voice cloning) to clone voices and commit fraud, which directly leads to harm (financial and emotional harm to victims). The expert warns about actual cases and methods used by attackers, indicating that harm is occurring or has occurred. Therefore, this is an AI Incident due to the realized harm caused by the malicious use of AI voice cloning technology.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety; Robustness & digital security; Transparency & explainability

Industries
Digital security

Affected stakeholders
General public

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Beware of voice cloning fraud using artificial intelligence! A warning from an expert!

2025-12-18
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (neural networks for voice cloning) that can be exploited by criminals to commit fraud, which is a form of harm to individuals (harm to persons through deception and potential financial or reputational damage). Although the article is a warning and does not describe a specific realized incident, the described use of AI for voice cloning fraud constitutes a credible risk of harm. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving harm through fraud.

Expert's warning about "voice cloning fraud using artificial intelligence" - Sözcü Gazetesi

2025-12-18
Sözcü Gazetesi
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems (neural network-based voice cloning) to clone voices and commit fraud, which directly leads to harm (financial and emotional harm to victims). The expert warns about actual cases and methods used by attackers, indicating that harm is occurring or has occurred. Therefore, this is an AI Incident due to the realized harm caused by the malicious use of AI voice cloning technology.

Expert's warning about "voice cloning fraud using artificial intelligence"

2025-12-18
Haberler
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (neural networks for voice cloning) used maliciously to commit fraud, causing direct harm to people (financial loss, emotional distress) and potentially to organizations. The article details how the AI system's outputs (cloned voices) are used in scams, fulfilling the criteria for an AI Incident under the definitions provided. The harm is realized, not just potential, and the AI system's role is pivotal in enabling the fraud.

Expert's warning about "voice cloning fraud using artificial intelligence"

2025-12-18
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (neural networks for voice cloning) used maliciously to commit fraud, causing direct harm to people by impersonation and deception. The article details how the AI system's outputs (cloned voices) are used to trick victims into transferring money or releasing sensitive information, fulfilling the criteria for an AI Incident due to realized harm. The presence and use of AI in the fraud are clear and central to the harm described.

Expert's warning about "voice cloning fraud using artificial intelligence" - Kayseri Haberleri

2025-12-18
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (artificial neural networks) for voice cloning, which is an AI system. The expert warns that fraudsters are using this technology to clone voices and commit scams, indicating a credible risk of harm to individuals through deception and fraud. However, the article does not describe a specific case in which harm has already occurred, only the potential for such harm, so it fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is the warning about potential harm rather than updates or responses to past incidents, and it is not Unrelated because the event is clearly AI-related and involves plausible harm.

Warning About Voice Cloning With Artificial Intelligence

2025-12-18
Son Dakika
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (neural networks for voice cloning) in fraudulent activities that have directly led to harm, namely financial scams and the deception of individuals. The AI system's role in cloning voices is central to enabling these harms. Therefore, this constitutes an AI Incident because the use of the AI system has directly led to violations of rights and harm to individuals through fraud.