AI-Generated Deepfake Scam Defrauds Victims of 410 Million TL in Turkey

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Osmaniye, Turkey, a criminal group used AI to create deepfake images and voices of government officials, promoting fake investment schemes on social media. The scam resulted in 410 million TL in financial losses. Police arrested eight suspects, with five subsequently jailed.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate realistic fake images and voices (deepfake technology) to deceive people into fraudulent investment schemes. This use of AI directly led to significant financial harm (fraud involving 410 million TL) to individuals, which qualifies as harm to property and communities. Therefore, this is an AI Incident because the AI system's use directly caused harm through fraudulent activity.[AI generated]
AI principles
Robustness & digital security; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Investment fraud via artificial intelligence; 8 detained - Sözcü Gazetesi

2026-02-11
Sözcü Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate realistic fake images and voices (deepfake technology) to deceive people into fraudulent investment schemes. This use of AI directly led to significant financial harm (fraud involving 410 million TL) to individuals, which qualifies as harm to property and communities. Therefore, this is an AI Incident because the AI system's use directly caused harm through fraudulent activity.
5 Arrested in the "Fraud Using Artificial Intelligence" Investigation in Osmaniye

2026-02-11
Haberler
Why's our monitor labelling this an incident or hazard?
The use of AI to mimic voices and images of public figures for fraudulent purposes directly led to financial harm to individuals, which constitutes harm to persons and communities. The AI system's use in the scam is central to the incident, fulfilling the criteria for an AI Incident due to realized harm caused by AI-enabled deception and fraud.
AI-powered scam in Osmaniye: 5 arrested over 410 million TL fraud | Video

2026-02-11
Sabah
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to create deepfake content (voice and image impersonation) that was used maliciously to deceive people and cause significant financial harm. This constitutes direct harm to individuals (financial loss) and communities (trust erosion), fulfilling the criteria for an AI Incident. The involvement of AI in the fraudulent activity and the realized harm from the scam clearly classify this as an AI Incident rather than a hazard or complementary information.
5 people arrested for defrauding citizens with artificial intelligence

2026-02-11
Sabah
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technologies to create deepfake content (voice and image imitation) to commit fraud, which directly caused harm to individuals through financial loss. This fits the definition of an AI Incident because the AI system's use directly led to harm to people (financial harm) through deception and fraud.
Fraud Using Artificial Intelligence: 8 Detained - Son Dakika

2026-02-11
Son Dakika
Why's our monitor labelling this an incident or hazard?
The use of AI to mimic individuals' images and voices for fraudulent investment ads constitutes the use of an AI system in a harmful way. The resulting financial fraud and victim losses represent harm to people and communities. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harm through deception and financial damage.
They pulled off a 410 million scam via social media! 5 people arrested

2026-02-11
Türkiye
Why's our monitor labelling this an incident or hazard?
The use of AI technologies to generate fake images and voices of public figures for fraudulent investment schemes constitutes direct involvement of AI in causing harm. The harm here is financial fraud affecting many individuals, which is a significant harm to property and communities. Since the AI system's use directly led to realized harm, this qualifies as an AI Incident under the framework.
AI-powered scam: 410 million TL fraud! They imitated images and voices

2026-02-11
Güneş
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to generate synthetic audio and visual content for fraudulent purposes, leading to significant financial harm to individuals. This meets the criteria for an AI Incident because the AI system's use directly caused harm to property through deception and financial loss. The involvement of AI in the creation of fake content that led to realized harm classifies this as an AI Incident rather than a hazard or complementary information.
5 arrested in Osmaniye over 410 million fraud via social media

2026-02-11
Yenimeram.com.tr
Why's our monitor labelling this an incident or hazard?
The use of AI technologies to create fake images and voices of public figures for fraudulent purposes constitutes the use of an AI system leading directly to harm—in this case, significant financial harm to individuals who were deceived. This fits the definition of an AI Incident because the AI system's use directly caused harm (financial loss) to people. Therefore, the event is classified as an AI Incident.
"Artificial intelligence" fraud operation in 6 provinces

2026-02-11
Yeniçağ Gazetesi
Why's our monitor labelling this an incident or hazard?
The use of AI to create realistic fake voices and images of government officials to deceive people into fraudulent investments constitutes direct involvement of AI systems in causing harm. The harm here is financial loss to victims, which qualifies as harm to property. Since the AI system's use directly led to realized harm through fraud, this event meets the criteria of an AI Incident.