AI-Driven Scams Exploit Deepfake and Impersonation Techniques in Turkey

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminals in Turkey are increasingly using AI technologies, including deepfake and impersonation tools, to perpetrate sophisticated scams, threatening individuals' financial and personal security. Experts warn the public to avoid sharing personal information online and highlight the urgent need for awareness and protective measures against AI-enabled fraud.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (deepfakes and AI algorithms that analyze human behavior) to perpetrate fraud, directly causing financial and psychological harm to individuals. This constitutes an AI Incident: the article does not merely warn of potential harm but describes ongoing fraudulent activity in which the use of AI has directly led to harm.[AI generated]
AI principles
Accountability, Privacy & data governance, Robustness & digital security, Safety, Respect of human rights, Transparency & explainability

Industries
Digital security; Media, social platforms, and marketing; Financial and insurance services

Affected stakeholders
General public

Harm types
Economic/Property, Psychological, Reputational, Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Yapay zeka yönetmeliği çevresinde ele alınması gereken kritik noktalar ("Critical points that must be addressed around AI regulation")

2023-08-18
Webrazzi
Why's our monitor labelling this an incident or hazard?
The content centers on policy discussions, regulatory proposals, and advocacy for changes in AI legislation. There is no mention of any specific AI system causing harm or posing a direct or plausible risk of harm. The article does not describe any incident or hazard involving AI systems but rather provides contextual information about AI governance and legal frameworks. Therefore, it qualifies as Complementary Information.

"Yapay zeka dolandırıcılığına karşı sosyal medyadan kişisel bilgilerinizi paylaşmayın" ("To guard against AI fraud, do not share your personal information on social media")

2023-08-19
En Son Haber
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfakes and AI algorithms that analyze human behavior) to perpetrate fraud, directly causing financial and psychological harm to individuals. This constitutes an AI Incident: the article does not merely warn of potential harm but describes ongoing fraudulent activity in which the use of AI has directly led to harm.

Yapay Zeka Dolandırıcılarına Karşı Dikkatli Olunmalı ("Caution Is Needed Against AI Scammers")

2023-08-19
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used maliciously for fraud (e.g., deepfakes, chatbot scams) that harm individuals by stealing financial and personal information. However, it does not describe a particular realized incident; it warns of an ongoing threat and advises caution. This fits the definition of an AI Hazard: the AI systems' use could plausibly lead to harm, but no specific incident is detailed. The article also covers mitigation efforts and awareness, but its main focus is the potential for harm rather than a concrete incident or complementary information about a past event.

Yapay zeka dolandırıcılığına karşı "sosyal medya" uyarısı ("'Social media' warning against AI fraud")

2023-08-19
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfakes, chatbots, automated calls) by malicious actors to commit fraud, directly causing financial harm and deception to individuals. Because the article describes realized harm rather than merely potential harm, and the AI systems play a pivotal role in enabling these scams, this is classified as an AI Incident.

Yapay zeka dolandırıcılığına karşı "sosyal medya" uyarısı ("'Social media' warning against AI fraud")

2023-08-19
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (e.g., deepfake generation, chatbots) by malicious actors to perpetrate fraud, directly harming individuals through theft of financial and personal information. This constitutes an AI Incident: the article focuses on realized, ongoing harm from AI-enabled scams and the need for mitigation, rather than on potential future harm or general information about AI.

Şimdi de bu çıktı... Yapay zekayla dolandırıcılık... Nereden besleniyorlar ("Now this has emerged... Fraud with artificial intelligence... What are they feeding on?")

2023-08-20
Haber Sitesi ODATV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfakes, chatbots, automated calls) by malicious actors to commit fraud, directly harming individuals financially and through deception. The article describes ongoing harm rather than only a potential or future risk, so it is classified as an AI Incident.

Google açıkladı! Yapay zeka kişisel yaşam koçluğu yapacak ("Google announced it: AI will act as a personal life coach")

2023-08-19
Türkiye
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved (DeepMind's generative AI for personal coaching). No direct or indirect harm has been reported yet, but ethical concerns and potential risks are highlighted, indicating plausible future harm. This qualifies as an AI Hazard rather than an Incident or Complementary Information, since the main focus is potential risk rather than realized harm or a response to a past incident.

Yapay zeka destekli haber merkezleri geliyor! ("AI-powered newsrooms are coming!")

2023-08-18
Teknolojioku
Why's our monitor labelling this an incident or hazard?
The article describes the release of AI usage standards by AP and discusses the broader context of AI-generated content in media. There is no direct or indirect harm caused by AI systems reported here, nor is there a plausible future harm event described. The main focus is on the governance approach and editorial guidelines, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Büyük reklamverenler de yapay zekaya yöneliyor ("Major advertisers are turning to AI as well")

2023-08-19
Samanyoluhaber.com
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems in advertising and marketing but does not report any realized harm or incidents resulting from these AI systems. It highlights concerns about potential biases and legal issues but does not describe any event where these concerns have materialized into harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual information about AI adoption, industry perspectives, and governance considerations, fitting the definition of Complementary Information.

AP'den 'Yapay Zeka' hamlesi ("An 'artificial intelligence' move from AP")

2023-08-18
Samanyoluhaber.com
Why's our monitor labelling this an incident or hazard?
The article primarily details AP's policy and governance measures regarding AI use in journalism, including ethical guidelines and staff training. It does not describe any realized harm or direct/indirect incident caused by AI systems, nor does it highlight a credible imminent risk of harm. Therefore, it fits the definition of Complementary Information, as it provides context and response to AI developments rather than reporting an AI Incident or AI Hazard.

Yapay zeka, yapay zekaya karşı! - Yeni dolandırıcılık yollarına dikkat! ("AI versus AI! Watch out for new fraud methods!")

2023-08-19
YENİ ASYA
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used maliciously to perpetrate fraud through impersonation and deepfake techniques, harming individuals' financial and personal security. This fits the definition of an AI Incident, as rights have been violated and communities harmed. The article also discusses mitigation efforts that themselves use AI, but its primary focus is the realized harm caused by AI-enabled scams.