AI-Generated Deepfake Voice Used in Fraud Targeting Turkish Officials and Businesspeople

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A fraudster used AI to clone President Erdoğan's voice and place deceptive calls to businesspeople and senior officials for financial gain. The scheme, which involved more than 10 phone lines, was uncovered by Turkey's National Intelligence Organization (MİT), which identified and apprehended the suspect. Authorities warn of increasing AI-driven identity fraud.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes an individual using AI to mimic President Erdoğan's voice to deceive and defraud victims, which constitutes direct harm to individuals (financial harm) and potentially to communities (trust and security). The AI system's use in voice synthesis is central to the incident, and the harm has already occurred as the scam attempts were made. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's misuse.[AI generated]
AI principles
Accountability; Privacy & data governance; Robustness & digital security; Transparency & explainability; Safety; Respect of human rights; Human wellbeing

Industries
Government, security, and defence; Digital security; Business processes and support services

Affected stakeholders
Government; Business

Harm types
Economic/Property; Reputational; Public interest; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


MİT'ten 'sahte Erdoğan' operasyonu ['Fake Erdoğan' operation by MİT]

2023-08-17
Özgür Kocaeli Gazetesi
Why's our monitor labelling this an incident or hazard?
The article describes an individual using AI to mimic President Erdoğan's voice to deceive and defraud victims, which constitutes direct harm to individuals (financial harm) and potentially to communities (trust and security). The AI system's use in voice synthesis is central to the incident, and the harm has already occurred as the scam attempts were made. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's misuse.

Dolandırıcı yakalandı: Yapay zekâyla Erdoğan'ı taklit etmiş [Fraudster caught: he imitated Erdoğan with artificial intelligence]

2023-08-18
Hürriyet
Why's our monitor labelling this an incident or hazard?
The use of AI to imitate a person's voice for fraudulent purposes constitutes the use of an AI system leading directly to harm (financial and reputational harm to victims). This fits the definition of an AI Incident because the AI system's use directly caused harm through deception and fraud. The event is not merely a potential risk or a complementary update but a realized harm involving AI misuse.

Dolandırıcıyı MİT yakalattı! Erdoğan'ın sesini yapay zeka ile taklit edip kandırmaya çalıştı [MİT had the fraudster caught! He tried to deceive people by imitating Erdoğan's voice with AI]

2023-08-17
Hürriyet
Why's our monitor labelling this an incident or hazard?
The use of AI to imitate a person's voice for fraudulent purposes directly led to harm by attempting to deceive and potentially defraud victims. The AI system's use in this criminal activity constitutes an AI Incident because the AI system's use directly contributed to a violation of rights and harm to individuals targeted by the scam. The event involves the use of AI (voice imitation) in a harmful way, and the harm is realized (attempted fraud).

Başkan Erdoğan'ın sesini taklit ederek dolandırıcılık yapanlara MİT operasyonu [MİT operation against those committing fraud by imitating President Erdoğan's voice]

2023-08-17
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The article describes a case where an AI system was used to mimic President Erdoğan's voice to deceive and defraud individuals. This use of AI directly led to harm through fraud, which constitutes a violation of rights and causes significant harm to individuals. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's misuse.

Erdoğan'ın sesiyle böyle dolandırmaya kalkmış: 'Operasyon için para lazım' [This is how he attempted fraud with Erdoğan's voice: 'We need money for an operation']

2023-08-19
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI applications to mimic a person's voice for fraudulent purposes, which directly led to financial harm to victims. The AI system's use in this criminal activity fulfills the criteria for an AI Incident, as the AI's misuse directly caused harm. Therefore, this is classified as an AI Incident.

Son dakika... MİT'ten siber operasyon! Erdoğan'ın sesini yapay zeka ile taklit eden yakalandı [Breaking... Cyber operation by MİT! The person who imitated Erdoğan's voice with AI has been caught]

2023-08-17
Milliyet
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI application was used to mimic President Erdoğan's voice to defraud people, which is a direct misuse of AI leading to harm (fraud). This fits the definition of an AI Incident because the AI system's use directly caused harm to persons through deception and financial exploitation. The involvement of AI in the fraudulent activity and the resulting harm to victims justifies classification as an AI Incident rather than a hazard or complementary information.

Cumhurbaşkanı Erdoğan'ın sesini taklit eden zanlıya ilişkin detaylar ortaya çıktı [Details emerge about the suspect who imitated President Erdoğan's voice]

2023-08-18
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the suspect used AI to mimic the president's voice to deceive and attempt to defraud individuals, which is a direct misuse of an AI system leading to harm (attempted fraud). Even though the AI-enabled fraud attempts were unsuccessful, the use of AI in this criminal context constitutes an AI Incident because it involves direct harm or attempted harm through AI misuse. The other fraudulent activities mentioned are not AI-related, but the AI voice cloning attempt itself is sufficient to classify this as an AI Incident.

Cumhurbaşkanı Erdoğan'ın sesini taklit edip dolandırıcılık yapmaya çalışan kişiyi, MİT yakaladı [MİT caught the person who tried to commit fraud by imitating President Erdoğan's voice]

2023-08-17
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI application was used to imitate the president's voice to conduct fraudulent calls targeting businesspeople and officials, which is a direct misuse of AI leading to harm (fraud). This fits the definition of an AI Incident because the AI system's use directly led to harm through deception and attempted financial crime. The involvement of AI in the fraud and the resulting harm to individuals justifies classification as an AI Incident rather than a hazard or complementary information.

Son dakika: Yapay zeka ile Cumhurbaşkanı taklidiyle dolandırıcılık! 1 gözaltı [Breaking: Fraud through an AI imitation of the President! One person detained]

2023-08-17
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The use of AI to imitate a person's voice for fraudulent purposes constitutes direct involvement of an AI system in causing harm (fraud and deception). This meets the criteria for an AI Incident as the AI system's use directly led to harm to individuals (victims of fraud). The event involves the use of AI technology in a malicious way causing realized harm, not just a potential risk or complementary information.

MİT, Cumhurbaşkanı Erdoğan'ın sesini taklitle dolandırıcılık yapan kişiyi yakalattı [MİT had the person committing fraud by imitating President Erdoğan's voice caught]

2023-08-17
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly states that artificial intelligence was used to imitate the president's voice to deceive and defraud people, which directly caused harm. This meets the criteria for an AI Incident because the AI system's use led to realized harm (fraud) and violation of rights. The involvement of AI in the fraudulent activity is clear and central to the event, and the harm is actual, not just potential.

Cumhurbaşkanı Erdoğan'ın sesini taklit eden dolandırıcı yakalandı [Fraudster who imitated President Erdoğan's voice caught]

2023-08-17
TRT haber
Why's our monitor labelling this an incident or hazard?
The use of AI to imitate the president's voice for fraudulent calls constitutes direct misuse of an AI system leading to harm (fraud and deception). This fits the definition of an AI Incident because the AI system's use directly led to harm to persons (victims of fraud attempts) and breaches legal and ethical norms. The event involves the use of AI in a harmful way, not just potential or hypothetical risk, and thus is classified as an AI Incident.

MİT'ten Cumhurbaşkanı Erdoğan'ın sesiyle dolandırıcılık operasyonu [MİT operation against fraud using President Erdoğan's voice]

2023-08-17
NTV
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI applications to imitate the president's voice for fraudulent purposes, leading to realized harm (fraud attempts) against individuals. This fits the definition of an AI Incident because the AI system's use directly led to harm through deception and attempted financial exploitation. The capture of the perpetrator does not negate the fact that harm occurred or was attempted via AI misuse.

Cumhurbaşkanı Erdoğan'ın sesini taklit ederek dolandırıcılık yapan kişi tutuklandı [Person who committed fraud by imitating President Erdoğan's voice arrested]

2023-08-18
NTV
Why's our monitor labelling this an incident or hazard?
The use of AI to imitate a person's voice for fraudulent purposes constitutes the use of an AI system leading directly to harm (financial and reputational) to individuals targeted by the scam. This fits the definition of an AI Incident because the AI system's use was instrumental in causing harm through deception and fraud. The event involves the use of AI in a malicious way that resulted in actual harm, not just a potential risk or a general update.

Son dakika... MİT devreye girdi kaçamadılar: Erdoğan'ın sesini taklit eden dolandırıcılar yakalandı [Breaking... MİT stepped in and they could not escape: fraudsters who imitated Erdoğan's voice caught]

2023-08-17
CNN Türk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology to generate a synthetic voice for fraudulent purposes, directly causing harm through deception and potential financial or reputational damage to victims. This meets the criteria for an AI Incident as the AI system's use directly led to harm (fraud).

Yapay Zekâyla Cumhurbaşkanı'nın Sesi Taklit Eden Dolandırıcı [The fraudster who imitated the President's voice with AI]

2023-08-17
Webtekno
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI for voice synthesis) to impersonate a public figure's voice, which was used to deceive and attempt to defraud individuals. This constitutes a violation of rights and causes harm to individuals and communities through fraudulent activity. The AI system's use directly led to this harm, qualifying the event as an AI Incident under the framework. The arrest and law enforcement response confirm the harm was realized, not just potential.

Milli İstihbarat Teşkilatı'ndan operasyon: Erdoğan'ın sesini taklit ederek dolandırıcılık yapan kişiyi yakalattı [Operation by the National Intelligence Organization: it had the person committing fraud by imitating Erdoğan's voice caught]

2023-08-17
Yeni Çağ Gazetesi
Why's our monitor labelling this an incident or hazard?
The use of AI to generate a fake voice for fraudulent purposes constitutes an AI system's use leading directly to harm (fraud and deception). This fits the definition of an AI Incident because the AI system's use caused violations of rights and harm to people targeted by the scam. The article describes an actual event where harm was attempted and the suspect was caught, confirming realized harm rather than a hypothetical risk.

Dolandırıcıyı MİT yakalattı! Cumhurbaşkanı Erdoğan'ın sesini yapay zeka ile taklit edip kandırmaya çalıştı [MİT had the fraudster caught! He tried to deceive people by imitating President Erdoğan's voice with AI]

2023-08-17
Vatan
Why's our monitor labelling this an incident or hazard?
The use of AI to imitate a person's voice for fraudulent purposes constitutes direct involvement of an AI system in causing harm (fraud and deception). The event clearly states that the AI-generated voice was used to deceive and attempt to gain benefits unlawfully, which is a violation of rights and causes harm to individuals targeted. Therefore, this qualifies as an AI Incident under the framework because the AI system's use directly led to harm through criminal activity.

Yapay zekayla Erdoğan'ın sesini taklit etti [He imitated Erdoğan's voice with AI]

2023-08-18
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate a synthetic voice for fraudulent purposes, directly leading to harm through attempted scams (harm to persons/groups). The AI system's use in voice imitation is central to the incident, and the harm is realized as the scam attempts occurred. Therefore, this qualifies as an AI Incident.

MİT'ten 'sahte Erdoğan' operasyonu ['Fake Erdoğan' operation by MİT]

2023-08-17
birgun.net
Why's our monitor labelling this an incident or hazard?
The use of AI to mimic a person's voice for fraudulent purposes constitutes an AI system's use leading directly to harm, specifically financial and reputational harm to individuals targeted by the scam. The event describes realized harm through the fraudulent activity enabled by AI voice synthesis, which fits the definition of an AI Incident due to violations of rights and harm to individuals and communities. The capture of the suspect is a response but does not negate the incident classification.

Erdoğan'ın Sesini Yapay Zeka ile Taklit Eden Dolandırıcı Yakalandı [Fraudster who imitated Erdoğan's voice with AI caught]

2023-08-17
tamindir.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to imitate a person's voice, which is a clear AI system application. The AI system was used maliciously to deceive and attempt to defraud individuals, which constitutes a violation of rights and causes harm to people (financial and trust harm). Although the fraudster was caught before widespread harm, the AI system's use directly led to criminal attempts to cause harm, qualifying this as an AI Incident rather than a hazard or complementary information. The involvement of AI in the fraudulent activity and the direct link to harm justify this classification.

Deepfake MİT'e takıldı [Deepfake snagged by MİT]

2023-08-17
Akşam
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to clone and mimic the president's voice to deceive victims, which constitutes an AI system's use leading directly to harm (fraud). The harm includes violation of trust and potential financial loss to victims, fitting the definition of an AI Incident. The involvement of AI in the fraudulent calls and the resulting harm to people justifies classification as an AI Incident rather than a hazard or complementary information.

"Erdoğan'ın sesi" dolandırıcılığa karıştı ["Erdoğan's voice" implicated in fraud]

2023-08-17
Samanyoluhaber.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI applications to mimic a person's voice for fraudulent purposes, which constitutes the use of an AI system. The fraudulent calls caused harm to individuals by attempting to deceive them for financial gain, which qualifies as harm to persons or groups. Therefore, this is an AI Incident because the AI system's use directly led to realized harm through deception and fraud attempts.

Cumhurbaşkanı Erdoğan'ın sesiyle dolandırıcılık! MİT kıskıvrak yakaladı [Fraud with President Erdoğan's voice! MİT caught him red-handed]

2023-08-17
Ulusal Kanal
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice synthesis) to commit fraud, which directly caused harm to people (financial and reputational harm to businesspeople and officials). This meets the definition of an AI Incident because the AI system's use directly led to harm through deception and fraud. The apprehension of the perpetrator is a response but does not negate the incident classification.

Yapay zeka MİT'e yakalandı [Artificial intelligence caught by MİT]

2023-08-17
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The use of AI to impersonate a person's voice for fraudulent calls directly led to harm by enabling scams targeting individuals, which constitutes harm to persons and communities. The AI system's use was central to the incident, as the voice imitation was AI-generated, facilitating the deception. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's misuse.

Cumhurbaşkanı Erdoğan'ın sesini taklit ederek dolandırıcılık yapmaya çalışan şüpheli tutuklandı [Suspect who tried to commit fraud by imitating President Erdoğan's voice arrested]

2023-08-18
Elbistanın Sesi
Why's our monitor labelling this an incident or hazard?
The suspect used AI-based voice imitation technology to deceive and defraud victims, which is a direct harm caused by the AI system's misuse. The fraudulent activity led to realized harm (financial and trust damage), fulfilling the criteria for an AI Incident. The involvement of AI in the voice imitation is explicit, and the harm is materialized through the fraud attempts.

Erdoğan'ın sesini kullanan dolandırıcı tutuklandı [Fraudster who used Erdoğan's voice arrested]

2023-08-18
Sözcü Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a synthetic voice of a public figure for fraudulent purposes, which directly caused harm by attempting to deceive and defraud individuals. This constitutes a violation of rights and causes harm to individuals targeted by the scam, fitting the definition of an AI Incident.

Erdoğan'ın sesini taklit ederek dolandırıcılık yapan şahıs tutuklandı [Individual who committed fraud by imitating Erdoğan's voice arrested]

2023-08-18
Cumhuriyet
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to imitate Erdoğan's voice to deceive and attempt to defraud people, which is a direct misuse of AI technology causing harm (financial and reputational). Although the exact victims and amounts are not fully identified, the fraudulent use of AI-generated voice to obtain money is a clear harm under the AI Incident definition, specifically harm to persons and communities through deception and financial exploitation.

CUMHURBAŞKANI ERDOĞAN'IN SESİNİ TAKLİT EDEN DOLANDIRICI TUTUKLANDI [Fraudster who imitated President Erdoğan's voice arrested]

2023-08-18
Haberler
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI to generate a synthetic voice for fraudulent purposes, which is a direct misuse of an AI system leading to harm (fraud and deception). This fits the definition of an AI Incident because the AI system's use directly led to harm through criminal activity. The presence of the AI system is clear (voice synthesis AI), and the harm is realized (fraud attempts). Therefore, this event qualifies as an AI Incident.

Erdoğan'ın sesini yapay zeka ile taklit edip işadamlarını aradı! "Yurt dışında operasyon yapıyoruz, para lazım!" [He imitated Erdoğan's voice with AI and called businessmen! "We are running an operation abroad, we need money!"]

2023-08-18
İnternethaber
Why's our monitor labelling this an incident or hazard?
The use of AI to imitate a person's voice for fraudulent purposes constitutes misuse of an AI system. Although the harm (financial loss) was not realized, the event demonstrates a credible risk of harm through AI-enabled social engineering fraud. Since no actual harm occurred but there is a plausible risk of harm, this qualifies as an AI Hazard rather than an AI Incident.

Cumhurbaşkanı Erdoğan'ın sesini taklit ederek dolandırıcılık yapmaya çalışan şüpheli tutuklandı [Suspect who tried to commit fraud by imitating President Erdoğan's voice arrested]

2023-08-18
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The suspect used AI-generated voice imitation to impersonate the president and defraud people, which is a direct misuse of AI technology causing harm. Although the exact amount of financial harm from the voice fraud is not yet fully determined, the fraudulent activity and the use of AI for deception are clear. This fits the definition of an AI Incident because the AI system's use directly led to harm (fraud and financial loss).

Cumhurbaşkanı Erdoğan'ın sesini taklit ederek dolandırıcılık yapmaya çalıştı: Tutuklandı [He tried to commit fraud by imitating President Erdoğan's voice: arrested]

2023-08-18
En Son Haber
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice imitation via AI) to commit fraud, which is a harm to persons (victims of attempted financial fraud). The AI system's use directly contributed to the harm by enabling the impersonation. Therefore, this qualifies as an AI Incident under the definition of harm caused by AI system use.

'Sahte Erdoğan' operasyonunda bir tutuklama [One arrest in the 'fake Erdoğan' operation]

2023-08-18
birgun.net
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI to generate a fake voice for fraudulent purposes, which directly caused harm through deception and attempted financial fraud. This fits the definition of an AI Incident because the AI system's use led directly to harm (fraud and deception). The involvement of AI in the voice imitation is clear and central to the incident.

Erdoğan'ın sesini yapay zeka ile taklit ederek dolandırıcılık yapan kişi tanıdık çıktı [The person committing fraud by imitating Erdoğan's voice with AI turned out to be a familiar name]

2023-08-19
birgun.net
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI to generate a synthetic voice of President Erdoğan to impersonate him and attempt to defraud individuals. This is a direct use of an AI system leading to harm (fraud attempts), fulfilling the criteria for an AI Incident. The harm is related to financial fraud, which is a form of harm to persons and property. The AI system's role is pivotal as the fraud relies on the AI-generated voice to deceive victims. Therefore, this is classified as an AI Incident.

'Sahte Erdoğan'la vurgun girişimi... Şüpheli tanıdık çıktı: Odatv, emniyet ifadesine ulaştı [Attempted swindle with the 'fake Erdoğan'... The suspect turned out to be a familiar name: Odatv obtained the police statement]

2023-08-18
Haber Sitesi ODATV
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI technology (voice-changing applications) to generate a synthetic voice of a public figure to deceive and attempt to defraud individuals. This misuse of AI directly leads to harm in the form of attempted fraud and potential financial loss, which qualifies as an AI Incident under the framework. The harm is realized in the form of attempted deception and the risk of financial harm, even if no money was ultimately obtained. Therefore, this event is classified as an AI Incident.

Erdoğan'ın sesiyle dolandırıcılık yapan isim bakın kimin oğlu çıktı [Look whose son the man committing fraud with Erdoğan's voice turned out to be]

2023-08-18
Haber Sitesi ODATV
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI system was used to mimic President Erdoğan's voice to deceive and defraud people, which is a direct use of AI leading to harm (financial loss) to individuals. This fits the definition of an AI Incident because the AI system's use directly caused harm through fraudulent activity. The involvement of the AI system is clear and central to the incident, and the harm is realized, not just potential.

Erdoğan'ın sesini taklit ederek dolandırıcılık yapan kişi tutuklandı [Person who committed fraud by imitating Erdoğan's voice arrested]

2023-08-18
www.gercekgundem.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice imitation via AI) to commit fraud, which is a violation of law and causes harm to individuals (financial harm and deception). The AI system's use directly led to harm through fraudulent activity. Therefore, this qualifies as an AI Incident under the definition of causing harm through the use of an AI system.

Erdoğan'ın sesini taklit edip insanları dolandıran kişinin babası bakın kim çıktı! [Look who the father of the man who defrauded people by imitating Erdoğan's voice turned out to be!]

2023-08-19
www.gercekgundem.com
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI technology to mimic a person's voice to commit fraud, which is a direct violation of rights and causes harm to individuals targeted by the scam. The AI system's use in this fraudulent scheme directly led to an attempt to cause financial harm and deception, fulfilling the criteria for an AI Incident. The fact that no money was ultimately obtained does not negate the occurrence of harm or the direct involvement of AI in the fraudulent activity.

Erdoğan'ın sesini taklit ederek dolandırıcılık yapan kişi tanıdık çıktı [Person committing fraud by imitating Erdoğan's voice turned out to be a familiar name]

2023-08-19
Samanyoluhaber.com
Why's our monitor labelling this an incident or hazard?
The event explicitly states the use of AI to imitate the voice of President Erdoğan to deceive and attempt to defraud businesspeople. The AI system's use directly led to fraudulent attempts, which is a form of harm to persons and communities. The suspect's use of AI-generated voice recordings to mislead victims is a direct involvement of AI in causing harm. Even if no money was ultimately obtained, the attempt and the deception itself constitute harm. Hence, this is an AI Incident.

Erdoğan'ın sesini taklit etti: Yapay zekâ sahtekârı tutuklandı [He imitated Erdoğan's voice: AI impostor arrested]

2023-08-19
Hürriyet
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (voice synthesis) to impersonate a public figure for fraudulent purposes. The suspect was caught and admitted to attempting fraud but did not successfully obtain money through the AI-generated voice calls. Since no actual harm (financial loss) has been confirmed, the event is a credible AI Hazard, representing a plausible risk of harm from AI misuse. It is not Complementary Information because the main focus is the AI misuse attempt, nor is it Unrelated because AI is central to the event. Therefore, the classification is AI Hazard.

Cumhurbaşkanı Erdoğan'ın sesini taklit eden kişi tutuklandı! Ses taklidiyle dolandırıcılık yapıyordu [Person who imitated President Erdoğan's voice arrested! He was committing fraud through voice imitation]

2023-08-18
Ulusal Kanal
Why's our monitor labelling this an incident or hazard?
The event explicitly states that the suspect used AI to mimic the president's voice to defraud businesspeople and officials, which is a direct harm caused by the AI system's use. This meets the criteria for an AI Incident because the AI system's use directly led to a violation of rights and harm to persons through fraudulent activity. Therefore, the classification is AI Incident.

استخدم الذكاء الاصطناعي للاحتيال على رجال أعمال ومسؤولين في تركيا.. القبض على مقلد صوت أردوغان [He used AI to defraud businessmen and officials in Turkey... Erdoğan voice imitator arrested]

2023-08-17
AlJadeed.tv
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to mimic a voice for fraudulent purposes, which directly led to harm through attempted fraud on individuals. This fits the definition of an AI Incident because the AI system's use directly caused harm (fraud attempt) to people. The involvement of AI in the malicious use of voice synthesis technology to deceive and defraud is clear and central to the event.

تركيا.. اعتقال مقلد صوت أردوغان [Turkey... Erdoğan voice imitator arrested]

2023-08-17
جريدة زمان التركية
Why's our monitor labelling this an incident or hazard?
The use of AI to imitate a public figure's voice for fraudulent purposes constitutes direct misuse of an AI system leading to harm (attempted fraud). Although the fraud was prevented by arrest, the event clearly involves an AI system's use causing or attempting to cause harm, fitting the definition of an AI Incident. The involvement of AI in the impersonation and the resulting criminal activity justifies classification as an AI Incident rather than a hazard or complementary information.

شاهد: شاب يُقلّد صوت أردوغان ويوجّه الأوامر للمسؤولين! [Watch: A young man imitates Erdoğan's voice and gives orders to officials!]

2023-08-17
وكالة أوقات الشام الإخبارية
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a synthetic voice impersonating a public figure, which was used to attempt fraud. This misuse of AI directly led to an attempted harm (fraud) against individuals and government officials, constituting a violation of rights and potential harm to communities. Therefore, this qualifies as an AI Incident due to the realized harm from malicious use of AI-generated voice imitation.

المخابرات التركية توقف شخصا يقلد صوت أردوغان [Turkish intelligence detains a person imitating Erdoğan's voice]

2023-08-17
Sputnik Arabic (سبوتنيك عربي)
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake voice of a public figure, which was then used to deceive and defraud people, causing direct harm to those individuals. This meets the criteria of an AI Incident because the AI system's use directly led to harm through fraud and potential financial loss. The involvement of AI in voice imitation and the resulting criminal activity justifies classification as an AI Incident.

الامن التركي يعتقل شخص يقلد صوت أردوغان بهدف الاحتيال [Turkish security arrests a person imitating Erdoğan's voice for the purpose of fraud]

2023-08-17
تركيا الآن
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI technology was used to mimic the voice of President Erdogan to commit fraud, which is a direct misuse of AI leading to harm (fraud and deception). This fits the definition of an AI Incident because the AI system's use directly led to an attempt to harm individuals through deception and fraud. The involvement of AI in the malicious use of voice synthesis technology for impersonation and fraud is clear and central to the event.

تركيا: الاحتيال على مسؤولين كبار بواسطة الذكاء الاصطناعي [Turkey: senior officials targeted by fraud using artificial intelligence]

2023-08-17
الإمارات نيوز
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (voice imitation technology) to commit fraud, which is a direct harm to people (senior officials and businessmen) through deception. The AI system's use led directly to an attempt to cause harm, fulfilling the criteria for an AI Incident. The arrest and investigation confirm the harm was realized or at least attempted, not just a potential risk, so it is not merely a hazard or complementary information.

اعتقال تركي احتال على رجال أعمال بتقليد صوت الرئيس أردوغان [Turkish man arrested for defrauding businessmen by imitating President Erdoğan's voice]

2023-08-18
وكالة نيو ترك بوست الاخبارية
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a synthetic voice mimicking a public figure, which was then used to commit fraud against individuals. This constitutes direct harm to people through deception and potential financial loss, meeting the criteria for an AI Incident. The AI system's use was central to the harm caused, as it enabled the impersonation and subsequent fraud.

القبض على مواطن تركي يحتال على المسؤولين من خلال تقليد صوت أردوغان.. تفاصيل مثيرة [Turkish citizen arrested for defrauding officials by imitating Erdoğan's voice... striking details]

2023-08-17
تركيا الآن
Why's our monitor labelling this an incident or hazard?
The use of AI to generate a fake voice to impersonate a public figure and deceive victims constitutes direct involvement of an AI system in causing harm. The harm includes financial fraud and unauthorized access to sensitive information, which are violations of rights and cause significant harm to individuals and institutions. Therefore, this qualifies as an AI Incident.

تركيا.. اعتقال "مقلد صوت أردوغان" [Turkey... "Erdoğan voice imitator" arrested]

2023-08-17
سكاي نيوز عربية
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to mimic the voice of President Erdogan to deceive and attempt fraud, which constitutes a direct misuse of an AI system causing harm. The harm includes criminal fraud attempts against individuals, which falls under violations of law and rights. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through malicious use.

باستخدام الذكاء الاصطناعي.. القبض على تركي محتال يقلد صوت اردوغان [Using artificial intelligence... Turkish fraudster imitating Erdoğan's voice arrested]

2023-08-17
قناه السومرية العراقية
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (voice imitation technology) in a criminal fraud attempt, which directly led to harm through attempted financial fraud. The AI system's use was central to the incident, fulfilling the criteria for an AI Incident as it caused harm through malicious use. Therefore, this is classified as an AI Incident.

بعد أردوغان.. هل أصبح الذكاء الصناعي خطرا على قادة العالم؟ [After Erdoğan... has artificial intelligence become a danger to world leaders?]

2023-08-19
tayyar.org
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems for voice imitation to commit fraud, which directly caused financial harm and deception, qualifying as an AI Incident under the framework. The involvement of AI in these scams is clear and the harms (financial loss, deception) have materialized. While it also discusses potential future risks and calls for governance, the main event is the realized fraud using AI voice synthesis, which meets the criteria for an AI Incident rather than a hazard or complementary information.

'Erdoğan voice imitator' detained - Lebanese Forces Official Website

2023-08-17
Lebanese Forces Official Website
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of advanced AI technology to mimic President Erdogan's voice for fraudulent calls targeting high-profile individuals. This constitutes direct use of an AI system leading to harm (fraud attempts), fitting the definition of an AI Incident. The harm includes violations of trust, potential financial loss, and risks to individuals' security. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Person detained in Turkey for imitating Erdoğan's voice with the help of artificial intelligence

2023-08-17
TV21.mk
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice imitation via AI applications) to commit fraud, which directly leads to harm in the form of deception and potential financial or reputational damage to individuals targeted. This constitutes an AI Incident because the AI system's use directly caused harm through fraudulent activity.

Person arrested for imitating Erdoğan's voice with the help of artificial intelligence, called businessmen and officials - Sloboden pečat

2023-08-17
Sloboden pečat
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (voice imitation via AI applications) to commit fraud by impersonating a public figure. This use of AI directly led to harm in the form of attempted scams targeting businessmen and government officials. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through fraudulent activity.

TURKEY: Person detained for imitating Erdoğan's voice with the help of artificial intelligence

2023-08-17
Lider.mk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice imitation applications) to commit fraud by impersonating a high-profile individual, which directly caused harm to people (businessmen and officials) through deception and potential financial or reputational damage. This fits the definition of an AI Incident as the AI system's use directly led to harm.

TURKEY ON ALERT: Man imitated Erdoğan's voice with artificial intelligence

2023-08-17
vecer.press
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice imitation via AI applications) to commit fraud, which constitutes harm to individuals (businessmen and officials) through deception and potential financial or reputational damage. The AI system's use directly led to these harms, qualifying this as an AI Incident.

No letup against cyberattacks... 108 million access attempts to 252,883 malicious internet addresses blocked

2023-08-20
Hürriyet
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of AI-generated voice imitation used in fraud attempts and AI technology used to detect phishing domains. However, the main focus is on the cybersecurity response, prevention, and mitigation efforts, including blocking malicious addresses and raising awareness. There is no description of an AI system causing direct or indirect harm, nor is there a plausible future harm scenario presented as the main event. The article serves to inform about ongoing cybersecurity measures and improvements, which fits the definition of Complementary Information rather than an Incident or Hazard.

Major blow to AI fraudsters

2023-08-21
Hürriyet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to mimic voices for fraudulent purposes, which is an AI system's use leading to harm (fraud and deception). The harm is realized, as these scams target individuals and public officials, constituting harm to both individuals and communities. The large number of blocked malicious domains and access attempts further supports the presence of active AI-enabled harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in fraud attempts.

Don't take the 'bait'

2023-08-20
Akşam
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to generate fake voices (deepfake audio) and automated phishing emails to deceive victims, which has directly led to harm through fraud and data breaches. The involvement of AI in the malicious use of deepfake and automated content generation is clear, and the harms include violations of privacy, potential financial loss, and deception of individuals. These harms fall under the definitions of AI Incident, as the AI system's use has directly led to realized harm.

Access to nearly 253,000 malicious internet addresses blocked - Bloomberg HT

2023-08-20
BloombergHT
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI technology as part of cybersecurity measures to detect and prevent fraud and phishing attacks. However, it does not report any realized harm caused by AI systems themselves, nor does it describe a specific incident where AI caused harm. Instead, it details responses and improvements in cybersecurity infrastructure and coordination. Therefore, this is Complementary Information as it provides context and updates on AI-related cybersecurity activities and responses, rather than reporting a new AI Incident or AI Hazard.