AI-Generated Fake Messages Used in Extortion Scheme


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Eskişehir, businessman Hikmet Öztürk was accused by S.M. of sending threatening WhatsApp messages. Öztürk claims the messages were AI-generated fakes and that S.M. demanded 100,000 lira to drop the case. This incident highlights the use of AI in creating false evidence for extortion.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes the actual use of an AI system to fabricate WhatsApp conversations, falsely accuse and threaten a victim, and extort funds. This misuse directly caused psychological harm, harassment, and fraud, meeting the definition of an AI Incident.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Transparency & explainability; Respect of human rights

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Other

Harm types
Economic/Property; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


A businessman in Eskişehir was defrauded using artificial intelligence

2024-09-27
En Son Haber
Why's our monitor labelling this an incident or hazard?
The article describes the actual use of an AI system to fabricate WhatsApp conversations, falsely accuse and threaten a victim, and extort funds. This misuse directly caused psychological harm, harassment, and fraud, meeting the definition of an AI Incident.

AI-assisted "fake correspondence" claim in threat and insult investigation against businessman

2024-09-27
T24
Why's our monitor labelling this an incident or hazard?
Here, generative AI was misused to fabricate false chat logs as legal evidence, directly threatening an individual’s due-process rights and misleading law enforcement. The AI’s role in producing fake messages has already led to harm (a wrongful investigation and potential defamation), meeting the criteria for an AI Incident.

Criminal complaint filed against businessman over alleged AI-generated fake correspondence

2024-09-27
Haberler
Why's our monitor labelling this an incident or hazard?
The article describes how AI-supported tools were used to fabricate WhatsApp chat screenshots submitted as evidence in a legal case, falsely implicating the accused in threats and insults. Generating these fake messages enabled false accusations and legal action, violating the individual's rights and damaging their reputation and legal position. The harm is realized, not merely potential, and the AI system's role in creating the false evidence is pivotal, so this is an AI Incident.

Fake message claim from businessman Hikmet Öztürk

2024-09-27
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the WhatsApp chat screenshots were produced using AI-supported internet platforms to fabricate messages, which were then submitted as evidence in a legal case, falsely implicating the accused. This realized violation of the individual's rights fits the definition of an AI Incident, with the AI system's generation of the fake evidence pivotal to the harm. Hence, this event qualifies as an AI Incident rather than a hazard or complementary information.

Criminal complaint filed over "fake correspondence" claim in threat and insult investigation against businessman - Eskişehir Haberleri

2024-09-27
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The article describes a case in which AI was used to create fake WhatsApp chat screenshots presented as evidence in a legal investigation. Fabricating false evidence with the AI system misled legal processes and potentially violated the rights of the accused, harm directly tied to legal rights and justice processes, which fits the definition of an AI Incident.

Criminal complaint filed over "fake correspondence" claim in threat and insult investigation against businessman

2024-09-27
CNN Türk
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating fake WhatsApp chat screenshots used as false evidence in a legal case. This malicious use of AI-generated content directly caused reputational damage, legal threats, and a violation of rights, harming both the individuals involved and the legal process. The article documents realized harm, not just a potential risk, so it meets the criteria for an AI Incident rather than a hazard or complementary information.