AI-Generated Fraudulent Messages Target Citizens Ahead of Holiday


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Turkey's Dezenformasyonla Mücadele Merkezi (Centre for Combating Disinformation, DMM) warned citizens about a rise in AI-generated fraudulent messages on social media and messaging apps ahead of the holiday. Scammers use AI to impersonate trusted contacts or institutions, directing users to fake links that steal personal and financial information.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the use of AI-generated content in fraudulent messages that lead to harm by attempting to steal personal and financial data from individuals. This constitutes harm to people through deception and potential financial injury, directly linked to the use of AI systems generating such content. Therefore, this qualifies as an AI Incident because the AI system's use in generating deceptive content has directly led to harm or attempted harm.[AI generated]
AI principles
Privacy & data governance; Transparency & explainability

Industries
Digital security; Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


İletişim Başkanlığı issues warning: Do not access this type of content

2026-03-18
Hürriyet

DMM issues statement warning against AI-assisted fraud attempts

2026-03-18
Haberler
Why's our monitor labelling this an incident or hazard?
The use of AI-generated content in fraudulent messages directly leads to harm by attempting to steal personal and financial data from users. This is a clear case where the AI system's use has directly led to an incident of harm (fraud attempts). Therefore, this qualifies as an AI Incident due to realized harm linked to AI-generated deceptive content.

Pre-Holiday Fraud Warning from DMM

2026-03-18
Son Dakika
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of AI-generated content used in fraudulent messages. However, it only reports a warning about potential scams and advises caution to prevent harm. There is no indication that harm has already occurred or that a specific AI incident has taken place. Therefore, this event represents a plausible risk of harm due to AI misuse, qualifying it as an AI Hazard rather than an AI Incident or Complementary Information.

DMM warns of 'AI'-assisted fraud ahead of the holiday

2026-03-18
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it explicitly mentions AI-supported content used in fraudulent messages. The harm (fraud/scam) is not reported as realized but is a credible potential harm that could plausibly occur due to the AI-generated deceptive content. Since the event is a warning about potential AI-enabled fraud attempts rather than a report of actual harm, it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the plausible risk of harm from AI misuse in scams, not on responses or ecosystem updates.

Critical warning against AI-assisted fraud messages!

2026-03-18
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-supported fake messages used in fraud attempts, which have directly led to harm by tricking people into revealing personal and financial information. This fits the definition of an AI Incident because the AI system's use in generating deceptive messages has directly caused harm to individuals. The warnings and advice are responses to an ongoing harm, not just potential harm, confirming this is an incident rather than a hazard or complementary information.

Pre-holiday warning from DMM about "AI"-assisted fraud

2026-03-18
TRT Haber
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-supported content in fraudulent messages that impersonate trusted entities to deceive users, leading to attempts to steal personal and financial information. This constitutes a violation of rights and harm to individuals through deception and fraud. Since the AI system's use directly contributes to the harm by enabling more convincing scams, this qualifies as an AI Incident under the framework definitions.

Warning against AI-assisted fraud attempts

2026-03-18
Merhaba Haber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-supported content used in messages that lead to fraud attempts, which directly harms individuals by attempting to steal personal and financial information. Since the harm is occurring or actively attempted, and AI systems are involved in generating deceptive content, this qualifies as an AI Incident under the definition of harm to persons or groups through AI misuse.

'AI' warning from DMM: Don't fall into scammers' traps during the holiday

2026-03-18
Sputnik Türkiye
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deceptive content for fraudulent purposes, which could plausibly lead to harm such as financial loss or privacy violations. Since the article focuses on a warning about potential fraud attempts using AI-generated messages rather than describing an actual realized harm, it fits the definition of an AI Hazard. The AI system's involvement is in the use of AI-generated content to facilitate scams, posing a credible risk of harm, but no direct harm is reported yet.