AI-Generated Deepfakes Used in Disinformation Campaigns Targeting Turkey

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Turkey's Directorate of Communications' Disinformation Combat Center (DMM) warned of a surge in AI-generated deepfake videos, images, and audio used in disinformation campaigns amid regional tensions. This manipulative content, including a provocative video targeting President Erdoğan, threatens national security and social unity, prompting official advisories urging public vigilance.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI technologies to create deepfake visual, audio, and video content for disinformation purposes, which is a clear involvement of AI systems. Although no direct harm is reported as having occurred, the warning about increased disinformation activities and their potential to disrupt national security and social cohesion indicates a plausible risk of harm. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but has not yet directly caused it. The event is not a Complementary Information piece because it focuses on the warning about potential harm rather than updates or responses to past incidents.[AI generated]
AI principles
Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Government, General public

Harm types
Public interest, Reputational

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

DMM calls for common sense against AI-driven disinformation activities

2026-03-25
Haber7.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies to create deepfake visual, audio, and video content for disinformation purposes, which is a clear involvement of AI systems. Although no direct harm is reported as having occurred, the warning about increased disinformation activities and their potential to disrupt national security and social cohesion indicates a plausible risk of harm. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but has not yet directly caused it. The event is not a Complementary Information piece because it focuses on the warning about potential harm rather than updates or responses to past incidents.

Beware of AI-assisted fake content! DMM issues warning

2026-03-25
Milliyet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technologies being used to create deepfake content that is actively disseminated to mislead the public and manipulate social sensitivities, harming communities and national security. This meets the definition of an AI Incident, as the AI system's use has directly led to harm to communities and societal stability. The warning and advice from the Disinformation Combat Center further confirm realized harm rather than a merely potential risk. Hence, the event is classified as an AI Incident.

DMM warns against disinformation activities targeting unity and solidarity

2026-03-25
Milliyet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technologies (deepfake generation) being used to produce false content aimed at manipulating public perception and causing social harm. While the disinformation activities are ongoing, the article frames the issue as a warning and a call for vigilance rather than reporting a specific incident where harm has materialized. Therefore, the event fits the definition of an AI Hazard, as it plausibly could lead to harm to communities and national security through AI-generated disinformation, but no concrete incident of harm is detailed.

Warning from the DMM: Do not trust AI-assisted fake content

2026-03-25
En Son Haber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technologies being used to create disinformation content (deepfakes) that manipulate public opinion and societal sensitivities, which aligns with the definition of an AI system's use potentially leading to harm. The harm described is disruption to national security and social unity, which falls under harm to communities and possibly national security interests. Since the article focuses on warning about the increase and potential impact of such AI-generated disinformation rather than reporting an actual realized harm event, it fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or a response update, so it is not Complementary Information, and it is clearly related to AI systems, so it is not Unrelated.

DMM warns citizens against AI-generated images

2026-03-25
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake content (deepfakes) being used to spread disinformation and manipulate public perception, which is a recognized harm to communities and societal trust. However, the article does not report that such harm has already occurred or is ongoing; rather, it is a cautionary advisory about the plausible risk of such harms. Therefore, this event fits the definition of an AI Hazard, as it highlights a credible potential for AI misuse leading to harm, but does not describe a realized incident.

DMM issues warning about AI-assisted fake content

2026-03-25
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technologies used to produce deepfake content that could manipulate public opinion and cause societal harm. This fits the definition of an AI Hazard, as it plausibly could lead to harm (psychological, social, and national security-related). Since no specific realized harm or incident is described, and the focus is on warning and raising awareness about potential misuse, the classification as AI Hazard is appropriate.

DMM: Perception operations are being conducted with deepfake content

2026-03-25
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technologies used to create deepfake content for disinformation operations that manipulate societal sensitivities and threaten national security and social unity. This constitutes harm to communities through misinformation and manipulation, fitting the definition of an AI Incident where AI use has directly led to harm.

National security warning from the DMM: AI-driven perception operation! Beware of fake content

2026-03-25
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (deepfake technologies) being used to produce false content for psychological operations and disinformation. While the harm is not described as having already materialized, the use of AI-generated disinformation targeting national security and social unity plausibly could lead to significant harm to communities and national security. Therefore, this constitutes an AI Hazard, as the AI system's use could plausibly lead to an AI Incident, but no direct harm is reported yet.

Warning from the Disinformation Combat Center against deepfakes and fake content

2026-03-25
Haber Sitesi ODATV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technologies generating deepfake visual, audio, and video content used in disinformation campaigns. However, it focuses on the potential for harm and the need for vigilance rather than describing a realized harm or incident. Therefore, this is an AI Hazard, as the AI system's use could plausibly lead to harm (psychological operations, social disruption, threats to national security), but no direct or indirect harm has been reported as having occurred yet.

"Disinformation" warning from the Directorate of Communications

2026-03-25
TRT haber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies to produce deepfake content used in disinformation campaigns that could manipulate societal sensitivities and threaten national security and social peace. However, it does not describe a realized harm or a specific incident where such AI-generated content has caused direct or indirect harm. Instead, it serves as a caution about the plausible future risk of AI-enabled disinformation causing harm. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to communities and national security if such disinformation spreads and influences public opinion or social stability.

DMM: Perception operations are being conducted with deepfake content

2026-03-25
TRT haber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake visual, audio, and video content to carry out disinformation operations that manipulate societal sensitivities and threaten national security and social unity. This manipulation is an ongoing harm to communities and social cohesion, fitting the definition of an AI Incident. The AI system's use in generating deepfakes directly leads to the harm described, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

AI-generated scandal video had sparked backlash! Disinformation warning from the DMM

2026-03-25
Türkiye
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to create deepfake videos that spread false and provocative content, which has already caused harm by manipulating public perception and threatening national security and social unity. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and a violation of societal trust. The official warning and advice are responses to an ongoing incident rather than a future risk or general information, confirming the classification as an AI Incident.

DMM: Beware of deepfake and manipulative content

2026-03-25
Aydinses
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technologies used to create deepfake content that is being deployed to manipulate public opinion and conduct perception operations, a direct use of AI systems causing harm to communities through misinformation and manipulation. Since the harm is occurring (increased disinformation and manipulation), this qualifies as an AI Incident under the framework, specifically harm to communities. The advisory and warnings are complementary, but the core event is the active use of AI-generated disinformation causing harm.