AI-Generated Deepfakes Target Journalists, Especially Women, Worldwide

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Reporters Without Borders (RSF) documented at least 100 cases across 27 countries where AI-generated deepfake videos and audio targeted journalists between December 2023 and December 2025. These deepfakes caused defamation, harassment, and physical threats, disproportionately affecting women journalists (74% of cases), and fueling global disinformation and gender-based attacks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems used to generate deepfake videos and audio impersonating journalists, which have directly caused harm such as harassment, disinformation, threats to safety, and reputational damage. The harms include violations of rights (e.g., personal dignity, safety), harm to communities (through misinformation and political manipulation), and psychological harm to individuals. The report documents actual incidents and ongoing impacts, not just potential risks, fulfilling the criteria for an AI Incident. The AI system's use in creating and spreading deepfakes is central to the harm described.[AI generated]
AI principles
Respect of human rights; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Women; Workers

Harm types
Reputational; Psychological; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

RSF: 74 Percent of Deepfake Attacks Target Women Journalists

2026-02-09
Haberler
100 "deepfake" examples from RSF: A warning to journalists about the threats!

2026-02-09
birgun.net
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI deepfake systems to create manipulated videos and audio that have caused direct harm to journalists, including defamation, harassment, and physical danger. These harms fall under violations of human rights and harm to communities. The involvement of AI systems in generating deepfakes is clear and central to the harms described. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
RSF: 74 percent of deepfake attacks target women journalists

2026-02-09
T24
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos and audio impersonations of journalists, which have directly led to harms such as harassment, disinformation, gender-based violence, and threats to physical safety. The article documents actual cases and impacts, not just potential risks. The harms include violations of human rights and harm to communities through misinformation and targeted attacks. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
RSF statement on deepfake attacks: They have become a global threat to journalists - Evrensel

2026-02-09
Yeni Evrensel Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create synthetic media that impersonates journalists, causing direct harm such as harassment, defamation, and threats to physical safety, as well as harm to communities through misinformation and political destabilization. The harms are realized and documented, including specific cases of journalists affected and the societal impact described. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to violations of rights and harm to communities.
RSF: Deepfake videos threaten journalists

2026-02-09
Bianet - Bağımsız İletişim Ağı
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to generate synthetic videos and audio that impersonate real journalists. These AI-generated deepfakes have directly led to harms such as defamation, threats to personal safety, misinformation, and disruption of public trust in information sources. The harms are realized and documented, including legal actions and social consequences. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused significant harm to individuals and communities.