AI-Generated Deepfake Video Used in Extortion Attempt Against Arab Artist


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Iraqi-Tunisian artist Zahraa Ben Mime was targeted in an extortion attempt involving an AI-generated fake explicit video. An unknown individual demanded $50,000 to prevent the video's release, despite no such real content existing. The incident highlights the growing misuse of AI for digital extortion and privacy violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI technology was used to create a fabricated video for extortion purposes. The artist was threatened with harm (release of fake content) to coerce payment, which is a direct harm to her privacy and security. The AI system's malicious use is central to the incident. Even though the artist has not paid and no video exists, the extortion attempt itself is a realized harm event involving AI misuse. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Safety; Transparency & explainability

Industries
Digital security; Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Psychological; Reputational; Human or fundamental rights; Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Unknown individual blackmails Arab artist with a private video, and her response sets social media ablaze

2026-01-02
24.ae
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI technology was used to create a fabricated video for extortion purposes. The artist was threatened with harm (release of fake content) to coerce payment, which is a direct harm to her privacy and security. The AI system's malicious use is central to the incident. Even though the artist has not paid and no video exists, the extortion attempt itself is a realized harm event involving AI misuse. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

"فيديو فاضح" بـ50 ألف دولار.. فنانة شابة تتعرض للابتزاز

2026-01-03
Al Arabiya
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create a fabricated explicit video (deepfake) used for extortion, which is a direct misuse of AI technology causing harm to the victim's privacy and digital security. The threat and attempted extortion represent a violation of personal rights and a clear harm linked to the AI system's use. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of privacy and threat to personal security).

Arab artist reveals she was targeted by blackmail: video

2026-01-03
slaati.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the extortion attempt involved a fabricated video created using AI techniques, indicating the use of an AI system in the malicious act. The harm is psychological and reputational, stemming from the AI-generated fake video used to threaten the artist. This fits the definition of an AI Incident because the AI system's use directly led to a harm scenario (attempted extortion and threat), even though the content is fabricated and no real video exists. The event is not merely a potential risk but an actual incident of harm involving AI misuse.

Arab artist blackmailed with fake AI-generated explicit video

2026-01-03
Okaz
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create a fabricated explicit video (deepfake) used for extortion, which is a direct harm to the individual's privacy and security. The AI system's malicious use has led to an incident of attempted financial harm and psychological distress. This fits the definition of an AI Incident as the AI system's use has directly led to harm (attempted extortion and privacy violation).

Targeted by online blackmail for $50,000: who is the artist Zahraa Ben Mime? - El-Aosboa

2026-01-03
El-Aosboa
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create a fabricated video used in an extortion attempt against the artist. The AI system's use directly led to a violation of the victim's privacy and a threat to her security, which is a clear harm to a person. The event involves the malicious use of AI-generated content, fulfilling the criteria for an AI Incident as per the definitions provided.

Zahraa Ben Mime blackmailed: $50,000 demanded over a fabricated video

2026-01-03
Al-Ain News
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI techniques to fabricate a video falsely depicting the individual, which was then used for extortion. This constitutes direct harm through malicious use of AI-generated content, fulfilling the criteria for an AI Incident. The harm includes violation of privacy, reputational damage, and attempted financial extortion, all linked directly to the AI system's misuse. Therefore, this is classified as an AI Incident.

Artist Zahraa Ben Mime targeted by $50,000 online blackmail

2026-01-03
Mankish Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create a fake compromising video, which was then used to threaten and extort the victim. This is a clear case where the AI system's outputs have directly led to harm (psychological, privacy violation, and potential financial harm). Therefore, it meets the criteria for an AI Incident as the AI system's use has directly caused harm to a person.

Iraqi artist blackmailed with an "explicit video"

2026-01-03
Sawt Beirut International
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions that the threatening video was created using AI, indicating the involvement of an AI system in generating harmful content. The use of AI-generated fake videos for blackmail is a direct violation of the individual's rights and privacy, which fits the definition of an AI Incident under violations of human rights and harm to individuals. The harm is realized as the victim is being extorted and threatened, even though the video is fabricated. Therefore, this qualifies as an AI Incident.

Unknown individual blackmails Arab artist with a private video

2026-01-04
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the blackmailer used AI techniques to fabricate a video, which is a direct use of an AI system leading to harm (attempted extortion and violation of privacy). This fits the definition of an AI Incident because the AI system's use has directly led to a significant harm to an individual (harm to personal security and privacy). The harm is realized in the form of attempted blackmail and threat, not just a potential risk, so it is not merely a hazard or complementary information.

"Pay up or I'll expose you": the story of the fabricated video targeting artist Zahraa Ben Mime

2026-01-04
sabaharabi.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI techniques to fabricate a video that does not reflect reality, which is then used to extort money from the artist. This is a direct harm caused by the misuse of an AI system (deepfake generation), fulfilling the criteria of an AI Incident due to violation of rights and harm to the individual. The harm is realized as the artist is being threatened and extorted based on the AI-generated content.