AI-Generated Deepfake Video Targets Egyptian Public Figure

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Egyptian media personality Amira Hassan, known as "Amira El Dahab," was the target of a reputational attack after a deepfake video fabricated with AI technology circulated online. She filed legal complaints against unknown perpetrators, stressed that the video is false, and called for strict legal action against AI-driven defamation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system to create a manipulated deepfake video that damages a person's reputation, which constitutes a violation of rights and harm to the individual. Because the AI-generated content was circulated and caused reputational harm, the event qualifies as an AI Incident under the criteria for violations of rights and harm to individuals and communities.[AI generated]
AI principles
Accountability, Transparency & explainability, Privacy & data governance, Respect of human rights, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Amira El Dahab responds to the leak of a controversial video with a Gulf man

2025-11-16
Dostor
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content (deepfake video) which is used to cause reputational harm, a form of harm to the individual. However, the article mainly reports on the denial of the video's authenticity and the legal measures being taken, without confirming that the harm has materialized or detailing the impact. Therefore, this is best classified as Complementary Information, as it provides context and response to a potential AI-related harm but does not document a confirmed AI Incident or a plausible future hazard on its own.

"AI was used to tarnish my reputation": Amira El Dahab responds to the circulation of an inappropriate video of her with a Gulf man

2025-11-16
صدى البلد
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create manipulated video content (deepfake) that harms the reputation of a person, which constitutes a violation of rights and harm to the individual. Since the AI-generated content has been circulated and caused reputational harm, this qualifies as an AI Incident. The involvement of AI in creating harmful fabricated content directly led to harm to the individual's reputation, fulfilling the criteria for an AI Incident under violations of rights and harm to communities.

Amira El Dahab's video with a Gulf man: the full story of the fabricated clip and engineer Amira Hassan's response

2025-11-16
صدى البلد
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create fabricated video content that harms the reputation of a person, which constitutes a violation of rights and harm to the individual. Since the AI-generated video has been disseminated and caused reputational harm, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations intended to protect fundamental rights (reputation, privacy).

The Amira El Dahab video with the Gulf man: her urgent response after it topped the trends

2025-11-16
صدى البلد
Why's our monitor labelling this an incident or hazard?
The video is described as AI-fabricated, implying the use of AI systems to generate false content. The harm is reputational damage to the individual, which is a form of harm to communities or individuals. Since the AI-generated video has already been disseminated and caused harm, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Amira El Dahab denies offensive video: "fabricated content meant to tarnish my reputation"

2025-11-16
العين الإخبارية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI techniques to fabricate a video that harms the reputation of a person, which is a direct harm caused by the malicious use of an AI system. The harm is realized (not just potential), and legal actions have been taken in response. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person (reputational harm and violation of rights).

Searches for the Amira El Dahab video on Terabox are trending: what is the full story behind the notorious clip?

2025-11-16
الصباح العربي
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to fabricate a video by replacing the person's face with another's, which is a clear use of AI systems (deepfake technology). The harm is realized as it damages the reputation and image of the individual, constituting a violation of personal rights and harm to community trust. The victim has taken legal steps, indicating the seriousness of the harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's malicious use.

After a long silence: Amira El Dahab's first response reveals the truth about the leaked video of her with a young Gulf man. See what she said

2025-11-16
الصباح العربي
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create a fake video that damages the reputation of Amira Al-Dhahab, which is a clear harm to an individual's rights and reputation. The harm is realized as the video has been circulated and caused reputational damage, prompting legal action. The AI system's use in generating the deepfake video directly led to this harm, fitting the definition of an AI Incident. The public figure's response and legal measures do not negate the fact that harm has occurred due to AI misuse.

Did users find the full Amira El Dahab video on Telegram? Amira El Dahab confirms it is fabricated and does not exist

2025-11-16
الصباح العربي
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video that harms a person's reputation. The harm (violation of rights and reputational damage) has already occurred due to the spread of the AI-generated video. The use of AI in fabricating the video and the resulting harm meets the criteria for an AI Incident, as the AI system's use directly led to harm. The legal response and public statements are complementary information but do not change the primary classification.

Who is Amira El Dahab's husband, and the latest developments in her legal case after the fabricated video

2025-11-17
المشهد اليمني
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-based deepfake technology to produce a fake video that damages the individual's reputation, which is a clear violation of personal rights and can be considered harm under the AI Incident definition. The harm has already occurred (the video was disseminated), and the AI system's role is pivotal in causing this harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Striking update: the Amira El Dahab video with the Gulf man takes center stage

2025-11-16
المشهد اليمني
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create fabricated video content that harms the reputation and social standing of a person. The harm is realized and ongoing, as the video has spread widely and caused significant distress and reputational damage. The AI system's use is malicious and directly linked to the harm described, fulfilling the criteria for an AI Incident under the framework. The event is not merely a potential risk or a complementary update but a concrete case of AI-enabled harm.

After the circulation of the Amira El Dahab video, Amira Hassan files a complaint against unknown persons over a video fabricated with artificial intelligence

2025-11-16
اخبار اليمن الان
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to fabricate a video falsely attributed to the individual, causing reputational damage. This is a direct harm to the person's rights and reputation, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The involvement of AI in creating the harmful content and the resulting legal complaint confirm this classification.

Amira El Dahab reports an AI-fabricated video to the internet crimes unit

2025-11-16
أخبار العصر
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the video is fabricated using AI techniques and that it has been circulated on social media, causing harm to the individual's reputation. This is a direct harm caused by the use of an AI system (deepfake or similar generative AI technology). The harm is realized, not just potential, and involves violation of rights and harm to the individual, meeting the criteria for an AI Incident.

Amira El Dahab responds to the video defaming her: fabricated with AI. "Don't worry, we are in…"

2025-11-16
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
An AI system was used to fabricate video content that harms the reputation of an individual, which constitutes a violation of personal rights and potentially defamation. The harm (damage to reputation) has already occurred due to the AI-generated content. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (reputational harm).

"I filed a report with the investigations bureau": Amira El Dahab says the offensive video was fabricated with artificial intelligence

2025-11-17
صحيفة عكاظ
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create manipulated video content that causes reputational harm, a form of harm to the individual. The harm has already occurred, as the video is circulating and damaging her reputation. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (defamation and reputational damage).

Over a video clip, Amira El Dahab tops "the trend" (details) | Al-Masry Al-Youm

2025-11-17
AL Masry Al Youm
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create a fabricated video that harms a person's reputation, which is a violation of personal rights. The AI system's role in generating the fake content is central to the harm caused. The article explicitly states the video is AI-generated and that legal measures are being taken, confirming the harm has occurred. Hence, it meets the criteria for an AI Incident due to realized harm linked to AI use.

The Amira El Dahab clip causes an uproar online: an official complaint and urgent investigation against those spreading it

2025-11-17
الصباح العربي
Why's our monitor labelling this an incident or hazard?
The videos are explicitly stated to be produced using advanced forgery techniques, which reasonably implies AI-based deepfake technology. The harm is reputational damage and defamation, which falls under harm to communities or individuals. The AI system's use in generating the content directly leads to this harm. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by AI-generated content.

The Amira El Dahab video: real or artificial intelligence?

2025-11-17
قناه السومرية العراقية
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to create a fabricated video (deepfake) that falsely portrays a person, causing reputational harm and a violation of rights. The harm is realized and direct, as the video has spread widely and caused social controversy. The use of AI in fabricating the video is central to the incident. Hence, it meets the criteria for an AI Incident due to violation of rights and harm to the individual caused by AI-generated content.

Amira El Dahab files a complaint after the circulation of a video of her fabricated using artificial intelligence

2025-11-17
صحيفة صدى الالكترونية
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated deepfake content used to damage the reputation of Amira El-Dahab, which is a direct harm to her personal and professional rights. The use of AI in creating harmful synthetic media that leads to reputational damage fits the definition of an AI Incident, as it involves violations of rights and harm to a person caused by the use of an AI system.

The Amira El Dahab video with the Gulf man: a half-hour clip that shook Egyptian social media

2025-11-17
المشهد اليمني
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to create a fabricated video (deepfake) that has been disseminated widely, causing reputational damage and social harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (reputational harm) and harm to communities (misinformation and social disruption). The article confirms the video is AI-generated and fake, and the harm is occurring, not just potential. Therefore, this is an AI Incident rather than a hazard or complementary information.

Urgent: Amira El Dahab exposes the dangerous AI-enabled plot against her... and the law steps in!

2025-11-17
يمن برس
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation technology) to produce harmful fake videos that directly damage the reputation and livelihoods of targeted women. This is a clear case of an AI Incident because the AI system's use has directly led to harm to individuals and communities (reputational harm, economic harm, psychological harm). The article describes actual harm occurring, not just potential harm, and legal responses are a reaction to this harm. Therefore, it meets the criteria for an AI Incident.

21+: the full video clip of Amira El Dahab and a Gulf man

2025-11-17
المشهد اليمني
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used maliciously to create a fabricated video that harms a person's reputation. This constitutes a violation of rights and harm to the individual and community by spreading false and damaging content. Since the harm has already occurred and legal actions are underway, this qualifies as an AI Incident rather than a hazard or complementary information.

Amira El Dahab praises the Interior Ministry's swift action on the fabricated video

2025-11-18
Dostor
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create manipulated videos that have caused reputational harm to a person, which is a clear harm to communities and a violation of rights. The AI system's use directly led to this harm. The legal response is a complementary action but does not negate the fact that harm occurred. Therefore, this qualifies as an AI Incident.

The Amira El Dahab video with a Gulf man: a new trend across all search engines, and the woman in the video clarifies the truth | Crime | Al-Zaman

2025-11-18
elzmannews.com
Why's our monitor labelling this an incident or hazard?
The video is described as a deepfake, which is an AI-generated manipulated video. The use of AI to create such fabricated content that harms a person's reputation and causes social and legal consequences fits the definition of an AI Incident, as it involves harm to the individual (reputational harm and potential violation of rights) directly linked to the AI system's use (deepfake generation).

After the publication of fabricated videos, Amira El Dahab responds firmly and thanks the security services

2025-11-19
الصباح العربي
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fabricated videos that damage the reputation of Amira El-Dahab. This constitutes a direct harm to the individual (harm to reputation and potential violation of personal rights). The involvement of AI in generating the fake content and the resulting legal actions confirm that this is an AI Incident under the framework, as the AI system's use has directly led to harm.