AI-Generated Deepfake of Bella Hadid Spreads Misinformation on Israel-Hamas Conflict


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated deepfake video falsely depicting Palestinian-American model Bella Hadid expressing support for Israel went viral, amassing around 30 million views on X (formerly Twitter). The manipulated video, created by a known producer, spread misinformation and caused public confusion and controversy, highlighting the social harm of AI-driven disinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system used to generate a manipulated video (deepfake) that falsely attributes statements to a public figure, misleading the public and potentially exacerbating social tensions. The harm is realized as misinformation spreads widely, impacting communities and violating rights to truthful information. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated misinformation.[AI generated]
AI principles
Transparency & explainability; Accountability; Robustness & digital security; Safety; Respect of human rights; Democracy & human autonomy; Privacy & data governance

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Other; General public

Harm types
Reputational; Public interest; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Israel-Hamas war: An AI-doctored video of Bella Hadid makes it appear she supports Israel

2023-11-03
Ouest France

"Aux côtés d'Israël": une fausse vidéo de Bella Hadid largement relayée sur les réseaux sociaux

2023-11-02
BFMTV
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate manipulated synthetic media (a deepfake video). The harm is indirect but significant: misinformation and reputational damage, which can be considered harm to communities and a violation of rights (e.g., the right to truthful information). Since the manipulated video has been widely disseminated and could mislead many, this constitutes an AI Incident under the framework, as the AI system's use has directly led to harm in the form of misinformation and social harm. The presence of platform warnings and content moderation efforts further confirms recognition of the harm potential. Therefore, this event is classified as an AI Incident.

Bella Hadid apologizes to Israel: beware of this AI-generated video

2023-11-02
Businessnews.com.tn | Journal électronique de Tunisie
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to generate a deepfake video that falsely portrays Bella Hadid making statements she never made. This AI-generated misinformation has been widely disseminated, causing confusion and controversy, which constitutes harm to communities and potentially violates rights related to truthful information and reputation. The AI system's use directly led to this harm, meeting the criteria for an AI Incident. The article does not merely warn about potential harm but documents actual harm caused by the AI-generated content.

An AI-doctored reaction by Bella Hadid on Israel racks up millions of views on social media

2023-11-02
Numerama.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a manipulated video (deepfake) that falsely attributes statements to a public figure, leading to misinformation and potential social harm. The AI's use in fabricating and spreading false content that influences public perception and discourse constitutes an AI Incident because it directly leads to harm to communities through misinformation and manipulation. The event involves the use and misuse of AI-generated content causing realized harm, not just potential harm.

"Aux côtés d'Israël, contre la terreur" : attention à cette vidéo virale de Bella Hadid, elle a été trafiquée par l'IA

2023-11-02
midilibre.fr
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to create a deepfake video, which is a form of AI-generated manipulated content. The video has been widely viewed and could plausibly lead to harm such as misinformation, reputational damage, and social unrest, especially given the sensitive context of the Israel-Hamas conflict. Since the harm is potential and not confirmed as having occurred, this qualifies as an AI Hazard rather than an AI Incident. The article also mentions platform policies against such synthetic media, indicating awareness of the risk. Therefore, the classification is AI Hazard.

FACT CHECK: Video of Bella Hadid supporting Israel is AI-generated

2023-11-06
Rappler
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a deepfake video, which is a form of AI-generated manipulated content. The deepfake has been widely viewed and spreads false claims that could exacerbate social and political tensions, thus harming communities. The harm is realized as the misinformation is actively disseminated and believed by many, fulfilling the criteria for an AI Incident under harm to communities. The AI system's use in creating and spreading the deepfake is directly linked to this harm.

Deepfake of Bella Hadid misrepresents her statements on Israel

2023-11-03
Fact Check
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically AI-based deepfake generation technology, used to create manipulated audiovisual content. The misuse of this AI system has directly led to harm by spreading false information that misrepresents a public figure's stance on a sensitive geopolitical issue, which can exacerbate social discord and misinformation. Therefore, this qualifies as an AI Incident due to harm to communities through misinformation and manipulation of public discourse.