AI-Generated Image Shared as War Propaganda in Israel-Hamas Conflict

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Conservative commentator Ben Shapiro shared an AI-generated image purporting to show a 'burnt Jewish baby' following a Hamas attack on Israel. The image, confirmed as fake by detection tools, was used to promote a misleading narrative, fueling misinformation and exacerbating tensions during the conflict. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system to generate a graphic, misleading image that was disseminated publicly during a conflict. By spreading misinformation and inflaming tensions, this AI-generated content harms communities and potentially violates rights. Since the image has been shared and is influencing public perception, this is a realized harm linked to the use of an AI system. [AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Civil society, Other

Harm types
Psychological, Reputational, Public interest, Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Fact Check: Ben Shapiro Shares AI-Generated Image Of 'Burnt Baby' Amid Israel-Hamas War

2023-10-13
TimesNow
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a graphic and misleading image that is being disseminated publicly during a conflict. This AI-generated content can cause harm to communities by spreading misinformation and inflaming tensions, which constitutes harm to communities and potentially violates rights. Since the AI-generated image has been shared and is influencing public perception, this is a realized harm linked to the use of an AI system.
Is the Ben Shapiro burnt baby photo real or fake? Origin explored as host comes under fire over AI generated claim

2023-10-13
Sportskeeda
Why's our monitor labelling this an incident or hazard?
The event centers on the use and dissemination of a graphic image that some allege to be AI-generated, but which is also claimed to be authentic by official sources. While AI detection tools are discussed, the core issue is about misinformation and the authenticity of the image rather than a malfunction or misuse of an AI system causing direct or indirect harm. The AI system's role is limited to detection tools and alleged generation of images, but no direct or indirect harm caused by the AI system itself is established. The event primarily provides context and updates on the controversy and public discourse around AI-generated content and misinformation, fitting the definition of Complementary Information rather than an Incident or Hazard.
Israel is using AI in promoting its war on Gaza! | Al Bawaba

2023-10-15
Al Bawaba
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated images being used to promote false or misleading narratives about war crimes in the Israel-Gaza conflict. The AI system's outputs (fake images) have directly led to misinformation and social harm by misleading the public and exacerbating conflict tensions. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and manipulation of public perception in a sensitive and violent context.
Ben Shapiro slammed for sharing 'AI image of Israeli baby'

2023-10-13
The New Arab
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fake image that was then shared as if it were real, leading to misinformation about a sensitive and violent conflict. This misinformation can cause harm to communities by exacerbating tensions and spreading false narratives. The AI-generated content's role in this misinformation is direct and pivotal, meeting the criteria for an AI Incident due to harm to communities.