AI-Generated Deepfakes Fuel Misinformation During Middle East Conflict


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

During the recent American-Israeli attacks on Iran and subsequent reprisals, both sides and their supporters used AI-generated images and videos to spread false narratives online. These deepfakes and fabricated visuals, widely viewed on social media, have contributed to significant misinformation and confusion about the conflict.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating fabricated videos and images that actively spread false narratives about the conflict, causing misinformation and confusion among the public. This is a direct use of AI systems that harms communities by distorting information and undermining truthful communication. The widespread dissemination of these AI-generated materials has already occurred, meeting the criteria for an AI Incident. The article also notes that the platform X suspended revenue distribution for AI-generated conflict videos, a recognition of the harm caused. The event is therefore classified as an AI Incident on account of the realized harm from AI-generated disinformation.[AI generated]
AI principles
Transparency & explainability
Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest
Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


How the Middle East conflict is generating a wave of disinformation with fabricated videos and AI

2026-03-04
Correio do Povo

'War of narratives': Middle East conflict generates a wave of disinformation

2026-03-04
Folha - PE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images and videos being used to fabricate false narratives about military attacks, which are widely viewed and contribute to misinformation. This misinformation harms communities by distorting public understanding of the conflict, which fits the definition of harm to communities. The AI systems' outputs are pivotal in creating and spreading this disinformation. Hence, this qualifies as an AI Incident because the AI-generated content has directly led to harm through the spread of false information in a conflict context.

'War of narratives': Middle East conflict generates a wave of disinformation

2026-03-04
Jornal de Brasília
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false visual content that is widely disseminated and causes harm by misleading the public and exacerbating conflict narratives, which qualifies as harm to communities. The AI-generated misinformation is actively contributing to the harm, making this an AI Incident. The article details realized harm rather than just potential harm, and the AI's role is pivotal in creating and spreading the false content. Hence, the classification as AI Incident is appropriate.

'War of narratives': Middle East conflict generates a wave of disinformation

2026-03-04
GZH
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the generation of visual content (deepfakes) that misrepresent real-world events, leading to misinformation spreading rapidly online. This misinformation harms communities by distorting facts about the conflict, which is a form of harm to communities as defined. The article reports that these AI-generated materials have already been viewed millions of times and are actively contributing to confusion and disinformation, indicating realized harm rather than just potential harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

The disinformation war is also raging in the Middle East

2026-03-04
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI-generated images and videos that are disseminated to misinform and manipulate public perception during a military conflict. The harm is realized as the disinformation spreads widely, causing confusion, misinformation, and potential escalation of tensions, which constitutes harm to communities. The AI's role is pivotal in generating convincing fake content that fuels the disinformation campaign. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is occurring and directly linked to AI-generated content.

The disinformation war is also raging in the Middle East

2026-03-04
France 24
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating misleading visual content that is actively disseminated to distort public perception of military events, which constitutes harm to communities through misinformation. The AI's role is pivotal in producing convincing fake visuals that fuel the disinformation war. Since the harm (misinformation causing social disruption and confusion) is occurring and linked directly to AI-generated content, this qualifies as an AI Incident under the framework.

A disinformation war in the Middle East

2026-03-04
Le Journal de Québec
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate misleading and false visual content (images and videos) that are actively spreading misinformation about a military conflict. This misinformation is causing harm by confusing the public and distorting perceptions of the conflict, which is a form of harm to communities and the information environment. The AI-generated content is directly linked to the harm, as it is the vehicle for the disinformation. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, since the harm is ongoing and realized.

The disinformation war is also raging in the Middle East

2026-03-04
Orange Actualités
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions visuals generated by AI that are used to spread false information about military strikes and events, which have garnered millions of views and significantly contribute to confusion and misinformation online. This constitutes harm to communities as defined by the framework. The AI systems' use in generating misleading content directly leads to this harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is ongoing and realized.

'War of narratives': Disinformation surges as conflict roils Middle East

2026-03-04
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated combat visuals and synthetic content being used to spread false information about the conflict, which has garnered millions of views and contributed to confusion and misinformation. The AI system's outputs are directly linked to harm to communities by distorting facts and undermining authentic information during a war, fulfilling the criteria for an AI Incident. The involvement of AI in generating misleading content that is actively causing harm distinguishes this from a mere hazard or complementary information. The harm is realized and ongoing, not just potential.

'Narrative war': disinformation surges as conflict roils Middle East

2026-03-04
France 24
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of AI-generated combat visuals and videos that are false and misleading. The use of these AI-generated materials has directly led to harm by spreading disinformation that affects public understanding and potentially escalates conflict tensions, which is harm to communities. The article also mentions platform responses to mitigate this harm, but the primary focus is on the ongoing disinformation causing real-world harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

'Narrative war': disinformation surges as conflict roils Middle East

2026-03-04
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
AI-generated content is being used to spread false information that influences public perception and undermines social stability, which constitutes harm to communities. Since the AI-generated visuals are actively used to mislead and propagate disinformation in a conflict context, the harm is directly linked to the use of AI systems. This event therefore qualifies as an AI Incident.

'Narrative war': disinformation surges as conflict roils Middle East

2026-03-04
RTL Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated combat visuals and videos being used to spread false information about the conflict, which is causing real harm by confusing and misleading millions of people. The AI systems' outputs are directly contributing to the spread of disinformation, which harms communities and the information ecosystem. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm to communities through misinformation and manipulation of public narratives during an active conflict.