AI-Generated Deepfakes Fuel Misinformation During Middle East Conflict


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

During the recent American-Israeli attacks on Iran and subsequent reprisals, both sides and their supporters used AI-generated images and videos to spread false narratives online. These deepfakes and fabricated visuals, widely viewed on social media, have contributed to significant misinformation and confusion about the conflict.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating fabricated videos and images that are actively spreading false narratives about the conflict, leading to misinformation and confusion among the public. This is a direct use of AI systems causing harm to communities by distorting information and undermining truthful communication. The widespread dissemination of these AI-generated false materials has already occurred, fulfilling the criteria for an AI Incident. The article also mentions the platform X taking measures to suspend revenue distribution for AI-generated conflict videos, indicating recognition of the harm caused. Therefore, this event is best classified as an AI Incident due to the realized harm from AI-generated disinformation.[AI generated]
AI principles
Transparency & explainability
Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest
Psychological

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard


How the Middle East conflict is driving a wave of disinformation with fabricated videos and AI

2026-03-04
Correio do povo

'War of narratives': Middle East conflict drives a wave of disinformation

2026-03-04
Folha - PE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images and videos being used to fabricate false narratives about military attacks, which are widely viewed and contribute to misinformation. This misinformation harms communities by distorting public understanding of the conflict, which fits the definition of harm to communities. The AI systems' outputs are pivotal in creating and spreading this disinformation. Hence, this qualifies as an AI Incident because the AI-generated content has directly led to harm through the spread of false information in a conflict context.

'War of narratives': Middle East conflict drives a wave of disinformation - Jornal de Brasília

2026-03-04
Jornal de Brasília
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false visual content that is widely disseminated and causes harm by misleading the public and exacerbating conflict narratives, which qualifies as harm to communities. The AI-generated misinformation is actively contributing to the harm, making this an AI Incident. The article details realized harm rather than just potential harm, and the AI's role is pivotal in creating and spreading the false content. Hence, the classification as AI Incident is appropriate.

'War of narratives': Middle East conflict drives a wave of disinformation

2026-03-04
GZH
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the generation of visual content (deepfakes) that misrepresent real-world events, leading to misinformation spreading rapidly online. This misinformation harms communities by distorting facts about the conflict, which is a form of harm to communities as defined. The article reports that these AI-generated materials have already been viewed millions of times and are actively contributing to confusion and disinformation, indicating realized harm rather than just potential harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

The disinformation war is also raging in the Middle East

2026-03-04
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI-generated images and videos that are disseminated to misinform and manipulate public perception during a military conflict. The harm is realized as the disinformation spreads widely, causing confusion, misinformation, and potential escalation of tensions, which constitutes harm to communities. The AI's role is pivotal in generating convincing fake content that fuels the disinformation campaign. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is occurring and directly linked to AI-generated content.

The disinformation war is also raging in the Middle East

2026-03-04
France 24
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating misleading visual content that is actively disseminated to distort public perception of military events, which constitutes harm to communities through misinformation. The AI's role is pivotal in producing convincing fake visuals that fuel the disinformation war. Since the harm (misinformation causing social disruption and confusion) is occurring and linked directly to AI-generated content, this qualifies as an AI Incident under the framework.

A disinformation war in the Middle East

2026-03-04
Le Journal de Québec
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate misleading and false visual content (images and videos) that are actively spreading misinformation about a military conflict. This misinformation is causing harm by confusing the public and distorting perceptions of the conflict, which is a form of harm to communities and the information environment. The AI-generated content is directly linked to the harm, as it is the vehicle for the disinformation. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, since the harm is ongoing and realized.

The disinformation war is also raging in the Middle East

2026-03-04
Orange Actualités
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions visuals generated by AI that are used to spread false information about military strikes and events, which have garnered millions of views and significantly contribute to confusion and misinformation online. This constitutes harm to communities as defined by the framework. The AI systems' use in generating misleading content directly leads to this harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is ongoing and realized.

Disinformation in the Middle East: When AI and 'war slop' saturate the platforms - ZDNET

2026-03-05
ZDNet
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and disseminate false and misleading content (deepfakes, synthetic images, fake accounts) that is actively causing harm by confusing and misleading the public during a military conflict. The article explicitly states that these AI-generated contents are saturating platforms and evading verification tools, leading to misinformation spread. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and disinformation. The platform's response (suspending revenue sharing for AI-generated conflict videos without disclosure) is a complementary governance action but does not change the classification of the core event as an AI Incident.

'War of narratives': Disinformation surges as conflict roils Middle East

2026-03-04
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated combat visuals and synthetic content being used to spread false information about the conflict, which has garnered millions of views and contributed to confusion and misinformation. The AI system's outputs are directly linked to harm to communities by distorting facts and undermining authentic information during a war, fulfilling the criteria for an AI Incident. The involvement of AI in generating misleading content that is actively causing harm distinguishes this from a mere hazard or complementary information. The harm is realized and ongoing, not just potential.

'Narrative war': disinformation surges as conflict roils Middle East

2026-03-04
France 24
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of AI-generated combat visuals and videos that are false and misleading. The use of these AI-generated materials has directly led to harm by spreading disinformation that affects public understanding and potentially escalates conflict tensions, which is harm to communities. The article also mentions platform responses to mitigate this harm, but the primary focus is on the ongoing disinformation causing real-world harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

'Narrative war': disinformation surges as conflict roils Middle East

2026-03-04
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The involvement of AI-generated content in spreading false information that shapes public perception and undermines social stability constitutes harm to communities. Since the AI-generated visuals are actively used to mislead and propagate disinformation in a conflict context, the harm is directly linked to the use of AI systems. Therefore, this event qualifies as an AI Incident.

'Narrative war': disinformation surges as conflict roils Middle East

2026-03-04
RTL Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated combat visuals and videos being used to spread false information about the conflict, which is causing real harm by confusing and misleading millions of people. The AI systems' outputs are directly contributing to the spread of disinformation, which harms communities and the information ecosystem. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm to communities through misinformation and manipulation of public narratives during an active conflict.

Disinformation tactics thrive online as Iran war grips Middle East

2026-03-06
AW
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of AI-generated combat visuals and manipulated images used in disinformation campaigns. The harm is realized as these false narratives and AI-generated content mislead millions, causing informational harm to communities and undermining the right to access authentic information during a critical conflict. The article also notes the platform's response to mitigate this harm, confirming the significance of AI's role in the incident. Therefore, this qualifies as an AI Incident due to the direct and significant harm caused by AI-generated disinformation.

Iran: AI and algorithms drive disinformation about the war

2026-03-06
SAPO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems that generate false images and videos, together with algorithms that prioritize polarizing and sensationalist content, directly leading to harm to communities through misinformation and social disruption. The disinformation is actively occurring and causing harm, meeting the criteria for an AI Incident rather than a hazard or complementary information. Therefore, this event is classified as an AI Incident due to the realized harm from AI-enabled disinformation.

Iran: Conflict triggers a "war of narratives" with AI images

2026-03-04
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating misleading images and videos that have been actively used to spread false information about the conflict, leading to harm to communities through misinformation and confusion. The article explicitly mentions AI-generated content causing millions of views and contributing to a 'war of narratives' online, which directly harms the public's right to accurate information and fuels conflict-related misinformation. This meets the criteria for an AI Incident as the AI system's use has directly led to significant harm to communities.

Iran: AI and algorithms drive a wave of disinformation about the conflict

2026-03-06
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating and amplifying false and misleading content about a military conflict, which has been widely disseminated and has caused harm to communities by spreading disinformation and potentially escalating tensions. The AI's role in producing fabricated images and videos is central to the harm, fulfilling the criteria for an AI Incident. The article also mentions the suspension of revenue sharing for AI-generated conflict videos, indicating recognition of the harm caused. Hence, the event is classified as an AI Incident.

Middle East conflict triggers a "war of narratives" with AI-generated images

2026-03-04
Correio da Manha
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false images that are actively disseminated to mislead and manipulate public perception during a conflict, which constitutes harm to communities through misinformation. This meets the definition of an AI Incident because the AI-generated content has directly led to significant harm by spreading false narratives and confusion in a sensitive geopolitical context. The article also mentions platform responses to mitigate this harm, but the primary focus is on the realized harm caused by AI-generated disinformation.

AI and algorithms drive a wave of disinformation about the Middle East conflict

2026-03-06
DNOTICIAS.PT
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images and videos being used to spread false narratives about the conflict, with millions of views on social media platforms. The AI systems (algorithms and generative AI) are directly involved in creating and amplifying disinformation, which harms communities by misleading them and potentially escalating tensions. This constitutes an AI Incident because the AI system's use has directly led to harm to communities through misinformation dissemination.

Iran: Conflict triggers a "war of narratives" with AI-generated images

2026-03-04
ECO
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating misleading images that are actively used to spread disinformation in the context of a military conflict. This disinformation causes harm to communities by distorting public perception and potentially escalating tensions. The AI-generated content is not hypothetical or potential but is currently being disseminated and causing real-world informational harm. Therefore, this qualifies as an AI Incident due to the direct role of AI-generated content in causing harm through misinformation and manipulation of public narratives during a conflict.

Not every blast is real: How AI-generated videos are poisoning the Iran war narrative

2026-03-07
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos being used by state-linked actors to spread false narratives about the Iran war. The harm is realized as misinformation is actively disseminated, misleading the public and distorting the conflict narrative. This constitutes harm to communities and a violation of rights related to truthful information. The AI system's role in generating these videos is pivotal to the incident, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a clear case of AI misuse causing harm.

State actors are behind much of the visual misinformation about the Iran war

2026-03-07
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fabricated videos that are disseminated by state actors as part of influence campaigns. This use of AI-generated misinformation directly harms communities by spreading false narratives about the war, which can exacerbate conflict, mislead populations, and undermine social stability. The harm is realized and ongoing, meeting the criteria for an AI Incident due to violations of informational integrity and harm to communities. Therefore, this event is classified as an AI Incident.

State Actors Are Behind Much of the Visual Misinformation About the Iran War

2026-03-07
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos being used by state actors to spread false information about the Iran war, which constitutes harm to communities through misinformation and propaganda. The AI systems' outputs are central to the incident, as they create fabricated visual content that misleads the public and exacerbates conflict-related tensions. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm to communities by spreading misinformation and influencing perceptions during a conflict.

State actors are behind much of the visual misinformation about the Iran war

2026-03-07
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos being used by state actors to spread false information about the Iran war, which is causing confusion and misinformation among the public. This misinformation campaign is ongoing and actively influencing perceptions, thus directly leading to harm to communities through the spread of false narratives and propaganda. The involvement of AI in generating fake videos is central to the harm described. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated misinformation in a conflict context.

AI behind much of visual war misinformation

2026-03-07
Arab News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false videos that mislead the public about war events, constituting misinformation that harms communities by spreading false narratives. This fits the definition of an AI Incident because the AI's use has directly led to harm to communities through misinformation. The article also mentions platform responses, but its primary focus is on the harm caused by AI-generated misinformation.

State actors are behind much of the visual misinformation about the Iran war

2026-03-07
WHAS 11 Louisville
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos used in misinformation campaigns, which are actively spreading false narratives and causing harm to communities by undermining their sense of safety and influencing behavior. The involvement of AI in generating realistic fake videos and the use of AI in coordinated influence operations directly leads to harm as defined by the framework. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated misinformation and manipulation.

State actors are behind much of the visual misinformation about the Iran war

2026-03-07
The Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos being used by state actors and others to spread false information about the Iran war. This use of AI-generated content has directly led to harm by misleading the public and amplifying propaganda, which constitutes harm to communities. The involvement of AI in generating fabricated videos that are widely shared and believed fulfills the criteria for an AI Incident, as the harm is realized and the AI system's role is pivotal in producing and spreading misinformation.

State actors are behind much of the visual misinformation about the Iran war

2026-03-08
Channel 3000
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate false visual content that is disseminated by state actors and others to misinform the public about a war, which constitutes harm to communities through misinformation and manipulation. The AI-generated videos are central to the incident, directly causing the spread of false narratives and undermining trust and safety. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs in the context of information operations and propaganda during an armed conflict.

State actors are behind much of the visual misinformation about the Iran war

2026-03-08
Tucson
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fabricated videos that are widely disseminated to mislead the public about the Iran war. The harm is realized as misinformation and disinformation campaigns are actively influencing communities and public understanding, which is a form of harm to communities and potentially a violation of rights to accurate information. The AI-generated content is central to the incident, not merely background or potential future harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Trump Warns Media Over 'Fake' War Reports, Claims 'Iran Using AI To Fabricate Attacks On US Assets'

2026-03-16
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos used by Iran to fabricate false reports, indicating AI system involvement in disinformation. However, there is no evidence presented that these AI-generated materials have directly or indirectly caused harm such as injury, rights violations, or disruption. The focus is on allegations and political commentary about misinformation and media credibility, with no confirmed incident of harm or disruption. The mention of regulatory review by the FCC chairman is a governance response, fitting the definition of Complementary Information. Therefore, the event does not meet the criteria for an AI Incident or AI Hazard but provides important context and updates on AI-related disinformation and societal responses.

Trump Unloads On 'Corrupt Media Outlets' That Parrot Iranian Misinformation

2026-03-16
The Daily Wire
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content being used by state-linked actors to spread false information and propaganda, which is actively misleading the public and media outlets. This misinformation is causing harm to communities by creating confusion and distrust during a conflict, fulfilling the harm criteria under (d) harm to communities. The AI system's role in generating and amplifying this disinformation is pivotal and directly linked to the harm. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

Trump threatens treason charges over US media coverage of Iran war

2026-03-16
Middle East Eye
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated misinformation being spread, which is causing harm by misleading the public and potentially influencing perceptions of a conflict. This fits the definition of an AI Incident because the AI system's use in generating false content has directly led to harm to communities through misinformation. The political accusations and calls for treason charges are responses to this harm but do not change the classification. Therefore, this event is best classified as an AI Incident.

Trump Rages Media Should Be Charged With 'TREASON' Over Iran-Peddled AI Clips

2026-03-16
Mediaite
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake videos used as disinformation by Iran, which have been disseminated by media outlets. This use of AI-generated content to spread false information about military events constitutes harm to communities by misleading the public and potentially affecting national security perceptions. The AI system's use in generating and spreading false content directly leads to harm as defined by the framework. Although the article focuses on political accusations, the underlying AI-generated disinformation is a realized harm, not just a potential one, thus qualifying as an AI Incident.

War: 'They're really good' - Trump names battles Iran is winning

2026-03-16
Daily Post Nigeria
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence by Iran to create and spread misleading images and videos as part of media and propaganda battles. This use of AI directly leads to harm by spreading disinformation, which affects communities and public understanding of the conflict. The harm is realized, not just potential, as the disinformation is actively being disseminated and influencing narratives. Hence, it meets the criteria for an AI Incident under the harm to communities category.

'Iran master of media manipulation': Trump accuses Tehran of AI-driven 'false information'

2026-03-16
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated disinformation allegedly used by Iran, indicating the involvement of AI systems in generating false content. However, the piece is primarily a political statement and accusation without concrete evidence or reports of actual harm caused by the AI-generated disinformation. There is no direct or indirect confirmation of injury, rights violations, disruption, or other harms resulting from the AI use described. The focus is on the discourse and claims about AI's role in media manipulation, which fits the definition of Complementary Information as it provides context and societal response rather than reporting a new AI Incident or Hazard.

Trump Accuses Iran of Using AI to Spread War Disinformation

2026-03-16
Deccan Chronicle
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of generating disinformation, which is a recognized AI-related risk. However, the content is primarily a political accusation without verified evidence of realized harm or direct causation of harm by AI systems. Since the article discusses the potential use of AI-generated disinformation and its amplification by media, it points to a plausible risk of harm to communities through misinformation, but does not document an actual incident. Therefore, it fits the definition of an AI Hazard, as the use of AI for disinformation could plausibly lead to harm, but no confirmed AI Incident is described.

Trump Blasts Iran For AI Propaganda, Says Regime Is 'Being Annihilated'

2026-03-16
matzav.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated disinformation used by Iran as a propaganda tool, which fits the definition of an AI system's use potentially leading to harm (misinformation affecting public perception and possibly social stability). However, the article focuses on allegations and claims without confirming that this disinformation has caused direct or indirect harm yet. The harm is plausible given the nature of AI-generated fake content in conflict scenarios, but no concrete incident of harm is described. Thus, the event is best classified as an AI Hazard, reflecting the credible risk of AI-enabled propaganda causing harm in the future.

'Iran Master of Media Manipulation': Donald Trump Accuses Tehran of AI-Driven 'False Information' on US Military Assets

2026-03-16
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of alleged use for generating disinformation, which could plausibly lead to harm such as misinformation affecting public perception and trust. However, the article primarily reports accusations without confirming that AI-generated disinformation has actually caused harm or disruption. Therefore, this situation fits the definition of an AI Hazard, as it describes a plausible risk of AI-driven disinformation being used as a weapon, but does not document a realized AI Incident.

'Iran Master of Media Manipulation': Trump Accuses Tehran of AI-driven 'False Information'

2026-03-16
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of AI-generated disinformation, which is a recognized risk. However, the article primarily presents accusations and political commentary without concrete evidence or examples of realized harm caused by AI-generated disinformation. Since the harm is potential and the article focuses on the possibility and political implications rather than confirmed incidents, this fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it does not provide updates or responses to a known incident, nor is it unrelated since AI-generated disinformation is central to the narrative.

A new front opens in the Middle East war: The battle over what is real

2026-03-16
The Statesman
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used by Iran to generate fake videos and images about military actions, which are misleading and false. This use of AI-generated disinformation is causing harm by spreading false narratives about the conflict, which affects the information environment and communities' perception of reality. The harm is realized, not just potential, as these misleading visuals are actively circulating and influencing public opinion. Hence, this fits the definition of an AI Incident involving violations of rights to truthful information and harm to communities through misinformation.

President Trump Calls Out Iran For Making FAKE AI War Videos - Conservative Angle

2026-03-16
Brigitte Gabriel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake videos used as disinformation by Iran, which is causing harm by misleading the public and manipulating perceptions of military strength. This is a direct AI Incident because the AI system's use in creating and spreading false content has led to harm to communities through misinformation. The FCC's potential regulatory actions are complementary information but do not negate the primary classification as an AI Incident due to realized harm from AI-generated disinformation.
Trump Warns that Iran Is Using AI to Create 'Disinformation Weapons'

2026-03-16
Breitbart
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated disinformation being used by Iran to deceive and demoralize populations, with concrete examples of fake videos and images that have been widely viewed and have influenced public perception. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and manipulation. The harm is realized, not just potential, and the AI system's role is pivotal in creating convincing synthetic content that spreads false narratives. The article also references law enforcement actions against individuals spreading AI-generated false content, further confirming the harm has materialized.
Trump Accuses Iran of Using AI to Create and Spread Fake News About the War

2026-03-16
InfoMoney
Why's our monitor labelling this an incident or hazard?
The article involves an AI system allegedly used to generate false news, a recognized AI-related risk. However, the report concerns an accusation and does not confirm that AI-generated misinformation has already caused harm. It therefore represents a plausible risk of harm (an AI Hazard) rather than a confirmed incident. The potential harm includes misinformation affecting communities and public discourse, but since no direct harm is documented, it is best classified as an AI Hazard.
It's Fake: Trump Accuses Iran of Using AI to Consistently Fool America's 'No Credibility' Liberal Press

2026-03-16
Redstate
Why's our monitor labelling this an incident or hazard?
While the article mentions AI-generated fake content used as disinformation, it does not describe any realized harm or incidents resulting from this AI use. The focus is on allegations and political commentary rather than documented AI-driven harm or disruption. Therefore, this is best classified as Complementary Information, providing context on concerns about AI's role in disinformation without reporting a specific AI Incident or Hazard.
Trump Threatens to Charge Reporters With Treason as Iran War Spirals

2026-03-16
The New Republic
Why's our monitor labelling this an incident or hazard?
The article centers on political accusations that AI is being used to create fake videos, which is a plausible AI-related misinformation risk. However, there is no direct evidence or report of actual harm caused by these AI-generated videos or confirmed misuse leading to injury, rights violations, or other harms. The event is about public discourse and threats of regulatory action rather than a documented AI Incident or a clear AI Hazard. Thus, it aligns with Complementary Information, as it provides context on societal and governance reactions to AI misinformation without describing a specific AI Incident or Hazard.
Trump Accuses Iran of Using AI to Create and Spread Fake News About the War

2026-03-16
O Povo
Why's our monitor labelling this an incident or hazard?
While the accusation involves AI being used to generate false news, the article does not confirm that such AI-generated misinformation has actually caused harm or disruption. The claim is about potential misuse of AI for misinformation, which could plausibly lead to harm such as misinformation spreading and harm to communities, but no concrete incident of harm is described. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but no harm is confirmed yet.
US Prez Trump accuses Iran of spreading lies

2026-03-17
The Hans India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to generate fake news and disinformation by Iran, which is causing harm by spreading false narratives about military conflicts. This constitutes harm to communities through misinformation and manipulation of public perception. Since the harm is occurring through the dissemination of AI-generated false information, this qualifies as an AI Incident. The AI system's use in creating and spreading disinformation has directly led to harm in the form of misinformation and societal disruption.
Trump Accuses Iran of Using AI to Create and Spread Fake News About the War

2026-03-16
Jornal Diário do Grande ABC
Why's our monitor labelling this an incident or hazard?
The article involves an AI system allegedly used to generate false news, which could plausibly lead to harm such as misinformation and social disruption. However, the article only reports accusations without confirming that AI-generated misinformation has actually caused harm. This event therefore fits the definition of an AI Hazard: it plausibly could lead to an AI Incident (harm through misinformation), but no realized harm is documented in the article.
President Trump Calls Out Iran For Making FAKE AI War Videos

2026-03-16
100 Percent Fed Up
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake war videos, which are actively disseminated and cause harm by spreading misinformation and manipulating public opinion. This constitutes harm to communities through disinformation, a recognized form of harm under the AI Incident definition. The AI system's use in creating and spreading false content directly leads to this harm. Therefore, this qualifies as an AI Incident. The mention of regulatory threats is a complementary detail but does not change the primary classification.
Trump Accuses Iran of Using AI as a 'Weapon,' Warns Reporters Must Be 'Very Careful'

2026-03-16
The Inquisitr
Why's our monitor labelling this an incident or hazard?
The article centers on the alleged use of AI by Iran to create disinformation, which could plausibly cause harm through misinformation and public deception, eroding trust in communities and the media. Since the harm is described as potential and the article mainly reports accusations and warnings without confirmed incidents of AI-caused harm, this fits the definition of an AI Hazard. AI involvement is reasonably inferred from the description of AI-generated propaganda. The article also covers societal and governance responses (FCC threats), but its primary focus is the potential harm from AI misuse rather than a response to a past incident, so the event is best classified as an AI Hazard.
Trump Accuses Iran of Using AI to Create and Spread Fake News About the War

2026-03-16
Tribuna do Sertão
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to create and spread false news, which fits the definition of an AI system involved in misinformation generation. However, the event is framed as an accusation without evidence of realized harm or confirmed AI-generated content causing harm. Thus, it represents a plausible risk of harm (AI Hazard) rather than a confirmed AI Incident. There is no indication of responses, legal actions, or updates that would classify this as Complementary Information, nor is it unrelated to AI. Hence, the classification is AI Hazard.
Trump Threatens Fake News Media Outlets With Treason Charges Over Iran War Coverage

2026-03-16
The People's Voice
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated disinformation being used as a propaganda tool, which is a plausible risk of harm to communities through misinformation. However, the article mainly reports on accusations and political statements rather than confirmed harm or incidents caused by AI systems. Since the harm is potential and the event centers on the risk and political reaction rather than a confirmed AI-driven harm, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.