Trump Accuses Iran of Using AI for Disinformation During Wartime

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

U.S. President Donald Trump accused Iran of using artificial intelligence to generate fake images and misinformation about wartime events, alleging Western media outlets spread these AI-generated materials. The claims highlight concerns about AI-driven disinformation but lack evidence of confirmed harm or incidents.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI being used as a 'disinformation weapon' by Iran, implying AI systems generate fake images and false narratives. This fits the definition of an AI system involved in spreading misinformation. The harm described is potential harm to communities through misinformation and social disruption, which aligns with harm category (d). Since the article reports accusations and concerns without confirmed incidents of harm or direct evidence of AI-generated disinformation causing actual harm, it fits the definition of an AI Hazard rather than an AI Incident. The event highlights a credible risk that AI-generated disinformation could lead to significant harm, but no direct or indirect harm has been established yet.[AI generated]
AI principles
Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Business

Harm types
Public interest, Reputational

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

Trump accuses Iran of using AI to spread disinformation

2026-03-16
The Hindu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used as a 'disinformation weapon' by Iran, implying AI systems generate fake images and false narratives. This fits the definition of an AI system involved in spreading misinformation. The harm described is potential harm to communities through misinformation and social disruption, which aligns with harm category (d). Since the article reports accusations and concerns without confirmed incidents of harm or direct evidence of AI-generated disinformation causing actual harm, it fits the definition of an AI Hazard rather than an AI Incident. The event highlights a credible risk that AI-generated disinformation could lead to significant harm, but no direct or indirect harm has been established yet.
Trump accuses Iran of using AI to spread disinformation

2026-03-16
Reuters
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of alleged use for disinformation, which could plausibly lead to harm to communities through misinformation and social disruption. However, the claims are accusations without confirmed evidence of actual AI-generated disinformation causing harm. Therefore, this event fits the definition of an AI Hazard, as it highlights a credible risk of AI being used maliciously to spread disinformation, but does not document a confirmed AI Incident with realized harm.
Trump Accuses Iran of Using AI to Spread Disinformation

2026-03-16
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of generating disinformation, which is a recognized harm to communities and public trust. However, the article focuses on accusations and claims without confirming that the AI-generated disinformation has directly led to harm or violations. Therefore, this situation represents a plausible risk of harm from AI use in disinformation campaigns but does not document realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Trump accuses Iran of using AI to spread disinformation

2026-03-16
Economic Times
Why's our monitor labelling this an incident or hazard?
The article centers on unsubstantiated claims by a political figure about AI-generated disinformation, without evidence of actual harm or incident caused by AI systems. While the use of AI for disinformation is a recognized risk, the article does not confirm any realized harm or a credible near miss. The focus is on political accusations and media dynamics rather than a concrete AI Incident or Hazard. Thus, it fits the definition of Complementary Information, providing context and societal response to AI-related concerns without describing a specific incident or hazard.
'AI can be very dangerous': Trump accuses Iran of using AI as 'disinformation weapon' amid West Asia crisis

2026-03-16
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake news and disinformation as a weapon allegedly used by Iran, which involves AI systems in the creation of misleading content. The harms described—misinformation, damage to public trust, and potential social disruption—are consistent with harms to communities and rights. However, the article does not provide evidence that these harms have already materialized or that the AI systems have directly caused an incident. Instead, it focuses on warnings and accusations about potential misuse of AI for disinformation. This fits the definition of an AI Hazard, where the development or use of AI systems could plausibly lead to harm, but no confirmed incident has occurred yet. The political and media context supports the plausibility of future harm but does not confirm it.
Trump accuses Iran of using AI to spread disinformation

2026-03-16
ThePrint
Why's our monitor labelling this an incident or hazard?
The article centers on allegations that AI is being used by Iran to create false narratives and images, which if true, would constitute an AI Incident due to harm to communities through misinformation. However, the article itself does not confirm the existence or impact of such AI-generated disinformation, presenting the claims as accusations without verified evidence of realized harm. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to harm but does not document actual harm or incident occurrence.
Trump accuses Iran of using AI to spread disinformation

2026-03-16
Khaleej Times
Why's our monitor labelling this an incident or hazard?
The article centers on allegations that AI is being used by Iran to spread disinformation, which could plausibly lead to harm such as misinformation affecting public perception and geopolitical tensions. However, there is no verified evidence presented that these AI-generated disinformation campaigns have directly caused harm or violations as defined by the framework. The event thus fits the definition of an AI Hazard, as it highlights a credible potential for AI-driven disinformation to cause harm, but no confirmed AI Incident has occurred based on the information provided.
Fire contained in vicinity of Dubai airport after drone attack, flights suspended

2026-03-16
The Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves AI systems allegedly used for disinformation, which could harm communities by spreading false narratives. However, the article mainly reports accusations and unverified claims without concrete evidence of realized harm or direct causation. Therefore, it fits the definition of an AI Hazard, as the use of AI for disinformation could plausibly lead to harm, but no confirmed incident is established in the report.
Trump accuses Iran of using AI to spread disinformation

2026-03-16
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used as a "disinformation weapon" by Iran, implying the use of AI systems to generate misleading content. This fits the definition of an AI system involved in generating outputs that influence virtual environments (public perception). The harm described is misinformation and disinformation, which can harm communities by distorting truth and public discourse. However, the article only reports accusations and claims without confirmed evidence of actual harm caused by AI-generated content. The potential for harm is credible and plausible given the nature of AI-generated disinformation, but the article does not document a realized incident of harm. Therefore, the event is best classified as an AI Hazard.
Trump Accuses Iran of Using AI to Spread Fake Wartime Propaganda

2026-03-16
Republic World
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of generating disinformation, which could plausibly lead to harm such as misinformation affecting communities or public trust. However, the claims are allegations without confirmed evidence of actual AI-generated content causing harm. There is no direct or indirect confirmation of harm occurring due to AI use, only a warning and political accusations. Therefore, this fits the definition of an AI Hazard, as it plausibly could lead to harm but no incident is confirmed.
Trump Accuses Iran of Weaponizing AI in War Propaganda | Technology

2026-03-16
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used as a 'disinformation weapon' by Iran, indicating AI system involvement in spreading false narratives. While the harm (disinformation affecting public perception and possibly international relations) is a recognized form of harm to communities, the article frames this as an accusation without confirmed evidence or documented incidents of harm. Therefore, the event represents a plausible risk of harm from AI misuse rather than a confirmed incident. The political and media tensions described are consequences of these concerns but do not themselves constitute direct AI-caused harm. Hence, this is best classified as an AI Hazard, reflecting the credible potential for AI-enabled disinformation to cause harm.
The image that infuriated Trump: They are producing it using artificial intelligence.

2026-03-16
Haberler.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images and misinformation used as propaganda, which involves an AI system's use. However, the harms described (misinformation, propaganda) are alleged and not confirmed as having caused direct or indirect harm such as injury, disruption, or rights violations. Since the article focuses on claims and warnings about AI-generated misinformation with potential to mislead and escalate tensions, but no confirmed harm has occurred, this fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it involves AI-generated content with potential harm.
Trump accuses Iran of using AI to spread disinformation

2026-03-16
MEO
Why's our monitor labelling this an incident or hazard?
The article describes allegations of AI being used as a disinformation tool by Iran, which if true could lead to harm such as misinformation affecting public perception and political stability. However, the article does not confirm that AI-generated disinformation has actually caused harm or disruption. The claims remain unverified and no direct or indirect harm is documented. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to harm but no incident has been confirmed.
Trump accuses Iran of using AI to spread disinformation

2026-03-16
Superhits 97.9 Terre Haute, IN
Why's our monitor labelling this an incident or hazard?
The article centers on accusations that AI is being used as a disinformation weapon, which implies a plausible risk of harm to communities through misinformation. However, no concrete evidence or confirmed incidents of AI-generated disinformation causing harm are presented. The event is about the potential misuse of AI for disinformation, making it an AI Hazard rather than an AI Incident. It is not complementary information because it does not provide updates or responses to a known incident, nor is it unrelated since AI and its misuse are central to the claims.
Trump Accuses Iran of Using AI to Spread Disinformation - Jordan News | Latest News from Jordan, MENA

2026-03-16
Jordan News | Latest News from Jordan, MENA
Why's our monitor labelling this an incident or hazard?
The article describes allegations of AI-generated disinformation by Iran, which if true, could lead to harm such as misinformation and manipulation of public opinion (harm to communities). However, the article does not confirm that such AI-generated disinformation has been definitively identified or that harm has materialized. The focus is on the potential use and political accusations rather than documented incidents. Therefore, this qualifies as an AI Hazard, as the use of AI for disinformation could plausibly lead to harm, but no confirmed AI Incident is described.
Donald Trump Accuses Iran of Employing AI to Spread Misleading News and Images

2026-03-16
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate misleading content such as fake images and videos, which are being disseminated to influence public perception. This constitutes harm to communities through misinformation and manipulation of public opinion, which fits the definition of an AI Incident. Although the article reports allegations rather than confirmed facts, the described use of AI-generated disinformation campaigns is consistent with realized harm as it affects societal trust and information accuracy. Therefore, this event qualifies as an AI Incident due to the direct or indirect harm caused by AI-enabled misinformation.
Trump Accuses Iran of Using AI to Spread Disinformation

2026-03-16
صحيفة الشرق الأوسط
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used by Iran to generate misleading content, which fits the definition of an AI system involved in disinformation. The harms described relate to misinformation that could disrupt communities and political processes, which aligns with harm to communities under the AI Incident definition. However, since the article mainly reports accusations without confirmed evidence of actual harm or widespread dissemination causing harm, it is more appropriate to classify this as an AI Hazard, reflecting the plausible risk of harm from AI-enabled disinformation rather than a confirmed AI Incident. There is no indication of a response or update to a prior incident, so it is not Complementary Information, and the event is clearly AI-related, so it is not Unrelated.
Trump Accuses Iran of Using AI for Disinformation

2026-03-16
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate misleading content, which is a form of AI misuse. The harms described are primarily misinformation and social confusion, which can be considered harm to communities. However, the article mainly reports accusations and general trends without confirming specific realized harms directly caused by AI-generated content. Therefore, this situation represents a credible risk of harm from AI misuse but does not document a concrete AI Incident. It fits the definition of an AI Hazard because the use of AI for disinformation could plausibly lead to significant harm, including social disruption and erosion of trust, even if such harm is not explicitly confirmed as having occurred yet.
Trump Accuses Iran of Using AI to Spread Disinformation

2026-03-16
الرأي
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of generating misinformation, which is a recognized form of harm to communities. However, the claims are presented as accusations without confirmed evidence or demonstrated impact. Therefore, the event fits the definition of an AI Hazard, as it plausibly could lead to harm through AI-enabled disinformation, but no direct or indirect harm has been established yet. The article mainly reports on the potential misuse of AI for spreading false information rather than a confirmed AI Incident.
Trump Accuses Iran of Using AI to Spread Disinformation

2026-03-16
جريدة الرؤية العمانية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used by Iran to create misleading content, such as fake images and false narratives, which fits the definition of an AI system's use for misinformation. While the harms described (misinformation, distortion of public opinion) align with harm to communities, the article does not confirm that these AI-generated falsehoods have caused actual harm or incidents. The claims are accusations without confirmed incidents of harm, making this a plausible risk of harm rather than a realized harm. Hence, it is best classified as an AI Hazard, reflecting the credible potential for AI-driven misinformation to cause harm if it materializes.
Trump Accuses Iran of Using AI to Spread Disinformation - Step News Agency

2026-03-16
وكالة ستيب نيوز
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as tools for generating misinformation, which is a recognized AI-related risk. However, the article describes accusations and concerns about AI misuse rather than confirmed incidents of harm caused by AI-generated misinformation. There is no clear evidence of realized harm or direct causation of harm from AI use in this context, only plausible potential harm through misinformation campaigns. Therefore, this situation fits the definition of an AI Hazard, as the use of AI for misinformation could plausibly lead to harm but no concrete incident is documented here.
Trump Accuses Iran of Using AI to Spread Disinformation

2026-03-16
اخبار اليمن الان
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used as a tool for disinformation by Iran, which is a credible risk of harm to communities and political stability. However, no direct or indirect harm from AI misuse is documented as having occurred yet. The event is primarily about warnings and accusations regarding potential AI misuse for spreading false information, fitting the definition of an AI Hazard rather than an Incident. There is no indication that this is a response or update to a prior incident, so it is not Complementary Information. It is clearly related to AI, so it is not Unrelated.
Trump Stirs Controversy Again with Remarks on Iran's Use of a New Weapon in the Ongoing War and Its Possible Effects - Al-Khabar Al-Jadeed

2026-03-16
الخبر الجديد
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate misleading or fake content (images and videos) by Iran, which is alleged to have influenced public opinion and media narratives. This constitutes harm to communities through misinformation and manipulation, fitting the definition of an AI Incident. The involvement of AI systems in creating and disseminating false information is direct, and the harms are realized or ongoing, not merely potential. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.
Trump Accuses Iran of Using AI to Spread Misleading News

2026-03-16
وكالة النبا
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used by Iran to generate false images and narratives that mislead the public and distort facts about military actions and political events. This constitutes an AI system's use leading to harm to communities through misinformation and disinformation. Since the harm is occurring or has occurred (false narratives spreading), this qualifies as an AI Incident rather than a hazard or complementary information. The AI system's role is pivotal in creating and disseminating the false content causing harm.
Donald Trump: Iran works closely with the fake news media - ensonhaber.com

2026-03-16
En Son Haber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used by Iran to generate false videos and news as part of a disinformation campaign. This use of AI directly leads to harm by spreading false information that misleads the public and damages trust, which fits the definition of harm to communities and violations of rights. The harm is described as ongoing and realized, not merely potential. Hence, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through misinformation.
Trump's AI Accusation Against Iran

2026-03-16
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used by Iran as a disinformation tool, which implies the involvement of AI systems in generating fake content. The nature of involvement is the alleged use of AI for misinformation (use). However, the article does not confirm that this AI-generated disinformation has directly led to harm such as social disruption or violations of rights; it is primarily a political accusation and warning. Hence, the event fits the definition of an AI Hazard, as it plausibly could lead to harm through misinformation campaigns but no concrete incident is described.
Trump'ı sinirden kö

2026-03-16
Haberler
Why's our monitor labelling this an incident or hazard?
The article centers on allegations of AI-generated fake videos and misinformation used as propaganda in a war context. While AI systems are reasonably inferred to be involved in generating these fake images and videos, the article does not describe any actual harm resulting from these AI outputs. The harms described (propaganda, misinformation) are potential and plausible but not confirmed as having caused injury, disruption, or rights violations yet. Hence, the event fits the definition of an AI Hazard, as the AI-generated misinformation could plausibly lead to harm, but no incident has occurred yet.
Trump's AI Accusation Against Iran - Son Dakika

2026-03-16
Son Dakika
Why's our monitor labelling this an incident or hazard?
The article centers on Trump's claims that Iran uses AI to produce deceptive media content for disinformation purposes. While AI involvement is reasonably inferred (AI-generated fake images/videos), the article does not confirm actual harm or incidents resulting from this use. The focus is on the potential misuse of AI for disinformation campaigns, which could plausibly lead to harm such as misinformation and social disruption, but no direct or indirect harm is documented as having occurred. Thus, the event fits the definition of an AI Hazard, reflecting a credible risk of AI-enabled disinformation causing harm in the future.
The image that enraged Trump: They are producing it using artificial intelligence - Son Dakika

2026-03-16
Son Dakika
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to create fake videos and misinformation that are spreading on social media and some media outlets. This use of AI-generated content is directly linked to misinformation and propaganda in a war context, which can harm communities by spreading false information and undermining trust. The harm is realized as the content is already circulating and influencing public perception. Therefore, this event meets the criteria for an AI Incident because the AI system's use has directly led to harm to communities through misinformation and propaganda.
Trump's AI Accusation Against Iran

2026-03-16
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake videos used in a disinformation campaign, which involves an AI system's use. While the harm (misinformation and its effects) is plausible and consistent with known AI risks, the article does not confirm that this has directly caused harm yet. The focus is on the potential for harm through media manipulation and propaganda. Hence, this event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.
In Washington, Trump claimed that Iran is spreading disinformation using artificial intelligence.

2026-03-16
Mersin Haber
Why's our monitor labelling this an incident or hazard?
The article centers on allegations of AI-enabled disinformation by Iran, which if true, could cause harm to communities through misinformation. However, the article does not provide evidence that such AI-driven disinformation has directly or indirectly caused harm yet. The claims remain unverified and are presented as accusations by a political figure. This fits the definition of an AI Hazard, as the use of AI for disinformation could plausibly lead to harm, but no confirmed incident is described. There is no indication of a response, remediation, or broader governance context that would classify this as Complementary Information, nor is it unrelated to AI. Hence, the classification is AI Hazard.
Trump's AI Accusation Against Iran

2026-03-16
Yeniçağ Gazetesi
Why's our monitor labelling this an incident or hazard?
The article centers on allegations that AI is being used by Iran to produce deceptive media content as part of disinformation campaigns. While AI involvement is reasonably inferred (AI-generated fake videos and misinformation), the article does not report any concrete harm or incident resulting from this use. The harms described are potential and speculative, focusing on the risk of misinformation and media manipulation. Hence, this fits the definition of an AI Hazard, as the use of AI could plausibly lead to harm through disinformation, but no direct or indirect harm has been established in the article.
U.S. President Trump: "Iran Is Producing News with Artificial Intelligence"

2026-03-17
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The article describes allegations that AI is used to create fake war footage, which if true, could cause harm to communities through misinformation and social disruption. However, the article does not confirm that such AI-generated content has actually caused harm or been widely disseminated. The focus is on the claim and the political response, not on a verified incident. Thus, this is best classified as an AI Hazard, reflecting a plausible risk of harm from AI-generated misinformation, rather than an AI Incident or Complementary Information.
Trump Accuses Iran of Using AI to Spread Disinformation

2026-03-16
IDN Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated disinformation as a tool allegedly used by Iran, which fits the definition of an AI system's use in spreading false information. However, the harm described (disinformation) is alleged and not confirmed as having directly caused harm yet. The verified attack causing death is unrelated to AI. Since the AI-related disinformation could plausibly lead to harm (social and informational harm), but no direct or indirect harm from AI is confirmed, this event is best classified as an AI Hazard.
Trump Accuses Iran of Using AI for War Hoaxes, Cites Three Examples

2026-03-16
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article focuses on accusations of AI misuse for disinformation by Iran, but does not provide evidence that AI-generated false content has been definitively used or caused harm. The claims remain unproven and are presented as allegations by Trump. This fits the definition of an AI Hazard, as it plausibly could lead to harm if such AI-generated disinformation campaigns were real and effective, but no confirmed AI Incident is described. The presence of AI is reasonably inferred from the mention of AI-generated images and videos used for disinformation. Since harm is not confirmed but the risk is credible, the classification is AI Hazard.
Trump Talks Up Iran's Super-Advanced Weapons; Here Are the Real Facts

2026-03-16
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used by Iran as a disinformation tool, which is an AI system involvement in the use phase. However, the claims are unproven allegations by Trump without independent verification. The confirmed harms (infrastructure attacks, airport disruptions) are not directly linked to AI use but to physical attacks. The AI-related disinformation could plausibly lead to harm such as misinformation-induced social or political disruption, but the article does not confirm that such harm has occurred. Thus, it fits the definition of an AI Hazard rather than an AI Incident. The article also includes political and media context but does not focus on responses or governance developments, so it is not Complementary Information. It is not unrelated because AI involvement is central to the claims discussed.
Trump Fumes over Iran, Threatens to Sue Media and Calls Them Unpatriotic

2026-03-16
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated manipulated media used as propaganda, which is an AI system's use. The harm described is potential misinformation and social/political harm that could arise from such AI-generated content. Since no direct harm or incident is confirmed, but the use of AI-generated content for propaganda could plausibly lead to significant harm, this fits the definition of an AI Hazard rather than an AI Incident. The political accusations and media criticism are reactions to this potential harm, not evidence of realized harm caused by the AI system itself.
Trump Accuses Iran of Using AI to Spread Disinformation

2026-03-16
kontan.co.id
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used by Iran to create false images and narratives, which fits the definition of an AI system generating content that could influence virtual environments (public opinion). While the harm (disinformation) is implied and plausible, there is no clear evidence presented that this disinformation has directly caused harm or disruption yet. Therefore, this situation represents a credible risk of harm due to AI use, qualifying it as an AI Hazard rather than an AI Incident. The article does not focus on responses, mitigation, or broader ecosystem context, so it is not Complementary Information. It is not unrelated because AI involvement is central to the claims.
Trump Accuses Iran of Using Artificial Intelligence to Spread Disinformation

2026-03-16
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The article centers on allegations by a political figure that AI is being used by Iran to create disinformation, which if true, could harm communities by spreading false information. However, the article does not confirm that AI-generated disinformation has been definitively identified or caused harm. The AI involvement is alleged and the harm is potential rather than realized. Hence, this fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to communities through disinformation, but no direct or indirect harm is confirmed yet.

Donald Trump Takes Aim at Media Outlets, Accusing Them of AI Fakery

2026-03-16
Vesti.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake content being used by media, which involves AI systems. However, it does not document actual harm occurring from these AI-generated fakes, only accusations and concerns about their use. The potential for AI-generated misinformation to cause harm to communities and public discourse is credible, making this an AI Hazard. There is no indication of a concrete incident or direct harm having occurred, nor is the article primarily about responses or governance measures, so it is not Complementary Information. Hence, the classification is AI Hazard.

Donald Trump Takes Aim at Media Outlets over Their Coverage of the War in Iran

2026-03-16
Българска Телеграфна Агенция
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly as tools used to create fake images and videos disseminated by media, leading to misinformation about a war, which harms communities by spreading false narratives. The harm is realized and ongoing, not just potential. Therefore, it meets the criteria for an AI Incident due to the direct role of AI-generated content in causing harm through disinformation.

Trump Takes Aim at Media Outlets over Their Coverage of the War in Iran

2026-03-16
Blitz.bg
Why's our monitor labelling this an incident or hazard?
The article centers on allegations of AI-generated disinformation and the political and regulatory reactions to it, which fits the definition of Complementary Information. It does not report a concrete AI Incident causing direct or indirect harm, nor does it describe a specific AI Hazard event with plausible future harm. The AI involvement is in the context of misinformation spread, but the article's main focus is on the societal and governance response, including license reviews and public accusations. Therefore, the event is best classified as Complementary Information.

Trump: Iran Only Wins Wars It Fabricates with Artificial Intelligence

2026-03-16
بالاترین
Why's our monitor labelling this an incident or hazard?
The article involves AI in the context of misinformation and media manipulation allegedly conducted by Iran using AI. However, it does not describe a specific AI system causing direct or indirect harm, nor does it report an actual incident where AI use led to harm. The statements are claims and warnings about AI's role in misinformation, which could plausibly lead to harm but no harm is confirmed or detailed. Hence, this fits the definition of an AI Hazard, as it highlights a credible risk of AI-enabled misinformation causing harm in the future, but no incident has occurred yet.

Trump on Truth Social: The Islamic Republic Only Wins the War in AI-Generated Images

2026-03-16
IranWire | خانه
Why's our monitor labelling this an incident or hazard?
The article centers on allegations that AI is used by the Iranian government to produce fake war images and news, which constitutes misinformation. This fits the definition of an AI system being used to generate content that could harm communities by spreading false information. However, since the article reports these as claims without verified evidence of actual harm or direct consequences, it does not document a confirmed AI Incident. Instead, it points to a plausible risk of harm from AI-generated misinformation, aligning more with an AI Hazard. The focus is on the potential and ongoing misuse rather than a documented incident with confirmed harm.

Trump: Reports of Damage to U.S. Refueling Aircraft Are False; the Islamic Republic's Victories Exist Only in the World of Artificial Intelligence

2026-03-16
صدای آمریکا فارسی
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Islamic Republic of Iran to create and spread disinformation, which fits the definition of an AI system being used in a harmful way. However, the report focuses on accusations and claims about the use of AI for propaganda and misinformation without detailing any specific incident in which this directly or indirectly caused harm. The potential for harm through misinformation campaigns is credible and plausible, especially given the context of military and political conflict, but the article does not document actual harm or incidents. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Trump's Strange Claim: Images of Iran's Attacks on Us Are AI-Generated!

2026-03-18
noandish.com
Why's our monitor labelling this an incident or hazard?
The article centers on allegations of AI-generated misinformation being used by Iran to fabricate images and narratives about military attacks. While AI systems are implicated in the creation of false content, the article does not document actual realized harm resulting from these AI-generated materials. The focus is on the potential for AI to be weaponized for disinformation, which constitutes a plausible risk rather than a confirmed incident. This event therefore fits the definition of an AI Hazard: circumstances in which AI use could plausibly lead to harm (e.g., misinformation causing social or political disruption), but no direct or indirect harm has been established in the article.