AI-Generated Deepfakes Fuel Disinformation in Iran-Israel Conflict


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

During the Iran-Israel conflict, AI-generated deepfakes, chatbot-produced fake news, and manipulated video game footage were widely spread online, falsely depicting war events and misleading the public on a large scale. Fact-checkers confirmed these materials were fabricated, highlighting the harm caused by AI-driven disinformation campaigns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly mentioned as generating deepfake videos and images used in misinformation campaigns. The use of these AI systems has directly led to harm to communities by spreading false narratives and fabricated content about the conflict, which can influence public opinion and social stability. The harm is realized, not just potential, as the misinformation has been widely spread and verified as false by fact-checkers. Hence, it meets the criteria for an AI Incident under the framework, specifically harm to communities through misinformation.[AI generated]
AI principles
Accountability · Transparency & explainability · Safety · Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public · Government

Harm types
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


After the Planes and Missiles: The Disinformation War Between Iran and Israel

2025-06-21
24.ae
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as generating deepfake videos and images used in misinformation campaigns. The use of these AI systems has directly led to harm to communities by spreading false narratives and fabricated content about the conflict, which can influence public opinion and social stability. The harm is realized, not just potential, as the misinformation has been widely spread and verified as false by fact-checkers. Hence, it meets the criteria for an AI Incident under the framework, specifically harm to communities through misinformation.

"Media Disinformation" at the Heart of the War Between Iran and Israel

2025-06-22
Al-Sharq newspaper
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos and AI chatbots producing false content that is widely disseminated, causing misinformation in a conflict zone. This misinformation harms communities by distorting reality and potentially escalating tensions. The AI systems' outputs directly lead to this harm, meeting the definition of an AI Incident. The event is not merely a potential risk or a complementary update but a clear case of AI-generated misinformation causing real-world harm.

The Iran-Israel War Feeds a Torrent of Misinformation

2025-06-21
SWI swissinfo.ch
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI models and deepfake technologies) to produce and spread false information and fabricated videos about a real conflict. This misinformation has already been disseminated widely, causing harm to communities by spreading false narratives and potentially escalating tensions. The AI system's use in generating and distributing these false contents directly leads to harm as defined by the framework (harm to communities). Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

AI-Generated Posts: How Is the Iran-Israel War Feeding the Torrent of Misinformation?

2025-06-21
Alwasat News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as generating deepfake videos and false content used in misinformation campaigns related to a geopolitical conflict. The harm is realized as the misinformation is actively spreading and influencing public understanding, which qualifies as harm to communities. Therefore, this is an AI Incident because the AI systems' use has directly led to significant harm through misinformation and disinformation in a conflict setting.

Media Disinformation: A Weapon Without Ammunition Fueling the Iran-Israel War

2025-06-21
Al-Arab newspaper
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for deepfake videos and fake news generation) that have directly led to harm by spreading false information about a conflict, misleading populations, and undermining truthful public discourse. This constitutes harm to communities and a violation of rights to accurate information, fitting the definition of an AI Incident. The article documents actual ongoing harm rather than just potential risk or complementary information about AI.

The Iran-Israel Confrontation Ignites a Wave of Fake News

2025-06-21
annahar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (generative AI, deepfake generators, AI chatbots) to create and spread false videos and misinformation about the Iran-Israel conflict. This misinformation is actively disseminated and has caused harm by misleading populations, which constitutes harm to communities. The AI systems' outputs are directly linked to the spread of false information, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the misinformation is widespread and verified as false by fact-checkers. Hence, this is an AI Incident rather than a hazard or complementary information.

The Disinformation War in the Iran-Israel Conflict

2025-06-21
Al Khaleej newspaper
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfakes and AI chatbots to produce and spread false information that has been widely circulated, causing harm to communities by misleading them about real-world events. The harm is realized and ongoing, not just potential. The AI systems' development and use have directly contributed to this misinformation campaign, fulfilling the criteria for an AI Incident under the harm to communities category.

The Iran-Israel Conflict Ignites a Battle of Media Disinformation

2025-06-21
MEO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies such as deepfake generation and AI chatbots to create and spread false content about the conflict, which is actively misleading people. This constitutes harm to communities through misinformation and disinformation, fulfilling the criteria for an AI Incident. The AI systems' use directly leads to the harm by producing and disseminating fabricated content that affects public understanding and social stability. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Iran-Israel fighting distorted by tech-fuelled misinformation

2025-06-21
The Hindu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfakes, generative AI video generators, and chatbot-generated falsehoods being used to spread misinformation about a real conflict. This misinformation is actively causing harm by misleading populations, fueling divisive narratives, and eroding trust in digital content, which is a form of harm to communities. The AI systems' use in generating and spreading false content is a direct factor in this harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Tech-Fueled Misinformation Distorts Iran-Israel Fighting

2025-06-21
Asharq Al-Awsat English
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes, generative AI videos, and chatbot-generated falsehoods being used to spread misinformation about a real conflict. This misinformation is actively causing harm by misleading populations, fueling divisive narratives, and eroding trust in information, which is a form of harm to communities. The AI systems' use in generating and spreading this false content is central to the incident. Hence, the event meets the criteria for an AI Incident due to realized harm caused by AI system use.

Tech-fuelled misinformation distorts Iran-Israel fighting

2025-06-21
GEO TV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfakes, generative AI video content, and chatbot-generated falsehoods being used to spread misinformation about the Iran-Israel conflict. This misinformation is actively shared and believed, causing harm to communities by distorting public understanding and fueling conflict narratives. The AI systems' use in creating and amplifying false content directly leads to these harms. The event meets the criteria for an AI Incident because the AI system's use has directly led to significant harm to communities through misinformation and disinformation.

Tech-driven misinformation skews events of Iran-Israel conflict

2025-06-21
The News International
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake videos and false content that are actively spreading misinformation about a conflict, which is causing harm to communities by misleading the public and fueling disinformation. The AI-generated content is directly responsible for the harm described, meeting the criteria for an AI Incident. The article details realized harm rather than potential harm, and the AI's role is pivotal in the misinformation campaign.

Tech-fueled misinformation distorts Iran-Israel fighting

2025-06-21
The Peninsula
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfakes, generative AI video content, and chatbot-generated falsehoods being used to spread misinformation about a real conflict. This misinformation is actively causing harm by misleading the public, fueling divisive narratives, and undermining trust in information, which qualifies as harm to communities. The AI systems' outputs are central to the harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by AI-generated content.

Tech-fueled misinformation distorts Iran-Israel fighting

2025-06-21
RTL Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfakes, generative AI video generators, and chatbot-generated falsehoods being used to produce and spread misinformation about a real-world conflict. This misinformation is actively causing harm by misleading populations, fueling divisive narratives, and eroding trust in digital content, which fits the definition of harm to communities. The AI systems' use in generating and amplifying false content directly leads to these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is ongoing and realized.

Tech-fueled misinformation distorts Iran-Israel fighting

2025-06-21
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos and images being used to spread false information about the Iran-Israel conflict, with examples of deepfakes and fabricated content widely shared on social media platforms. This misinformation has caused harm by misleading the public, sowing confusion, and potentially escalating tensions, which fits the definition of harm to communities. The AI systems' use in generating and distributing this content is direct and pivotal to the harm described. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Amid the Smoke of the Israel-Iran War, Fake News Floods In

2025-06-21
Lianhe Zaobao
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake videos and images that are being actively spread on social media, causing misinformation and manipulation of public perception. This directly leads to harm to communities by spreading false narratives and destabilizing social trust during a conflict. Therefore, it meets the criteria of an AI Incident due to realized harm caused by AI-generated misinformation in a conflict context.

Fact Check: Fake News in the Israel-Iran Conflict

2025-06-19
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos being used to spread false information about a conflict, which misleads the public and harms communities by spreading misinformation. This constitutes harm to communities, fulfilling the criteria for an AI Incident. Additionally, the AI chatbot's incorrect fact-checking contributes to misinformation, showing AI system malfunction or misuse. The presence and use of AI systems are clear, and the harm is realized, not just potential. Hence, this is an AI Incident rather than a hazard or complementary information.

Fact Check: Did Israel See Marches Apologizing to Iran?

2025-06-23
Ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the videos circulating about an apology march in Israel were created using the AI tool Veo and are not real. The AI system's involvement is in the generation of false video content. While this could plausibly lead to harm by spreading misinformation and misleading the public, the article does not report any actual harm or incidents resulting from these videos. The main focus is on clarifying the misinformation and providing context, which aligns with Complementary Information rather than an Incident or Hazard. The AI system's role is pivotal in the misinformation, but since no harm has been realized or directly linked, it does not qualify as an AI Incident. It is also not purely unrelated, as AI-generated content is central to the discussion.

Did Iran Raze Israel's Airport and Tel Aviv Buildings? These Videos Are AI-Generated or Show a Dubai Fire

2025-06-22
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the creation of misleading videos that falsely depict destruction of critical infrastructure (Israel's airport and buildings). However, the AI-generated content itself did not cause actual physical harm or destruction; rather, it is misinformation. The real missile attacks causing damage are separate from the AI-generated videos. Since the AI-generated videos have not directly or indirectly caused injury, property damage, or rights violations, but represent a potential for misinformation harm, this qualifies as Complementary Information. The article primarily provides fact-checking and context about AI-generated misinformation related to an ongoing conflict, rather than reporting a new AI Incident or AI Hazard.

Fact Check | Did Iran Raze Israel's Airport and Tel Aviv Buildings? These Videos Are AI-Generated or Show a Dubai Fire

2025-06-22
m.163.com
Why's our monitor labelling this an incident or hazard?
The article discusses AI-generated videos falsely depicting damage to Israeli infrastructure, clarifying misinformation. It involves AI systems in the creation of misleading content but does not describe any harm caused by AI systems themselves. The actual missile attacks causing harm are conventional military actions, not AI-driven incidents. Hence, the AI involvement is in misinformation generation, and the article's main focus is on fact-checking and clarifying the role of AI in spreading false videos. This fits the definition of Complementary Information, as it provides supporting context and updates about AI-generated misinformation without reporting a new AI Incident or AI Hazard.

Iran-Israel: AI Is Fueling Tensions! Fake Videos and Game Footage Are Being Used for These Purposes

2025-06-22
Navbharat Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fake videos and misinformation that have been widely disseminated, causing harm to communities by misleading public opinion and escalating conflict tensions. The article details realized harm through the spread of false narratives and the resulting social disruption. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights through misinformation. The presence of AI-generated content and its harmful impact is clear and ongoing, not merely a potential risk or complementary information.

Article: How Trustworthy Is Artificial Intelligence Before the Law?

2025-06-25
Navbharat Times
Why's our monitor labelling this an incident or hazard?
The article does not report a concrete AI Incident or AI Hazard but rather discusses the broader legal and forensic ecosystem's response to AI challenges, including new laws and forensic practices. It focuses on the evolving understanding and governance of AI-generated evidence, which fits the definition of Complementary Information as it enhances understanding and tracks societal and governance responses to AI without describing a specific harm or plausible harm event.

Can You Really Trust What You See? How Fake AI Videos Are Taking Over Social Media

2025-06-22
hindi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake videos that are widely shared and believed, causing harm to communities by spreading misinformation and undermining trust in authentic information sources. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (harm category d). The article highlights real, ongoing harm rather than just potential risk, distinguishing it from an AI Hazard. It is not merely complementary information since the main focus is on the harm caused by AI-generated fake videos, not on responses or updates to prior incidents.

Deezer's New Initiative: AI-Generated Music Will Now Be Flagged to Curb Streaming Fraud

2025-06-24
Hari Bhoomi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems both in the generation of music and in the detection of AI-generated content. The fraudulent streaming by bots to claim royalties constitutes a violation of intellectual property rights and financial harm to artists and the music industry, which is a clear harm under the AI Incident definition. Deezer's initiative to flag AI-generated music is a response to this ongoing harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations and harm, and the article describes ongoing harm and mitigation efforts.

Fact Check: This Video of Israelis Appealing to Stop the War with Iran Is AI-Generated

2025-06-24
Vishvas News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to generate a deepfake video that falsely depicts a politically sensitive scenario. The AI-generated video is being spread on social media, which can mislead communities and disrupt social trust, constituting harm to communities. Since the AI system's use has directly led to misinformation and potential social harm, this qualifies as an AI Incident under the framework.

Clip of 'burning Mossad building' made with AI

2025-06-24
Yahoo
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated video falsely showing the destruction of a building, which is misinformation. While the AI system is used to create deceptive content, there is no indication that this has directly or indirectly caused harm such as injury, disruption, rights violations, or property damage. The misinformation could potentially lead to harm, but the article focuses on the identification and debunking of the AI-generated falsehood rather than any realized harm. Therefore, this is best classified as Complementary Information, providing context on AI-generated misinformation and its detection, rather than an AI Incident or Hazard.

Clip of 'burning Mossad building' made with AI

2025-06-24
Fact Check
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated video clip falsely showing a major attack, which is misinformation created using generative AI. While the AI system's use here is central to the event, the harm is potential or indirect, related to misinformation and its societal impact. Since the article does not report actual harm caused by this misinformation (e.g., violence, injury, or rights violations), nor does it describe a credible imminent risk of harm, it does not meet the threshold for an AI Incident or AI Hazard. Instead, it provides context on the misuse of AI-generated content and the challenges of misinformation, which aligns with Complementary Information about AI's societal impact and governance challenges.

US Report: Israel and Iran Enter a New Era of Psychological Warfare and Digital Disinformation

2025-07-15
Dostor
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for generating misleading content and conducting psychological operations that have directly led to harm in the form of misinformation and manipulation of public opinion during a conflict, which affects communities and potentially violates rights to accurate information. The AI's role is pivotal in enabling the scale and sophistication of the disinformation campaigns. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-enabled misinformation and psychological warfare.

NYT: "Israel" and Iran Usher In a New Era of Psychological Warfare

2025-07-15
Arabi21
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for generating and spreading disinformation and psychological operations during an ongoing conflict, which has directly led to harm in the form of misinformation, manipulation of public perception, and potential destabilization of societies. The article explicitly mentions AI-generated videos and messages used as part of the campaigns, indicating AI system involvement in the use phase. The harms include violations of rights to truthful information and harm to communities through psychological manipulation and social disruption. Therefore, this qualifies as an AI Incident.

A War Without Bullets: The Israel-Iran Battle in the Digital Space

2025-07-15
Sky News Arabia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to generate fake videos and synthetic voices as part of a coordinated disinformation campaign during a conflict. The AI systems' outputs were used to deceive millions, including media outlets, and to manipulate public opinion, which constitutes harm to communities and a violation of rights to truthful information. The harm is realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident, as the AI system's use directly led to significant societal harm through misinformation and psychological warfare.

Between Iran and Israel: A New Era of Psychological Warfare

2025-07-15
Al-Ain News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate and spread false narratives and videos as part of a coordinated disinformation campaign during an armed conflict. The AI-generated content has directly influenced public perception and trust, causing harm to communities by spreading deception and psychological manipulation. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (harm to communities through misinformation and psychological warfare). The article does not merely warn of potential harm but documents actual AI-enabled disinformation causing real-world effects.

Israel and Iran Usher In a New Era of Psychological Warfare

2025-07-16
Al Joumhouria
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in generating fake videos, audio messages, and social media posts as part of psychological warfare between Israel and Iran. These AI systems were used to create and disseminate misinformation that influenced public perception during an active conflict that resulted in casualties and regional disruption. This constitutes an AI Incident because the AI systems' use directly contributed to harm to communities and violations of rights through misinformation and manipulation in a conflict context. The harm is realized, not just potential, and the AI role is pivotal in the scale and sophistication of the disinformation campaigns.

AI Enters the Battlefield: Israel and Iran Ignite a "Disinformation War"

2025-07-15
Shafaq News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate and spread disinformation and propaganda on social media platforms, which has directly caused harm to communities by spreading false narratives and undermining trust. The AI's role is pivotal in creating realistic fake content and automating the spread of misinformation, which constitutes a violation of rights and harm to communities. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm through disinformation in an active conflict context.

A War Without Bullets: The Israel-Iran Battle in the Digital Space

2025-07-15
Akhbarak (Egyptian news site)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fake videos, AI-generated voices, and fabricated content as part of a coordinated psychological warfare campaign between two nations. The AI systems' outputs have directly led to harm by misleading millions, causing confusion, and manipulating public opinion during an ongoing conflict. This meets the criteria for an AI Incident because the AI's use has directly caused harm to communities and social stability. The article details realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the active use of AI-generated disinformation causing harm, not on responses or updates. Therefore, the classification is AI Incident.