AI-Generated Satellite Images Used for Misinformation in Iran Conflict


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Generative AI was used to create fake satellite images depicting destroyed U.S. military bases in Qatar, which were widely circulated by Iranian media as real. These AI-generated images misled the public and stakeholders during the U.S.-Israeli conflict with Iran, raising concerns about the security and societal impact of AI-driven misinformation in warfare.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of generative AI systems to create fake satellite images that have been disseminated as real, misleading the public and stakeholders about military events. This misinformation can have serious consequences, including influencing public opinion, escalating tensions, and undermining trust in factual information, which constitutes harm to communities. The AI system's role is pivotal in fabricating these images, and the harm is realized as the images have been widely viewed and believed. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Public interest; Reputational

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard

AI-generated satellite images: a disinformation tool in the Iran war

2026-03-09
France 24
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create fake satellite images that have been disseminated as real, misleading the public and stakeholders about military events. This misinformation can have serious consequences, including influencing public opinion, escalating tensions, and undermining trust in factual information, which constitutes harm to communities. The AI system's role is pivotal in fabricating these images, and the harm is realized as the images have been widely viewed and believed. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
AI-generated satellite images: a disinformation tool in the Iran war

2026-03-09
annahar.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI generative models to produce fake satellite images that have been deliberately spread to mislead and manipulate public perception during an active conflict. The AI system's outputs have directly contributed to misinformation campaigns that have real-world security and societal implications, fulfilling the criteria for harm to communities. The article provides concrete examples of such images being widely viewed and believed, indicating realized harm rather than just potential risk. Hence, this is an AI Incident rather than a hazard or complementary information.
Satellite images as a disinformation tool in wars

2026-03-10
Al-Madina Newspaper
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating or modifying satellite images to create false information that is disseminated widely, leading to misinformation that influences public opinion and decision-making. This misinformation constitutes harm to communities and societal stability, fitting the definition of an AI Incident. The article reports actual occurrences of such AI-generated misinformation being used in recent conflicts, confirming realized harm rather than just potential risk. Hence, the classification as AI Incident is appropriate.
AI-generated satellite images: a disinformation tool in the Iran war

2026-03-10
Arab 48
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI) to create fake satellite images that have been deliberately spread to mislead and manipulate public perception during a conflict. This misinformation has already caused harm by deceiving millions of viewers and undermining trust in real information, which qualifies as harm to communities. Therefore, this is an AI Incident because the AI system's use has directly led to significant harm through misinformation in a conflict setting.
AI-generated images: a disinformation tool in the Iran war

2026-03-09
Alwasat News
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create and disseminate fake satellite images that have been accepted as real by many, causing misinformation and potential harm to communities and security. This fits the definition of an AI Incident because the AI system's use has directly led to harm in the form of misinformation impacting public understanding and possibly influencing conflict dynamics. The article details realized harm rather than just potential risk, so it is not merely a hazard or complementary information.
Have AI-generated satellite images become a disinformation tool in the Iran war?

2026-03-11
TelQuel Arabi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for satellite image fabrication) to create and spread false images that mislead the public and stakeholders about military events. This misinformation has already occurred and has caused harm by distorting reality and potentially influencing opinions and decisions in a conflict setting, which qualifies as harm to communities. Therefore, this is an AI Incident because the AI system's use has directly led to significant harm through misinformation in a geopolitical conflict.
Artificial intelligence... a disinformation tool in the Iran war

2026-03-09
Independent Arabia
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to produce fake satellite images that are deliberately spread to mislead and manipulate perceptions in a war context. This use of AI has directly caused harm by enabling misinformation campaigns that affect public understanding and could influence conflict dynamics, which constitutes harm to communities. The article provides concrete examples of AI-generated images being widely disseminated and believed, demonstrating realized harm rather than just potential risk. Hence, this qualifies as an AI Incident under the framework because the AI system's use has directly led to significant harm through misinformation in a conflict setting.
AI-doctored satellite images fuel disinformation about the US-Iran war

2026-03-09
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake satellite images used to mislead the public and propagate disinformation during an ongoing war. The AI system's outputs have directly caused harm by misleading people and influencing opinions and decisions, which aligns with harm to communities and security implications. Therefore, this is an AI Incident rather than a hazard or complementary information, as the harm is realized and ongoing.

Fake AI satellite imagery spurs US-Iran war disinformation

2026-03-09
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake satellite imagery used by a news outlet to spread disinformation about a US base, which is a direct use of AI systems to create misleading content. The harm is realized as the disinformation can influence public perception and geopolitical stability, which qualifies as harm to communities and security. Therefore, this event meets the criteria for an AI Incident due to the direct role of AI in causing harm through disinformation.

Fake AI satellite imagery spurs US-Iran war disinformation

2026-03-09
Head Topics
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake satellite images used to mislead the public and influence perceptions during an active conflict. The AI system's outputs have directly caused harm by spreading false information that can affect public opinion, security decisions, and market behavior. This meets the definition of an AI Incident as the AI system's use has directly led to harm to communities through disinformation during wartime.
Destroyed US bases, decoys targeted: how AI-doctored satellite images fuel disinformation about the war in the Middle East

2026-03-09
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating manipulated satellite images used to spread false information about military actions, which is explicitly stated. The use of AI-generated content has directly led to harm by misleading the public and potentially influencing opinions and decisions related to a major conflict, which qualifies as harm to communities. The article details actual dissemination and impact, not just potential risk, so it is an AI Incident rather than a hazard or complementary information. The harm is indirect but real, as the AI-generated images fuel disinformation with significant societal consequences.

AI-Generated Fake Satellite Images Fuel Misinformation In US-Iran War

2026-03-09
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake satellite images that are used to spread misinformation during wartime, which constitutes harm to communities through disinformation. The AI-generated images have been widely disseminated and have real-world impacts, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the misinformation is actively influencing public perception and possibly security decisions. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing harm through misinformation.

AI fakes flood Israel-Iran war feeds: Fake satellite images, videos spread amid West Asia conflict

2026-03-09
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI) being used to create fabricated satellite images and synthetic videos that are widely disseminated and believed, causing misinformation and harm to communities and political processes. The harm is realized, not just potential, as the false content has influenced public perception and narratives during an ongoing conflict. This fits the definition of an AI Incident because the AI system's use directly leads to harm to communities through disinformation. The involvement is in the use of AI systems to generate false content, which is spreading and causing harm. Hence, the classification is AI Incident.
AI-doctored satellite images fuel disinformation about the US-Iran war

2026-03-09
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake satellite images used to mislead the public about military events, which have been widely disseminated and have real-world implications for security and public opinion. The AI system's use in creating these images directly contributes to the harm of disinformation and manipulation, fitting the definition of an AI Incident. The harm is realized (not just potential), as the images have accumulated millions of views and influence narratives about the conflict.

Fake News! Iran's "Destroyed" U.S. $1.1B Radar in Qatar Image Was AI-Manipulated From Old Bahrain Photo

2026-03-09
Latest Asian, Middle-East, EurAsian, Indian News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating manipulated satellite images that are used in a disinformation campaign. This use of AI directly leads to harm by spreading false information that can influence public opinion, affect financial markets, and exacerbate conflict situations, which constitutes harm to communities. The article provides concrete examples of AI-generated fake images being widely disseminated and the real-world implications of such misinformation. Hence, it meets the criteria for an AI Incident due to the realized harm caused by AI-generated disinformation.
War in the Middle East: A devastated American base? It was an AI fake

2026-03-09
Le Matin
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI generative technology used to create fake satellite images that were widely spread and believed, causing misinformation during an active conflict. This misinformation can influence public opinion and decision-making, which is a form of harm to communities and security. The AI system's use in generating these images is central to the event and the harm caused. Hence, this qualifies as an AI Incident due to realized harm from AI-generated disinformation affecting societal and security aspects.

Fake AI satellite imagery spurs US-Iran war disinformation

2026-03-09
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake satellite images being disseminated widely, causing misinformation during a war. This misinformation can influence public opinion and decision-making, which constitutes harm to communities and potentially broader security harms. The AI system's use in fabricating these images is central to the incident, fulfilling the criteria for an AI Incident due to realized harm from AI-enabled disinformation.

Fake AI Satellite Imagery Spurs US-Iran War Disinformation

2026-03-09
Channels Television
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI being used to fabricate satellite images that are disseminated widely and believed by many, constituting misinformation and disinformation. This manipulation directly harms communities by distorting information during wartime, which is a form of harm to communities and potentially a violation of rights to accurate information. The AI system's use in generating these images is central to the harm described, fulfilling the criteria for an AI Incident due to realized harm from AI-generated disinformation in a conflict context.
US-Israel-Iran war: fake AI-generated satellite images fuel disinformation

2026-03-09
H24info
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake satellite images being used to spread false information about military events, which has already caused misinformation to millions of people. This misinformation can influence public opinion and decisions related to war, constituting harm to communities and security. The AI system's use is central to the creation and dissemination of these false images, fulfilling the criteria for an AI Incident. The harm is not hypothetical but ongoing, as the images have been widely viewed and shared, causing real-world impacts.
AI-doctored satellite images fuel disinformation about the US-Iran war

2026-03-09
Orange Actualités
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated manipulated satellite images used to spread false information about military events, which have been widely viewed and shared, thus causing harm to communities by spreading disinformation during a war. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and potential security risks. The harm is realized, not just potential, as the images have already influenced public opinion and information environments. Therefore, this event qualifies as an AI Incident.
Fake AI satellite imagery flourishes in the fog of US-Iran war

2026-03-09
AW
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake satellite images being used as disinformation during wartime, which has already caused harm by misleading the public and influencing opinions and markets. The AI system's use in creating these manipulated images is central to the harm described. The harm is realized, not just potential, as the fake images have been widely disseminated and believed. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and disinformation in a conflict context.

Fake AI satellite imagery spurs US-Iran war disinformation

2026-03-09
SpaceWar
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create manipulated satellite images that are actively spreading disinformation, which is causing harm to communities by misleading the public and potentially influencing geopolitical stability and security. The harm is realized as the fake images have already been widely disseminated and have influenced public opinion and possibly other domains such as financial markets. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and disinformation in a conflict context.

Fake AI satellite imagery spurs US-Iran war disinformation

2026-03-09
RTL Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake satellite images being spread by state-aligned media and social media users, which have garnered millions of views and influenced public perception during a war. The AI system's use in creating these manipulated images has directly led to misinformation and disinformation, which is a form of harm to communities and can have real-world security implications. This meets the criteria for an AI Incident because the AI system's use has directly led to harm through misinformation in a conflict setting.

Fake AI Satellite Images Fuel Disinformation in US-Iran War

2026-03-09
SUCH TV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated satellite images being used to create and spread false information about military events, which has already gone viral and influenced public perception and decision-making. This constitutes harm to communities through misinformation and disinformation, fulfilling the criteria for an AI Incident. The AI system's use in generating these images is central to the harm described, and the harm is realized, not just potential.

Fake AI satellite images fuel US-Iran war disinformation

2026-03-09
The Sun Malaysia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake satellite images being used by state-aligned media and propagandists to spread false narratives about military events. The disinformation has already spread widely, influencing public perception and potentially decision-making. This is a direct harm caused by the use of AI systems to generate misleading content, fitting the definition of an AI Incident due to harm to communities and security implications.

Fake AI satellite imagery spurs US-Iran war disinformation

2026-03-09
NonStop Local Montana
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated manipulated satellite images used to spread false narratives about military events, which have garnered millions of views and influenced public opinion. The AI system's outputs have directly caused misinformation, a form of harm to communities and security. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm through disinformation in a conflict context.

Fake AI satellite images fuel US-Iran war disinformation

2026-03-09
The Sun Malaysia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake satellite imagery being used to fabricate false evidence of war damage, which has been widely disseminated and believed by many. This constitutes a violation of rights to accurate information and causes harm to communities by fueling disinformation in a sensitive geopolitical conflict. The AI system's outputs have directly led to this harm, meeting the criteria for an AI Incident. The harm is realized, not just potential, as the disinformation is actively spreading and influencing perceptions.

AI-Generated Disinformation Threatens to Worsen US-Iran Conflict

2026-03-09
Head Topics
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create fabricated satellite images that have been widely distributed and believed, causing misinformation and harm to communities by distorting perceptions of conflict. The AI system's outputs have directly contributed to violations of information integrity and potentially influenced real-world security and political outcomes. The harm is realized and ongoing, not merely potential, meeting the criteria for an AI Incident rather than a hazard or complementary information.

AI-Generated Fake Satellite Image Highlights Disinformation Threat in Wartime

2026-03-09
Head Topics
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create fabricated satellite images that have been actively spread and believed by millions, causing misinformation and potential real-world security implications. This constitutes harm to communities and possibly national security, which aligns with the definition of an AI Incident. The AI system's development and use have directly led to this harm by enabling the creation and dissemination of convincing fake imagery. Therefore, the classification as an AI Incident is appropriate.