AI-Generated Deepfake Videos Spread Misinformation During Iran Protests

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos depicting protests in Iran have circulated widely online, amassing millions of views. Both pro- and anti-government actors used AI video generators to create hyper-realistic but false content, filling an information void caused by internet restrictions and spreading misinformation that could escalate social tensions.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems through the use of AI-generated deepfake videos. Their use has directly led to harm in the form of misinformation and distorted facts about significant political protests, which can damage communities by undermining trust, spreading false narratives, and potentially escalating tensions. This fits the definition of an AI Incident: the AI system's use has directly harmed communities through misinformation and social disruption.[AI generated]
AI principles
Accountability, Transparency & explainability, Safety, Respect of human rights, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

AI-created Iran protest videos gain traction

2026-01-15
today.rtl.lu
AI-created Iran protest videos gain traction

2026-01-15
KTBS
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely AI-generated deepfake videos. The use of these AI systems to create and spread false visual content about protests directly leads to harm by misleading the public and distorting information during a sensitive political crisis. This misinformation can exacerbate social tensions and undermine human rights such as the right to truthful information. Therefore, this qualifies as an AI Incident due to realized harm to communities through misinformation caused by AI-generated content.
AI-created Iran protest videos gain traction

2026-01-15
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is clear as the videos are AI-generated deepfakes. The use of these AI-generated videos to spread misinformation about protests constitutes harm to communities by misleading the public and potentially escalating social tensions. Since the videos are actively spreading and have amassed millions of views, the harm is occurring, making this an AI Incident rather than a mere hazard or complementary information.
AI-generated Iran protest videos spread amid internet blackout

2026-01-15
The Sun Malaysia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos (deepfakes) being used to spread false narratives about protests in Iran, which have amassed millions of views. This misinformation can harm communities by creating confusion, undermining trust in information, and potentially escalating tensions. The AI systems' role in fabricating and distributing these videos is central to the harm described, meeting the criteria for an AI Incident involving harm to communities.
Fake AI videos of Iran protests 'fill void' left by internet shutdown

2026-01-15
The National
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI video generators) used to create and disseminate false video content about a sensitive political situation. This use of AI has directly led to harm to communities by spreading misinformation and potentially exacerbating social unrest. The harm is realized as the videos have been viewed millions of times and influence public opinion. Therefore, this qualifies as an AI Incident due to the direct role of AI-generated content in causing harm to communities through misinformation.
Iran Protests Viral Video Of 'Trump Street' Is Totally Fake

2026-01-15
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake videos that are actively spreading misinformation about the Iran protests. The misinformation is causing harm to communities by distorting the truth and potentially influencing public perception and social stability. The AI system's development and use in creating these videos directly lead to this harm. The article confirms the videos are AI-generated and have been widely disseminated, indicating realized harm rather than just potential harm. Hence, this is an AI Incident rather than a hazard or complementary information.
AI-generated videos flood the web

2026-01-15
Le Journal de Montréal
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fabricated video content that misrepresents real-world events, leading to misinformation and social harm. The AI's role is pivotal in creating and spreading these false narratives, which directly harm communities by distorting facts and potentially escalating tensions. Since the harm (misinformation and its societal impact) is occurring and linked directly to the use of AI-generated content, this qualifies as an AI Incident under the framework.
Streets renamed 'Trump Street': AI-generated videos of the Iranian protests flood social media

2026-01-15
La Voix du Nord
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions videos generated by AI that falsely depict protests and counter-protests in Iran, which have been widely viewed and shared on social media. The AI system's outputs are directly causing harm by spreading misinformation, which is a form of harm to communities. The harm is realized, not just potential, as the videos have already accumulated millions of views and influence public perception. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and social disruption.
Lacking real footage, AI-generated videos of protests in Iran flood the web

2026-01-15
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fabricated video content that is being used to misinform and manipulate public understanding of significant political events. This constitutes a violation of rights related to access to truthful information and causes harm to communities through misinformation. Since the AI-generated content is actively spreading and influencing public discourse, the harm is realized, not just potential. Therefore, this qualifies as an AI Incident due to the direct role of AI-generated misinformation causing harm to communities.
AI-generated videos of protests in Iran flood the web

2026-01-15
La Libre.be
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fabricated videos that misrepresent real-world events, a direct use of AI leading to harm by spreading misinformation. This misinformation can disrupt social cohesion and distort public perception, harming communities. The AI-generated videos are actively disseminated and have significant reach (e.g., 720,000 views), indicating realized rather than merely potential harm. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing harm through misinformation and manipulation of public discourse.
In Iran, cut off from the internet, AI-generated images purport to show the protests

2026-01-15
TV5MONDE - Informations
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fabricated visual content that misrepresents real-world events, which directly leads to harm by spreading misinformation and manipulating public opinion during a critical political crisis. This constitutes harm to communities and the information environment, fitting the definition of an AI Incident. The AI-generated videos have already been viewed millions of times, indicating realized harm rather than just a potential risk. Therefore, this is classified as an AI Incident due to the direct role of AI-generated content in causing informational harm and social disruption.
"Promoting their own narratives about the chaos": exploiting the fact that reality is trapped inside an Iran cut off from the Internet, fake AI-generated videos are inflaming the web

2026-01-15
BFM BUSINESS
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fake videos that are disseminated widely on social media, directly causing harm by spreading misinformation and exacerbating social tensions during a politically sensitive period. The harm to communities through disinformation and manipulation is clearly articulated and ongoing. The AI's role is pivotal as the videos are AI-generated and are central to the misinformation campaign. Hence, this meets the criteria for an AI Incident rather than a hazard or complementary information.