AI-Driven Disinformation Campaign Promotes Reza Pahlavi on Social Media


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Investigations by Citizen Lab and Haaretz reveal that Israeli-backed networks used AI-generated deepfake videos, fake accounts, and AI-created profile images to promote Reza Pahlavi and monarchist narratives on Persian-language social media. The Israeli-funded campaign aimed to manipulate public opinion and destabilize Iran through coordinated misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems explicitly for generating fake social media content, including videos and synthetic faces, as part of a psychological warfare campaign. The AI's role is pivotal in producing and amplifying disinformation that harms the social fabric and political stability of Iran. The harm is realized and ongoing, as the campaign actively manipulates public opinion and spreads false narratives. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights through disinformation and manipulation.[AI generated]
AI principles
Accountability, Transparency & explainability, Democracy & human autonomy, Respect of human rights, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest, Human or fundamental rights, Psychological

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard


Manufacturing a savior out of dictatorship with AI / How does Israel advance its propaganda campaign against Iran by spotlighting Reza Pahlavi?

2025-10-05
روزنامه دنیای اقتصاد
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems explicitly for generating fake social media content, including videos and synthetic faces, as part of a psychological warfare campaign. The AI's role is pivotal in producing and amplifying disinformation that harms the social fabric and political stability of Iran. The harm is realized and ongoing, as the campaign actively manipulates public opinion and spreads false narratives. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights through disinformation and manipulation.

Haaretz's big exposé: Israel's online campaign in the war with Iran called for unrest and the restoration of the monarchy

2025-10-03
خبرآنلاین
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos and AI-created fake social media profiles as part of a coordinated influence operation. This operation has directly led to harm by spreading false information, inciting protests, and attempting to destabilize the Iranian government, which constitutes harm to communities and potentially to property and public order. The AI systems' role is pivotal in generating and amplifying the disinformation, making this an AI Incident rather than a mere hazard or complementary information. The harm is realized and ongoing, not just potential.

Traitors collaborate with Israel to deceive Iranian public opinion - Tasnim

2025-10-04
خبرگزاری تسنیم
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology, AI tools for content generation and dissemination) in a malicious disinformation campaign that has already caused harm by spreading false narratives and inciting unrest in Iran. This constitutes harm to communities and a violation of rights through manipulation and misinformation. Therefore, it qualifies as an AI Incident because the AI system's use has directly led to significant harm.

Traitors collaborate with Israel to deceive Iranian public opinion

2025-10-04
فردانیوز
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology, AI tools for content generation) in a malicious campaign that has directly led to harm to communities by spreading false narratives, inciting unrest, and attempting to destabilize Iran. The presence of AI is explicit in the use of AI tools for content creation and dissemination. The harm is realized and ongoing, as the campaign has influenced social media discourse and public opinion, meeting the criteria for an AI Incident rather than a hazard or complementary information.

An artificial shah on a Zionist server

2025-10-05
فردانیوز
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI tools to produce messages, videos, and fake profiles that simulate grassroots support for Reza Pahlavi, which is actually orchestrated by Israeli government entities. This AI-driven misinformation campaign manipulates public opinion, spreads false narratives, and aims to destabilize Iranian society, which is a clear harm to communities and a violation of rights. The AI system's development and use are central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Zionist monarchy: Haaretz exposes Israel's sweeping operation against Iran | Behind the campaign and the project's main objective

2025-10-05
همشهری آنلاین
Why's our monitor labelling this an incident or hazard?
The report explicitly describes the use of AI systems (e.g., AI-generated videos, fake accounts managed with AI tools) by Israeli intelligence to conduct a disinformation campaign targeting Iran. The campaign's purpose is to destabilize Iran, promote division, and support military aggression, which directly harms communities and violates rights. The AI system's use is central to the operation's success, making it a direct cause of harm. Hence, this is an AI Incident rather than a hazard or complementary information.

It has begun: Israel's online operation to justify attacks on Iran | Traitors collaborate with Israel to deceive Iranian public opinion

2025-10-04
همشهری آنلاین
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology, AI-generated content) in an active disinformation campaign that has already been deployed and is influencing public opinion and social stability in Iran. This constitutes an AI Incident because the AI system's use has directly led to harm to communities (disinformation, social unrest) and violations of rights (manipulation of public discourse). The article details realized harm rather than just potential risk, and the AI system's role is pivotal in the operation's execution and impact.

Investigative reports on Israel's campaign to promote Prince Pahlavi and against the Islamic Republic

2025-10-04
رادیو فردا
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and operate fake social media accounts and generate content for a coordinated influence campaign. This use of AI has directly led to harm to communities by spreading misinformation and manipulating public opinion, which fits the definition of an AI Incident under harm category (d) - harm to communities. The article describes realized harm through active dissemination and influence, not just potential harm, and the AI system's role is pivotal in enabling the scale and coordination of the campaign. Therefore, this is classified as an AI Incident.

BBC Persian report on Israel's support for Reza Pahlavi through the creation of fake accounts

2025-10-05
بالاترین
Why's our monitor labelling this an incident or hazard?
The use of AI to generate fake accounts for political promotion constitutes an AI system's use leading to harm to communities through misinformation and manipulation. The involvement of AI in creating fake accounts and the active campaign indicates direct use of AI systems causing harm. Therefore, this qualifies as an AI Incident under the harm category of harm to communities.

Traitors collaborate with Israel to deceive Iranian public opinion (special report)

2025-10-03
kayhan.ir
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the use of AI tools (deepfake, AI-generated content) in a large-scale disinformation operation involving fake social media accounts. This operation has already influenced public opinion and social stability in Iran, including calls for unrest and misinformation about events such as the attack on Evin prison. The AI systems' use in generating and spreading false narratives directly contributes to harm to communities and societal disruption, fitting the definition of an AI Incident. The involvement is in the use of AI systems for malicious purposes, causing realized harm rather than just potential harm.

Haaretz's big exposé: Israel, using artificial intelligence and a network of fake online accounts, is behind the campaigns supporting Reza Pahlavi

2025-10-03
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fake profile images and the operation of a network of fake accounts spreading misleading content. This AI-enabled misinformation campaign is directly linked to harm to communities by manipulating public discourse and potentially influencing political events. The involvement of AI in generating fake content and coordinating the campaign meets the definition of an AI system causing direct harm. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

An Israel-backed campaign to promote Reza Pahlavi on social media

2025-10-04
IranWire
Why's our monitor labelling this an incident or hazard?
The report explicitly states the use of AI and fake accounts to promote a political figure, which implies AI system involvement in generating or managing these accounts. The campaign's active nature and its role in influencing public opinion or political narratives can harm communities by spreading misinformation or manipulating discourse. Therefore, the AI system's use has directly led to harm to communities, fitting the definition of an AI Incident.

Israel's efforts on behalf of Reza Pahlavi exposed

2025-10-04
noandish.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake profile pictures and AI-produced videos used in a coordinated disinformation campaign. The campaign has directly led to harm by manipulating public opinion, spreading false information, and coordinating political agitation, which constitutes harm to communities. The AI system's development and use are integral to the incident. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to significant harm to communities through misinformation and political manipulation.

An influence operation using forgery and artificial intelligence to promote the Shah's son and campaign for the monarchy

2025-10-04
mojahedin.org
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake videos and AI-generated social media content. The use of fake accounts and AI-generated content to manipulate public opinion and promote a political agenda constitutes a violation of rights related to truthful information and harms communities by spreading misinformation and propaganda. The harm is realized as the campaign is active and influencing discourse, meeting the criteria for an AI Incident. Therefore, this event is classified as an AI Incident due to the direct harm caused by AI-enabled misinformation and manipulation.

Zionist monarchy

2025-10-05
مشرق نیوز
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (AI-generated videos, fake accounts managed by AI) by the Israeli regime to conduct a disinformation campaign aimed at destabilizing Iran and promoting separatism. This campaign has already caused harm by spreading false narratives, inciting unrest, and undermining national unity, which constitutes harm to communities and political rights. The AI system's development and use are directly linked to these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Psychological operation or strategy? Behind the exposure of a cyber operation using fake anti-Iranian accounts

2025-10-06
روزنامه دنیای اقتصاد
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-generated fake accounts, deepfake videos) in a coordinated cyber operation that has directly led to harm to communities through misinformation and psychological manipulation. The AI systems were used to produce and disseminate false content and fake personas, which is a clear example of AI-enabled harm. The article details actual operations and their impacts, not just potential risks, so it is an AI Incident rather than a hazard or complementary information. The harm is to social cohesion and public trust, fitting the harm to communities category.

Zionist monarchy - Tasnim

2025-10-05
خبرگزاری تسنیم
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems (including AI-generated deepfake videos and AI-driven fake social media accounts) by the Israeli regime to conduct a disinformation campaign aimed at destabilizing Iran and supporting hostile military actions. This campaign has directly led to harm by undermining national unity, inciting unrest, and supporting violent conflict, which qualifies as harm to communities and violation of rights. The AI system's development and use in this context are central to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

Millions of fake profiles on X and Instagram in support of Reza Pahlavi | Video

2025-10-05
همشهری آنلاین
Why's our monitor labelling this an incident or hazard?
The use of AI-generated fake profiles and coordinated cyber activities to create a false impression of popular support constitutes a violation of rights and causes harm to communities by spreading misinformation and manipulating public discourse. Since the event involves the active use of AI systems to produce and manage fake accounts that directly lead to misinformation and social harm, it qualifies as an AI Incident under the framework.

Haaretz exposes the Netanyahu-Pahlavi project: Israeli monarchist accounts

2025-10-05
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to generate and manage fake social media accounts spreading political propaganda. This use of AI has directly led to misinformation and manipulation of public discourse, which harms communities and violates rights related to truthful information and political expression. The involvement of AI in producing and organizing the content is clear, and the harm is realized, not just potential. Hence, the event meets the criteria for an AI Incident.

Haaretz's full report on Israel's campaign in support of Prince Reza Pahlavi

2025-10-05
IranWire
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos and AI tools to create fake social media accounts and content as part of a coordinated campaign funded by the Israeli government. The campaign's outputs have been disseminated widely, including false videos timed with military operations, which have been used to manipulate public perception and incite unrest. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and political manipulation. The involvement of AI in generating and spreading false content is clear and central to the event. The harm is realized, not just potential, as the misinformation has influenced public discourse and actions.

Pahlavi the Third: an aged wind-up child

2025-10-06
خبرگزاری باشگاه خبرنگاران (YJC)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-generated content, AI tools for message creation and dissemination) in a malicious campaign that has directly led to misinformation and political manipulation, which harms communities and violates political rights. The article details how AI was used to create fake accounts and spread false narratives during a politically sensitive event (attack on a prison), contributing to an attempted coup. This meets the criteria for an AI Incident because the AI system's use directly led to harm in the form of misinformation and political destabilization.

Who is the Israeli woman behind Reza Pahlavi?

2025-10-06
tabnak.ir
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-generated videos, fake accounts, coordinated messaging) to spread disinformation, which is a direct use of AI leading to harm to communities by manipulating public opinion and political discourse. The article describes realized harm through the active dissemination of AI-generated fake content and coordinated campaigns, meeting the criteria for an AI Incident rather than a hazard or complementary information.

A fresh scandal for Netanyahu and Reza Pahlavi | Behind one front of the cyber war against Iran

2025-10-06
همشهری آنلاین
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (e.g., AI-generated fake accounts and content) in a cyber operation that has directly led to harm by spreading misinformation, undermining social cohesion, and manipulating public opinion within Iran. These actions fulfill the criteria for an AI Incident as they have caused violations of rights and harm to communities through the AI system's use. The article describes realized harm rather than potential harm, so it is not merely a hazard or complementary information.

The exposé that took Reza Pahlavi's breath away!

2025-10-06
بالاترین
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI technology used to operate fake accounts and influence public opinion) in a manner that manipulates information and potentially harms communities by spreading propaganda or misinformation. This constitutes a violation of rights and harm to communities through manipulation of public opinion. Since the AI system's use has directly led to this harm, it qualifies as an AI Incident.

Who is the Israeli woman behind Reza Pahlavi?

2025-10-06
جامعه خبری تحلیلی الف
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-generated videos, coordinated AI-driven messaging) in a malicious campaign to spread misinformation and manipulate public opinion, which is a clear harm to communities and political rights. The AI system's use is central to the incident, as it enables the creation and dissemination of fake content and fake accounts. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-enabled disinformation campaigns.

The broad scope of the Zionist regime's disinformation campaign against Iran: from promoting Reza Pahlavi with fake accounts to the "PrisonBreak" network in the 12-day war

2025-10-06
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies to generate fake content and coordinate social media accounts to spread false narratives and propaganda. The AI-generated deepfake video of the prison bombing was disseminated during an active military attack, misleading media and the public, which is a direct harm to communities and a violation of informational integrity. The involvement of AI in producing and distributing this content is central to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm to communities and societal trust.

A fake prince with an army of fake news

2025-10-07
قدس آنلاین | پایگاه خبری - تحلیلی
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies such as deepfakes and AI-generated avatars in a disinformation campaign that has already been deployed to influence public opinion and incite unrest. This constitutes realized harm to communities through misinformation and manipulation, fitting the definition of an AI Incident. The AI system's role is pivotal in producing and spreading fake content, leading to social harm.

"Research" in cyberspace to build narratives and fake reality: what will you do with the real supporters?!

2025-10-06
kayhan.london
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content (deepfake videos) and coordinated online campaigns using AI, which fits the definition of an AI system's involvement. The event involves the use of AI in a manner that could plausibly lead to harm by spreading misinformation and manipulating public opinion, which can harm communities and violate rights. However, the article primarily critiques the validity of these claims and does not confirm actual realized harm caused by the AI system. Hence, it does not meet the threshold for an AI Incident but does represent a credible AI Hazard due to the plausible risk of harm from such AI-enabled influence operations.

The Israeli woman behind Reza Pahlavi | Mossad's covert operation to crown Pahlavi | + Photo

2025-10-07
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-generated videos, fake accounts, coordinated messaging) in a deliberate campaign to influence public opinion and political outcomes. This use of AI has directly led to harm in the form of misinformation, manipulation of political discourse, and potential violation of rights related to truthful information and political expression. Therefore, it qualifies as an AI Incident due to realized harm caused by AI-enabled misinformation and manipulation.

What good is Reza Pahlavi to Israel? | Jahan News

2025-10-07
جهان نيوز
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fake social media accounts for a political campaign, which constitutes the use of AI systems to spread misinformation or manipulate public discourse. This activity can cause harm to communities by distorting information and influencing political processes. Since the AI system's use has directly led to this harm, this qualifies as an AI Incident under the framework.

Israeli-funded online campaign to restore the monarchy in Iran exposed

2025-10-05
ایکسپریس اردو
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create fake images and videos as part of a coordinated misinformation campaign. This campaign has directly caused harm by misleading people, spreading false information about events like explosions, and inciting political unrest, which fits the definition of an AI Incident due to harm to communities and violation of rights. Therefore, it is classified as an AI Incident.

Israeli-funded campaign in favor of restoring the monarchy in Iran exposed

2025-10-04
DawnNews
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for generating fake social media identities, AI-generated videos, and automated content creation to conduct a disinformation campaign. This campaign has directly led to harm by spreading false information, manipulating communities, and potentially inciting unrest, which qualifies as harm to communities and a violation of rights. The AI system's use in this context is pivotal to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The harm is realized and ongoing, not merely potential.

Israeli-funded online campaign for the restoration of the monarchy in Iran exposed

2025-10-06
Daily Pakistan
Why's our monitor labelling this an incident or hazard?
The use of AI-generated content and fake accounts to spread false information and incite protests constitutes harm to communities, fulfilling the criteria for an AI Incident. The AI system's use in generating misleading videos and images directly contributed to the harm by deceiving the public and influencing political opinions, which is a violation of rights and harms societal stability. Therefore, this event qualifies as an AI Incident.

Israel found to be involved in 'popularizing' the former Shah's son in Iran - Ummat News

2025-10-04
Ummat News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content and AI tools to create fake social media accounts and videos as part of a disinformation campaign. This campaign has directly led to harm by spreading false narratives and destabilizing political environments, which qualifies as harm to communities. Therefore, this event meets the criteria for an AI Incident due to the direct involvement of AI in causing harm through misinformation and political manipulation.

Israel-backed online monarchy campaign in Iran unmasked - Ummat News

2025-10-05
Ummat News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create fake images and videos as part of a disinformation campaign. This campaign has directly caused harm by misleading people, spreading false information about events like explosions, and inciting public anger, which harms communities and violates rights. The AI system's role is pivotal in generating realistic fake content that facilitates this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Israeli-funded online campaign to restore the monarchy in Iran exposed

2025-10-06
Nawaiwaqt
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create fake images and videos as part of a disinformation campaign. This campaign has directly caused harm by misleading people and fomenting political unrest, which fits the definition of an AI Incident due to harm to communities and violation of rights. The AI system's role is pivotal in generating the deceptive content and enabling the scale of misinformation spread. Therefore, this is classified as an AI Incident.

Israeli-funded online campaign to restore the monarchy in Iran unmasked

2025-10-05
Urdu News - Today News - Daily Jasarat News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating misleading content (images and videos) used in a disinformation campaign that has directly led to harm by manipulating public opinion and potentially inciting unrest. This fits the definition of an AI Incident because the AI system's use has directly contributed to harm to communities and violations of rights through misinformation and manipulation.

Israeli-funded campaign in favor of restoring the monarchy in Iran exposed

2025-10-04
Dawn News Urdu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to generate fake social media accounts, AI-generated videos, and content to manipulate public opinion and spread misinformation. The use of AI in this coordinated campaign has directly led to harm to communities by spreading false narratives, inciting unrest, and undermining social stability. The campaign's activities include AI-generated deepfake videos and fake news dissemination, which are clear harms caused by AI misuse. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-enabled misinformation and manipulation.

"PrisonBreak": Citizen Lab's report on an online campaign aimed at the "overthrow" of Iran's government - BBC News فارسی

2025-10-05
BBC
Why's our monitor labelling this an incident or hazard?
The report identifies an organized campaign employing AI tools to influence and destabilize a government, which constitutes harm to communities and political order. The AI system's use in this context is a direct factor in the harm caused by misinformation or manipulation, fitting the definition of an AI Incident due to realized harm through coordinated AI-driven influence operations.

Tel Aviv and Reza Pahlavi's joint project to destabilize Iran with fake news and artificial intelligence! + Video

2025-10-05
خبرگزاری باشگاه خبرنگاران (YJC)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and video forgery by accounts to create and spread false narratives against the Iranian government, which is a direct use of AI systems leading to harm to communities through misinformation and political destabilization. This fits the definition of an AI Incident as the AI system's use has directly led to harm in the form of social and political disruption.

Tel Aviv and Reza Pahlavi's joint project to destabilize Iran + Video

2025-10-05
همشهری آنلاین
Why's our monitor labelling this an incident or hazard?
The accounts mentioned are likely controlled or generated by AI systems or automated bots designed to spread disinformation, which can harm communities by destabilizing social or political environments. Since the accounts have been active and are actively spreading targeted narratives, this constitutes realized harm to communities through misinformation. Therefore, this qualifies as an AI Incident due to the direct role of AI-driven or automated systems in causing harm through disinformation campaigns.

The Zionist hand behind a forgery network in cyberspace

2025-10-05
kayhan.ir
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to create and manipulate videos and images as part of a coordinated campaign to spread false narratives. This misinformation campaign has already occurred and has influenced public opinion during a real conflict, which qualifies as harm to communities. The AI system's involvement in generating fabricated content that was widely shared and believed makes this an AI Incident rather than a hazard or complementary information.

Tel Aviv and Reza Pahlavi's joint project to destabilize Iran!

2025-10-05
مشرق نیوز
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence to generate fake videos and spread false narratives, which are active actions causing harm to the community by destabilizing a country. The involvement of AI in producing and amplifying misinformation that leads to social harm fits the definition of an AI Incident, as the AI system's use has directly led to harm to communities (harm category d).