Israeli Government Uses AI to Promote Pro-Israel Narratives

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Israeli government signed a $6 million contract with U.S. firm Clock Tower to produce media content aimed at influencing AI models like ChatGPT to adopt pro-Israel narratives. The campaign targets Generation Z via social media and search engines, raising concerns about AI-driven dissemination of biased information.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (ChatGPT and similar models) being intentionally manipulated to produce biased content, a use of AI that leads to harm in the form of misinformation and biased narratives. This constitutes a violation of rights and harm to communities as defined in the framework. It therefore qualifies as an AI Incident rather than a hazard or complementary information, since the manipulation and its impact are actively occurring.[AI generated]
AI principles
Accountability; Fairness; Transparency & explainability; Democracy & human autonomy; Robustness & digital security; Respect of human rights

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation; Organisation/recommenders


Articles about this incident or hazard


Israel trains 'ChatGPT' to adopt its narrative

2025-10-01
annahar.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT and similar models) being intentionally manipulated to produce biased content, a use of AI that leads to harm in the form of misinformation and biased narratives. This constitutes a violation of rights and harm to communities as defined in the framework. It therefore qualifies as an AI Incident rather than a hazard or complementary information, since the manipulation and its impact are actively occurring.

Israel signs a $6 million contract to produce content supporting its narrative across various social media platforms

2025-09-30
JawharaFM (Jawhara FM)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (e.g., AI models like ChatGPT and AI-driven search engine optimization platforms) to influence narratives and public opinion. However, the article does not report any direct or indirect realized harm such as injury, rights violations, or disruption caused by this AI use. Instead, it describes a strategic use of AI tools to shape information and influence audiences, which could plausibly lead to harms like misinformation or biased information dissemination in the future. Since no actual harm is reported yet, but there is a credible risk of future harm, this qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential impact and strategic use of AI for influence, not on responses or updates to past incidents.

The Israeli government signs a $6 million contract with a U.S. company aimed at influencing AI models | Libya Observer

2025-10-01
ar.libyaobserver.ly
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems (e.g., ChatGPT and other AI models) is explicit, as the contract aims to influence these models to produce biased content. The use of AI-generated content to sway public opinion constitutes indirect harm to communities by spreading biased narratives, which fits the definition of an AI Incident. The harm is realized as the content is intended for distribution and influence, not merely a potential risk. Therefore, this event qualifies as an AI Incident due to the direct use of AI systems to cause harm through biased information dissemination.

Sawt Al-Haq - The occupation trains ChatGPT to adopt its narratives

2025-09-30
Sawt Al-Haq News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI systems like ChatGPT are being trained or influenced to adopt biased narratives, a case of AI development and use leading directly to harm. The harm here is the violation of rights and harm to communities through biased information dissemination. The involvement of AI in producing or shaping content that promotes a particular political agenda at scale fits the definition of an AI Incident. The harm is ongoing and intentional, not merely potential, as the content is actively produced and disseminated to influence AI outputs and public perception.

U.S. website: Israel trains ChatGPT to adopt its narratives

2025-09-30
Al Jazeera Net
Why's our monitor labelling this an incident or hazard?
The event involves the use and manipulation of AI systems (e.g., ChatGPT and AI-driven search ranking tools) to propagate biased narratives. Although the article does not report a realized harm, the deliberate effort to influence AI models to adopt partial narratives constitutes a credible risk of harm to communities through misinformation and manipulation. This fits the definition of an AI Hazard, as the development and use of AI systems in this manner could plausibly lead to violations of rights or harm to communities. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it highlights a specific AI-related activity with potential for harm.

"Responsible Statecraft": "Israel" seeks to recruit "ChatGPT"

2025-10-02
Al Mayadeen Network
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems such as ChatGPT and predictive AI tools to create and optimize content for propaganda purposes. While this involves AI system use, there is no indication that this use has directly or indirectly caused harm (such as misinformation causing societal harm, violation of rights, or other harms). The event describes ongoing strategic use of AI in media influence, which is a significant development in the AI ecosystem and governance context but does not meet the threshold for an AI Incident or AI Hazard. Hence, it fits the definition of Complementary Information, as it enhances understanding of AI's societal impact and strategic applications without reporting a specific harm or credible risk of harm.

"Israel" engages a U.S. company to train "ChatGPT" on a supportive narrative.. these are the details of the deal

2025-10-02
Palestine Online
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT and AI-driven search ranking tools) is explicitly involved in the development and use phases to propagate a biased narrative. The use of AI to influence public opinion and manipulate search results can be seen as causing harm to communities by spreading biased or misleading information, which fits within the harm category of harm to communities. Since the event describes an active campaign using AI systems to influence narratives and public opinion, this constitutes an AI Incident due to the realized harm of biased information dissemination and manipulation of public discourse.