AI Models Pressured to Predict US Strike Date on Iran


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Jerusalem Post tested four major AI language models by asking them to predict the exact date of a potential US military strike on Iran. The models initially refused to give a date, but under repeated prompting some eventually offered speculative timelines, highlighting the risk of AI-generated misinformation in sensitive geopolitical contexts.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (large language models) generating predictions about a sensitive geopolitical event. While the AI systems are used to speculate on a potential military strike date, no actual harm or incident has occurred. The AI involvement is in the use phase, producing outputs that are speculative and hypothetical. Since no harm has materialized, but the AI-generated content could plausibly lead to misinformation or escalation if misinterpreted or misused, this fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses, governance, or updates, so it is not Complementary Information. It is not unrelated because AI systems are central to the event described.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI hazard

AI system task
Content generation
Forecasting/prediction


Articles about this incident or hazard


4 AI models predict the date of the US strike on Iran

2026-02-25
Sky News Arabia

AI predicts the date of the US strike on Iran - Shafaq News

2026-02-25
Shafaq News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) used to generate predictions about a sensitive geopolitical event. However, the AI outputs are speculative and do not constitute an actual incident causing harm. There is no indication that the AI predictions have led to injury, disruption, rights violations, or other harms. The article focuses on the AI systems' behavior under pressure in a controlled test rather than any real-world consequences. Thus, it fits the definition of an AI Hazard, as the AI's use could plausibly lead to misinformation or influence perceptions about a serious conflict, but no harm has yet materialized.

Predictions pinpoint the date of the US strike on Iran: when will it be?

2026-02-25
almashhad.news
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (language models) generating speculative responses to a sensitive political question, but no real-world harm or incident has resulted from their use. The AI outputs are hypothetical and do not constitute an AI Incident or Hazard. The main focus is on understanding AI behavior and limitations, which aligns with Complementary Information as it provides context and insight into AI capabilities and challenges without describing an actual or plausible harm event.

Predictions pinpoint the date of the US strike on Iran: when will it be?

2026-02-26
Yemen News Now (Akhbar Al-Yemen Al-An)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (various language models) used to generate speculative predictions about a sensitive geopolitical event. However, the AI outputs did not cause any direct or indirect harm, and there is no indication of plausible future harm stemming from these AI-generated predictions. The focus is on understanding AI behavior and limitations, which fits the definition of Complementary Information. There is no report of an AI Incident (harm caused) or AI Hazard (plausible future harm). The event is not unrelated, since AI systems are central to the article, but it does not meet the threshold for Incident or Hazard.