Unfounded AI prediction of global blackout in 2027


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Various AI tools have forecast a global power blackout on April 27, 2027, sparking widespread fear across social platforms. The speculative claim lacks scientific evidence, prompting energy experts and authorities to debunk the prediction and warn that AI-generated scenarios can fuel misinformation and public panic if taken at face value.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved as it made a prediction about a future event. However, the prediction is speculative and lacks scientific support, and no actual harm or incident has occurred. The event plausibly raises concerns about potential future harm (a global blackout) but only as a hypothetical scenario without evidence. Therefore, this qualifies as an AI Hazard because the AI's prediction could plausibly lead to public fear or misinformation about a future incident, even though the event itself is not confirmed or realized.[AI generated]
AI principles
Transparency & explainability, Safety, Accountability, Human wellbeing, Robustness & digital security

Industries
Energy, raw materials, and utilities; Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public

Harm types
Psychological, Public interest

Severity
AI hazard

AI system task
Forecasting/prediction, Content generation


Articles about this incident or hazard


Global alert over a new power blackout: AI puts an exact date on when the next outage would occur

2025-05-21
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it made a prediction about a future event. However, the prediction is speculative and lacks scientific support, and no actual harm or incident has occurred. The event plausibly raises concerns about potential future harm (a global blackout) but only as a hypothetical scenario without evidence. Therefore, this qualifies as an AI Hazard because the AI's prediction could plausibly lead to public fear or misinformation about a future incident, even though the event itself is not confirmed or realized.

Alert over a massive worldwide power blackout from which no one would be safe: when it would occur

2025-05-20
Radio Mitre
Why's our monitor labelling this an incident or hazard?
The AI system is involved in the development phase by analyzing data to predict a future event. No actual harm has occurred yet, but the AI's prediction indicates a credible risk of a large-scale blackout with significant consequences. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving harm to critical infrastructure and communities. It is not an AI Incident since the harm is not realized, nor is it merely complementary information or unrelated news.

AI sounds every alarm: there will be a worldwide power blackout in 2027

2025-05-20
Levante
Why's our monitor labelling this an incident or hazard?
The article involves an AI system generating a speculative prediction about a future event (a global blackout) that is not based on evidence and has not materialized. The AI's role is in producing a hypothetical scenario that has caused social alarm but no actual harm. Since no harm has occurred and the AI's prediction could plausibly lead to misinformation or fear in the future, this fits the definition of an AI Hazard rather than an Incident. It is not Complementary Information because the main focus is the AI's speculative prediction, not a response or update to a prior incident. It is not Unrelated because the AI system is central to the event described.

What did AI predict about a possible global blackout on April 27, 2027?

2025-05-20
www.expreso.ec
Why's our monitor labelling this an incident or hazard?
The event involves AI systems only as the purported source of a prediction that is false and unsubstantiated. No actual harm has occurred or is imminent due to AI system malfunction or use. The article primarily addresses misinformation and public reaction, providing expert and official clarifications. Therefore, it does not describe an AI Incident or AI Hazard but rather provides complementary information about the societal context and responses to AI-related misinformation.