Zelensky Warns Europe of AI-Enabled Drone Threats


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Ukrainian President Volodymyr Zelensky, speaking in London, warned European nations about the rising threat of AI-powered drones. He highlighted that such drones, used by Russia and Iran against Ukraine's critical infrastructure, are now affordable for non-state actors, increasing the risk of mass attacks across Europe.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article focuses on the potential dangers of AI-enabled drones and the evolving military threat they represent, which could plausibly lead to AI incidents involving harm to people and communities. Because no specific harm has yet occurred, but the risk is credible and recognised by leaders, this qualifies as an AI Hazard. The mention of AI in drones, and of the defence partnership to counter them, supports the presence of AI systems and the plausible future harm they could cause.[AI generated]
AI principles
Safety; Respect of human rights

Industries
Mobility and autonomous vehicles; Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Physical (death); Physical (injury); Public interest

Severity
AI hazard

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


Zelensky delivers a manifesto-like speech in London on the danger drones pose to Europe: "It is no longer only a rich madman like Putin who can afford this" - HotNews.ro

2026-03-17
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential dangers of AI-enabled drones and the evolving military threat they represent, which could plausibly lead to AI incidents involving harm to people and communities. Since no specific harm or incident has yet occurred as described, but the risk is credible and recognized by leaders, this qualifies as an AI Hazard. The mention of AI in drones and the defense partnership to counter them supports the presence of AI systems and the plausible future harm they could cause.

The shadow of drones looms over Europe: Zelensky warns that countries must prepare for attacks by non-state actors

2026-03-18
Ziare.com
Why's our monitor labelling this an incident or hazard?
The article explicitly references drones that use AI technology for military attacks, including on critical infrastructure, which fits the definition of an AI system. The warnings about non-state actors potentially launching such attacks in Europe indicate a credible risk of harm (to infrastructure and communities) that could plausibly occur in the future. Since no actual harm or attack in Europe is reported, but the risk is clearly articulated and linked to AI-enabled drone technology, the event is best classified as an AI Hazard.

Zelensky's warning: it is no longer only "rich madmen" like Putin who can afford mass attacks

2026-03-18
Cotidianul RO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered drones being used in attacks on critical infrastructure, which constitutes harm to communities and property, fulfilling the criteria for an AI Incident. The involvement of AI in the drones' operation is clear, and the harm is ongoing and direct. Additionally, the warning about future attacks by non-state actors using similar AI technology supports the assessment of current incidents and plausible future harms. Therefore, the event qualifies as an AI Incident due to realized harm caused by AI systems in military drones.

Zelensky warns Europe: drones no longer cost billions

2026-03-17
Financiarul.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions drones integrating AI and becoming more lethal, which fits the definition of an AI system. The warning about potential attacks from various actors using these AI-enabled drones indicates a credible risk of harm to people, communities, and critical infrastructure. Since the article focuses on the plausible future threat rather than describing a realized harm or incident, this qualifies as an AI Hazard. The presence of AI in drones and the potential for their malicious use to cause harm aligns with the definition of an AI Hazard as a plausible future risk of an AI Incident.