Iran unveils 'Gaza' ultra-heavy AI-guided drone with 4,000 km range


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Iran's Revolutionary Guard revealed the 'Gaza', an ultra-heavy drone with a 22 m wingspan, 35-hour endurance, 4,000 km range and capacity for thirteen bombs (500 kg payload). Unveiled during 'Great Prophet' Gulf drills, it flies at 350 km/h and is complemented by AI-guided Qaem and Almas missiles tested on Mohajer-6 and Ababil-5 drones.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions missiles equipped with AI and a new drone with autonomous capabilities, indicating AI system involvement. Although no direct harm or incident is reported, the development and deployment of AI-enabled military drones and missiles with lethal capacity plausibly could lead to significant harm, including injury or violations of human rights. The event is thus best classified as an AI Hazard, reflecting the credible risk posed by these AI systems in a military context. There is no indication of an actual AI Incident or complementary information focus, and the event is clearly related to AI systems, so it is not unrelated.[AI generated]
AI principles
Accountability
Safety
Respect of human rights
Transparency & explainability
Robustness & digital security
Democracy & human autonomy
Human wellbeing

Industries
Government, security, and defence
Robots, sensors, and IT hardware
Mobility and autonomous vehicles
Digital security

Affected stakeholders
General public
Government

Harm types
Physical (death)
Physical (injury)
Public interest
Human or fundamental rights
Economic/Property
Psychological

Severity
AI hazard

Business function:
Other

AI system task:
Recognition/object detection
Goal-driven organisation
Reasoning with knowledge structures/planning


Articles about this incident or hazard


New threat from Iran: the Revolutionary Guard deployed an "ultra-heavy" drone capable of carrying up to 13 bombs

2025-01-27
infobae

Iran presents an ultra-heavy drone carrying 13 bombs and missiles guided by artificial intelligence

2025-01-27
La Razón
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in missiles and drones designed for combat and targeting, which are military AI systems. The development and deployment of such AI-enabled weapons constitute a plausible future risk of harm (injury, death, destruction) due to their autonomous or semi-autonomous capabilities. Since no actual harm or incident is reported yet, but the potential for harm is credible and significant, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the presentation and capabilities of AI-enabled weapons, which inherently carry risk of harm.

Iran.- Iran unveils a new drone with a range of 4,000...

2025-01-27
Notimérica
Why's our monitor labelling this an incident or hazard?
The article explicitly describes a drone with military capabilities, including carrying explosives and destroying targets during manoeuvres. Such drones typically incorporate AI systems for navigation, targeting, and autonomous operation. While no actual harm or incident is reported, the existence and deployment of such AI-capable drones could plausibly lead to harm (injury, death, or disruption) in the future. Hence, it fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it Unrelated, since it involves an AI system with potential for harm.