Spanish Army Tests AI-Enabled Drones and Robots for Future Combat

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Spanish Army is conducting large-scale testing of AI-enabled drones, robots, and autonomous systems at its Viator base in Almería, inspired by warfare in Ukraine. These experiments aim to modernize military capabilities, presenting plausible future risks of harm if such AI systems malfunction or are misused in combat scenarios.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses AI-enabled military systems being tested for battlefield robotization, including armed drones and UGVs with autonomous capabilities. Although no harm or incident is reported, the nature of these systems—especially armed autonomous platforms—poses a plausible risk of harm to persons, communities, or property if deployed or misused. The development and testing of such AI systems for combat purposes align with the definition of an AI Hazard, as they could plausibly lead to AI Incidents involving injury, violation of rights, or harm to communities. Since no actual harm has occurred yet, the classification as AI Hazard is appropriate.[AI generated]
AI principles
Safety
Respect of human rights

Industries
Government, security, and defence
Robots, sensors, and IT hardware

Affected stakeholders
General public
Workers

Harm types
Physical (injury)
Physical (death)
Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard


La Legión puts the latest 'made in Spain' advances to robotize the battlefield to the test

2026-04-15
Libertad Digital

The Ejército de Tierra wants war drones to adapt to modern combat

2026-04-15
El Progreso de Lugo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI's essential role in processing battlefield data and the use of autonomous drones and unmanned systems for attack, reconnaissance, and logistics. Although no actual harm or incident is described, the deployment and experimentation with AI-enabled military drones and autonomous weapons inherently carry plausible risks of causing injury, disruption, or other harms in combat. The event focuses on the development and use of AI systems with clear potential for harm, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

Toward the Ejército de Tierra of 2035: the Centro de Fuerza Futura tests robotics, drones, and electronic warfare in the heart of La Legión in Almería

2026-04-15
Revista Defensa Infodefensa.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and testing of AI systems (e.g., AI for signal detection and classification, autonomous armed robots, drone interception) but does not describe any realized harm or incident caused by these systems. The article is primarily about experimentation and preparation for future military capabilities, with no indication of injury, rights violations, property damage, or other harms occurring or having occurred. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides detailed context on AI development and military transformation efforts, which informs understanding of the AI ecosystem and potential future impacts without reporting a specific incident or hazard.

Viator, the laboratory where the Ejército de Tierra tests drones inspired by the war in Ukraine - EFE

2026-04-16
Agencia EFE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (unmanned aerial and ground vehicles with autonomous capabilities) being developed and tested by the military. While these systems have potential for significant harm if misused or malfunctioning, the article only describes ongoing experimentation and evaluation without any realized harm or incidents. The focus is on future adaptation and capability development, which fits the definition of an AI Hazard (plausible future harm) rather than an AI Incident. It is not Complementary Information because it is not updating or responding to a prior incident, nor is it unrelated as it clearly involves AI systems with potential military applications.

Robots, AI, and drones: La Legión tests Spanish technology for the Army of the future

2026-04-16
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into military robots and command support tools, confirming AI system involvement. The event concerns the use and testing of these AI systems (use phase). No actual harm or incident is reported; the article focuses on experimentation and capability validation. Given the military context and the nature of the systems (armed robots, counter-drone measures, AI for decision support), there is a credible risk that these AI systems could plausibly lead to harm in future combat scenarios. Since no harm has yet occurred, this fits the definition of an AI Hazard. It is not Complementary Information because the article does not update or respond to a prior incident but rather reports on ongoing testing. It is not Unrelated because AI systems are clearly involved and the potential for harm is credible.

The Army tests the combat of the future with robots and drones in Almería

2026-04-15
El Faro de Ceuta
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (robotics, drones with autonomous or semi-autonomous capabilities) being tested by the military, which fits the definition of AI systems. However, the article only describes experimentation and evaluation without any harm or malfunction leading to injury, rights violations, or other harms. There is also no explicit or implicit indication that these tests have or could plausibly lead to harm. The focus is on technological development and integration, making this a case of Complementary Information rather than an Incident or Hazard.

Robotization, electronic warfare, counter-UAS, and advanced connectivity: in Almería, the Ejército de Tierra accelerates the incorporation of new capabilities for future combat

2026-04-15
Defensa.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (robotic and unmanned systems with autonomous capabilities) in their development and testing phase by the military. No harm or violation has occurred yet, nor is there a clear plausible risk of harm described in the article. The focus is on experimentation and capability validation, which supports understanding of AI's evolving role in military operations. This fits the definition of Complementary Information, as it provides supporting context and updates on AI system development and integration without reporting an incident or hazard.

The Ejército de Tierra considers buying drones used in Ukraine as part of its adaptation to modern combat

2026-04-15
Andalucía Información
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous drones, AI for data processing and decision support) in a military context, but there is no indication that their use or malfunction has directly or indirectly caused harm. The article discusses future integration and testing phases, which could plausibly lead to harm in the future, but no harm has yet occurred or been reported. Therefore, this situation constitutes an AI Hazard, as the development and potential use of these AI-enabled military systems could plausibly lead to incidents involving harm, but no incident has yet materialized.

Viator, the laboratory where the Army tests drones inspired by the war in Ukraine

2026-04-15
La Noción
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of autonomous or semi-autonomous drones and UGVs being tested by the military. However, there is no indication that these systems have caused any injury, disruption, rights violations, or other harms. The article focuses on the development and evaluation phase, emphasizing potential future capabilities and adaptation rather than any actual harm or incident. Therefore, this is a plausible future risk context but without any current harm or incident reported. Given the lack of realized or imminent harm, and the focus on testing and development, the event is best classified as Complementary Information, providing context on AI system development and military experimentation without constituting an AI Incident or AI Hazard.

The Army approaches the Ukraine combat front to integrate new combat skills

2026-04-15
NoticiasDe.es
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and testing of AI-enabled robotic and drone systems in military operations, which are AI systems by definition. There is no report of direct or indirect harm caused by these systems yet, but their deployment in warfare settings inherently carries a credible risk of harm to human life and property. The focus is on experimentation and preparation for future combat scenarios, indicating plausible future harm rather than realized harm. Hence, this event fits the definition of an AI Hazard, as it plausibly could lead to AI Incidents involving injury, harm, or other significant consequences in armed conflict.

This is how the Ejército de Tierra's war laboratory tests the technology that will change combat

2026-04-16
ElPeriodico.digital
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous drones, robots, AI interfaces) under development and testing, which could plausibly lead to future harms if misused or malfunctioning in combat scenarios. However, no realized harm or incident is described. The article is primarily about the development and testing phase and the strategic vision for future military AI integration, without reporting any direct or indirect harm. Therefore, it fits the definition of an AI Hazard, as the development and potential future use of these AI systems could plausibly lead to incidents involving harm in military contexts.