Automated Drone Traffic Management to Launch in A Coruña and Ferrol by 2026


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Instituto Tecnológico de Galicia (ITG) is developing AI-driven systems to enable automated, coordinated drone traffic in A Coruña and Ferrol by 2026. The technology aims to safely manage hundreds of commercial drones for services such as deliveries and emergency response; a malfunction of these AI systems could pose risks in the future.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-related technology for automated UAV traffic management, including software systems (USSP) that coordinate drone flights. While the technology is under development and testing, no incidents or harms have been reported. The potential for future harm exists given the complexity and scale of drone operations in urban airspace, which could lead to injury, disruption, or other harms if failures occur. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future but has not yet done so.[AI generated]
AI principles
Safety; Robustness & digital security; Privacy & data governance; Accountability; Transparency & explainability; Respect of human rights; Democracy & human autonomy

Industries
Mobility and autonomous vehicles; Logistics, wholesale, and retail; Government, security, and defence; Robots, sensors, and IT hardware; IT infrastructure and hosting; Digital security

Affected stakeholders
General public; Consumers

Harm types
Physical (death); Physical (injury); Economic/Property; Public interest; Human or fundamental rights; Reputational; Psychological

Severity
AI hazard

Business function
Logistics; Monitoring and quality control; Maintenance; Research and development

AI system task
Goal-driven organisation; Reasoning with knowledge structures/planning; Event/anomaly detection; Recognition/object detection; Forecasting/prediction


Articles about this incident or hazard


Cientos de drones podrán volar sobre A Coruña y Ferrol en 2026 gracias a tecnología del ITG [Hundreds of drones will be able to fly over A Coruña and Ferrol in 2026 thanks to ITG technology]

2024-04-10
El Español

La tecnología de ITG permitirá el tráfico automatizado de drones en A Coruña y Ferrol a partir de 2026 [ITG technology will enable automated drone traffic in A Coruña and Ferrol from 2026]

2024-04-11
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DALIAH) designed for automated drone traffic management, which qualifies as an AI system under the definitions. The event concerns the development and planned use of this AI system to manage UAV traffic safely and efficiently. While the system is intended to improve safety and coordination, the deployment of automated drone traffic control inherently carries risks that could plausibly lead to incidents such as collisions or disruptions. Since no actual harm or incident is reported, but the system's future use could plausibly lead to harm, this fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses to past incidents or legal/governance developments, so it is not Complementary Information. It is clearly related to AI, so it is not Unrelated.

Tecnología gallega para dirigir el tráfico aéreo de drones comerciales [Galician technology to direct the air traffic of commercial drones]

2024-04-11
La Voz de Galicia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as managing drone air traffic through real-time decision-making and coordination. However, the article does not report any realized harm or incidents caused by the AI system. Instead, it presents the system as a preventive tool to enable safe drone operations and avoid potential accidents or disruptions. Therefore, the event represents a plausible future risk mitigation scenario rather than an actual incident. Given the credible potential for harm if such systems were absent or malfunctioning, but no harm has yet occurred, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

El centro ITG permitirá el tráfico automático de drones entre la ciudad y Ferrol [The ITG centre will enable automatic drone traffic between the city and Ferrol]

2024-04-11
La Opinion A Coruña - laopinioncoruna.es
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for automated drone traffic management, which is a complex AI application involving real-time decision-making and coordination. However, the article does not report any realized harm or incident resulting from the AI system's development or use. Instead, it describes a future deployment and ongoing validation efforts. While there is potential for future harm if such systems malfunction or are misused, the article does not indicate any current or imminent harm or hazard. Therefore, this is best classified as Complementary Information, providing context on AI system development and deployment without reporting an incident or hazard.

A Coruña contará con tráfico automatizado de drones en 2026 [A Coruña will have automated drone traffic in 2026]

2024-04-10
El Ideal gallego
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of AI systems for automated drone traffic management and drone operation. However, the article does not report any realized harm or incidents caused by these AI systems. Instead, it discusses future deployment and validation efforts, implying potential future use but no current harm or incident. Therefore, this qualifies as an AI Hazard because the deployment of automated drones could plausibly lead to incidents or harms in the future, such as accidents or disruptions, but no such harm has yet occurred or been reported.