Glovo Deploys Autonomous Delivery Robots in Madrid, Raising Labor Displacement Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Glovo, in partnership with Goggo Network, launched a pilot of autonomous delivery robots in Madrid in 2022. These AI-powered robots navigate sidewalks and deliver orders, potentially replacing human couriers. The deployment has sparked concerns over labor displacement and regulatory challenges, marking a significant AI-driven shift in urban delivery services.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (autonomous delivery robots) in a real-world environment. Although no harm has yet occurred, the article highlights the experimental nature of the deployment and the need to evaluate interactions with pedestrians, implying potential future risks. Since no actual harm or incident is reported, but plausible future harm exists, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Accountability, Human wellbeing, Democracy & human autonomy

Industries
Mobility and autonomous vehicles; Logistics, wholesale, and retail

Affected stakeholders
Workers

Harm types
Economic/Property

Severity
AI hazard

Business function
Logistics

AI system task
Recognition/object detection; Reasoning with knowledge structures/planning; Goal-driven organisation


Articles about this incident or hazard

Goodbye, delivery workers? These are the new robots with which Glovo hopes to avoid labor disputes

2021-12-14
https://www.iproup.com/economia-digital/595-emprendedor-startup-tecnologia-Mercado-Libre-va-de-compras-a-la-provincia-de-Santa-Fe
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous delivery robots) in a real-world environment. Although no harm has yet occurred, the article highlights the experimental nature of the deployment and the need to evaluate interactions with pedestrians, implying potential future risks. Since no actual harm or incident is reported, but plausible future harm exists, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Glovo, Mallorca, and Dani García turn to robots for food delivery

2021-12-13
Cinco Días
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as autonomous delivery robots and a driverless food truck, which are AI systems by definition. The article focuses on their deployment pilots and regulatory context, with no mention of any injury, property damage, rights violations, or other harms having occurred. However, the autonomous nature of these vehicles operating in public spaces implies a credible risk of future harm (e.g., accidents, pedestrian safety issues). Hence, the event is best classified as an AI Hazard rather than an Incident or Complementary Information.

Meet the robots that now carry food and packages without a human escort

2021-12-14
infobae
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous delivery robots) being deployed and tested, but there is no indication that any harm has occurred or that an incident has taken place. The article focuses on the introduction and operational plans of these AI systems, including safety measures and monitoring, without reporting any direct or indirect harm. Therefore, this is not an AI Incident. While there is potential for future harm, the article does not emphasize credible or imminent risks or hazards arising from these systems. The main content is about the deployment and operational context, which fits best as Complementary Information, providing context and updates on AI system use and governance in urban delivery.

The autonomous 'riders' arrive: these are the Glovo robots that will deliver food in 2022

2021-12-16
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous delivery robots with AI navigation). However, it is a pilot test without any reported harm or malfunction. The article emphasizes the experimental nature and regulatory exploration, with no indication of realized or imminent harm. Therefore, it does not qualify as an AI Incident or AI Hazard. It provides complementary information about AI deployment and governance considerations in the delivery sector.

Glovo bets on autonomous robots as its new riders starting in 2022

2021-12-16
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous delivery robots) in a real-world setting, which could plausibly lead to harm such as accidents or disruptions if the robots malfunction or fail to navigate safely. However, since the article only discusses planned deployment and initial testing without any reported incidents or harm, it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the introduction and potential risks of the AI system, not on responses or updates to past events. It is not unrelated because the autonomous robots clearly involve AI systems.

The end of the riders? Glovo will deliver food to homes with autonomous robots in Madrid starting in 2022

2021-12-16
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous delivery robots and drones) being used for food delivery, which is a clear AI system involvement. However, the article does not report any injury, rights violation, disruption, or other harm caused by these AI systems. It also does not indicate any plausible future harm or risk arising from their deployment. Therefore, this is not an AI Incident or AI Hazard. The article provides information about new AI applications and their deployment plans, which fits the definition of Complementary Information as it enhances understanding of AI developments and their societal implications without reporting harm or risk of harm.

Glovo tests 'riders' free of labor disputes: how Varsavsky's delivery robots work

2021-12-14
El Confidencial
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of autonomous delivery robots equipped with GPS, cameras, and algorithms for navigation and delivery tasks. However, it only describes a pilot test phase with no reported harm or malfunction leading to injury, rights violations, or other harms. The involvement of AI is in the use phase, but no direct or indirect harm has occurred. The article also discusses regulatory and operational challenges, which are typical of early-stage AI deployment. Since no harm or plausible future harm is described, and the main focus is on the pilot and regulatory context, the event fits the definition of Complementary Information rather than an Incident or Hazard.

More and more autonomous vehicles: is this the end of delivery workers?

2021-12-16
Iprofesional.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (autonomous delivery robots and food trucks with AI-based navigation and obstacle recognition). The use of these AI systems is leading to the replacement of human delivery workers, which can be considered a form of labor-related harm (displacement of workers). Since this harm is already occurring as the robots are being deployed and used, it qualifies as an AI Incident under the category of violation of labor rights or harm to groups of people through job displacement. There is no indication that the article is merely about potential future harm (hazard) or a response to a past incident (complementary information). Therefore, the event is best classified as an AI Incident.

In the absence of drones, robots will do: Glovo to launch a pilot project of autonomous delivery robots in Madrid in 2022

2021-12-14
Genbeta
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous delivery robots with obstacle recognition AI) being deployed in a real-world environment. While no injury, disruption, or rights violation has been reported, the use of such AI systems in public spaces could plausibly lead to incidents in the future, such as accidents or labor displacement. The article focuses on the pilot launch and the potential implications, not on any actual harm or incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Glovo launches its first autonomous home-delivery robots together with Goggo: they will be on the streets of Madrid in 2022

2021-12-13
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous delivery robots) whose development and use are described. However, there is no indication that these systems have caused any harm or incidents yet. The article discusses a pilot launch and regulatory framework facilitating their operation, highlighting potential benefits and safety precautions. Since no realized harm or direct/indirect incidents are reported, but the deployment of AI systems with potential risks is underway, this qualifies as an AI Hazard due to the plausible future risk of harm from autonomous delivery robots operating in public spaces.

Glovo challenges Yolanda Díaz and tests its contract-free robot delivery workers in Madrid

2021-12-15
Libre Mercado
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear: an autonomous delivery robot navigating public sidewalks. The event is about its use in a pilot test. No direct or indirect harm has been reported; no injuries, rights violations, or disruptions have occurred. The article highlights the potential for these robots to replace human delivery workers, which could plausibly lead to labor market harms or regulatory challenges in the future. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm but has not yet done so. The article is not primarily about responses or updates to past incidents, so it is not Complementary Information. It is not unrelated because it involves an AI system with potential societal impact.