Goggo Network Launches Pilot of Autonomous Delivery Robots in Zaragoza

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Goggo Network has begun pilot testing AI-powered autonomous delivery robots in Zaragoza, Spain. The robots, designed for safe, low-speed operation, are undergoing safety and functionality simulations with municipal oversight. No incidents or harm have been reported, but the deployment highlights potential future risks associated with AI-driven robots in public spaces.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (autonomous delivery robots) in active use during pilot testing. However, the article only reports ongoing safety simulations and positive initial results without any actual harm or incidents. The focus is on validating safety and acceptance, with no indication of realized injury, property damage, rights violations, or other harms. The coordination with authorities further suggests risk mitigation efforts. Hence, this situation represents a plausible future risk scenario but no realized harm, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.[AI generated]
AI principles
Safety
Robustness & digital security
Privacy & data governance
Accountability
Transparency & explainability
Democracy & human autonomy
Respect of human rights

Industries
Logistics, wholesale, and retail
Robots, sensors, and IT hardware
Mobility and autonomous vehicles
Government, security, and defence
Consumer services

Affected stakeholders
General public

Harm types
Physical (injury)
Economic/Property
Reputational
Psychological
Human or fundamental rights

Severity
AI hazard

Business function
Logistics
Monitoring and quality control
Research and development

AI system task
Recognition/object detection
Goal-driven organisation
Reasoning with knowledge structures/planning


Articles about this incident or hazard

Martin Varsavsky obtains the first licence to operate autonomous delivery robots in a Spanish city

2022-07-25
Cinco Días
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous delivery robots with AI-based navigation and obstacle detection) being deployed under a regulated license with safety and public acceptance validation phases. No harm or malfunction is reported, nor is there any indication of plausible imminent harm. The article primarily provides information about the deployment, regulatory approval, and expected benefits, which fits the definition of Complementary Information rather than an Incident or Hazard.
Goggo's autonomous robots arrive in Zaragoza: we will soon start seeing them delivering packages on its pavements

2022-07-25
Xataka
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous delivery robots) in development and testing phases with no reported harm or malfunction. The article does not describe any realized injury, rights violation, disruption, or other harm caused by these AI systems. It also does not indicate any imminent or plausible risk of harm. Therefore, it does not qualify as an AI Incident or AI Hazard. The article provides contextual information about the AI deployment and its potential impact, fitting the definition of Complementary Information.
Food-delivery robots in Zaragoza are now a reality, thanks to Martin Varsavsky

2022-07-26
ComputerHoy.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous delivery robots) in active use during pilot testing. However, the article only reports ongoing safety simulations and positive initial results without any actual harm or incidents. The focus is on validating safety and acceptance, with no indication of realized injury, property damage, rights violations, or other harms. The coordination with authorities further suggests risk mitigation efforts. Hence, this situation represents a plausible future risk scenario but no realized harm, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.
Zaragoza grants Martin Varsavsky the first logistics licence for autonomous robots in Spain

2022-07-25
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous navigation algorithms) in robots operating in public urban environments. Although the project is currently in a testing phase with no reported harm, the deployment of autonomous delivery robots on public sidewalks inherently carries plausible risks of harm to people or property if the AI malfunctions or fails to detect obstacles properly. Since the article focuses on the pilot launch and safety validation without any realized harm, it fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems with potential safety implications.
Is it ethical to use robots in war?

2022-07-26
La Razón
Why's our monitor labelling this an incident or hazard?
The article describes AI systems (robots with sensors and autonomous firing capabilities) currently deployed in a military context, which could plausibly lead to harm such as injury or death of civilians or soldiers, misuse by non-democratic regimes, or hacking by terrorists. Since no actual harm or incident is reported, but the potential for harm is clearly articulated, this qualifies as an AI Hazard. The discussion is about plausible future harms and ethical concerns rather than a realized AI Incident or a complementary information update.
The future of delivery is here: Zaragoza hosts a trial of autonomous robots

2022-07-25
La Información
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-driven autonomous delivery robots being tested in a city environment. The robots use AI for navigation and obstacle avoidance, indicating the presence of an AI system. However, the article focuses on the pilot testing phase, emphasizing safety validation, human oversight, and collaboration with authorities. There is no mention of any injury, property damage, rights violation, or other harm caused by the AI system. Therefore, this event does not qualify as an AI Incident. It does present a plausible future risk given the nature of autonomous robots operating in public spaces, but since the article does not highlight any near misses or credible warnings of imminent harm, it is best classified as Complementary Information about AI deployment and testing rather than an AI Hazard or Incident.
The first autonomous home-delivery robots arrive in Zaragoza

2022-07-25
El Periódico de Aragón
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous delivery robots) in their use phase, but there is no indication that any harm has occurred or that there is a plausible risk of harm at this stage. The article focuses on the deployment and testing of the technology, emphasizing safety and controlled implementation. Therefore, this is not an AI Incident or AI Hazard. It is not merely unrelated because it involves AI systems, but since no harm or plausible harm is reported, it fits best as Complementary Information about the AI ecosystem and its deployment.
Spain's first home-delivery robots are being tested in Zaragoza

2022-07-27
foodretail
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous delivery robots) in their development and use phases. However, the article focuses on the initial testing and deployment plans without any mention of actual harm or incidents caused by these AI systems. The safety and acceptance validation phases indicate a proactive approach to risk management. Therefore, while there is a plausible potential for future harm (e.g., accidents or disruptions), no harm has yet occurred or been reported. This fits the definition of an AI Hazard, as the autonomous robots could plausibly lead to incidents in the future if issues arise, but no incident has materialized yet.
Goggo Network begins a pilot trial of autonomous delivery robots in Zaragoza

2022-07-25
Diario Siglo XXI
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (autonomous delivery robots with AI driving algorithms). The use is in a pilot phase with safety and acceptance testing ongoing, so no direct or indirect harm has yet occurred. The article does not mention any incidents or malfunctions causing injury, rights violations, or other harms. However, the deployment of autonomous robots in public spaces could plausibly lead to incidents in the future (e.g., accidents, disruptions, or safety concerns). Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the event.
Food-delivery robots in Zaragoza are now a reality, thanks to Martin Varsavsky | Tecnología

2022-07-26
Es de Latino News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous delivery robots) in active testing and deployment. No actual harm or incidents have been reported yet, but the potential for harm exists if the robots malfunction or cause accidents. Therefore, this situation fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to harm in the future, even though current tests indicate safety.
Zaragoza to be filled with 80 Goggo Network robots making deliveries

2022-07-27
infodron.es
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous delivery robots with AI navigation) but does not describe any harm or incident caused by their use. The deployment is in a controlled, phased manner with safety validations and operator supervision, indicating no current or imminent harm. The article's main focus is on the rollout and potential benefits, making it an update on AI ecosystem developments rather than an incident or hazard. Hence, it fits the definition of Complementary Information.