Autonomous Shuttles to Begin Public Transport Trials in Hesse, Germany

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Deutsche Bahn and partners will deploy AI-driven autonomous shuttles for public transport in Darmstadt and Offenbach starting in May. Initial tests will include safety drivers, with plans for a fully driverless, on-demand service. While no incidents have occurred, the project introduces plausible future risks associated with operating autonomous vehicles in real traffic. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (fully autonomous vehicles) in public transportation. However, the article only discusses the planned testing and deployment phases without reporting any harm or incidents caused by these AI systems. Since no harm has occurred yet but the deployment could plausibly lead to future AI incidents (e.g., accidents, safety issues), this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses to past incidents or broader governance issues, so it is not Complementary Information. [AI generated]
AI principles
Safety, Robustness & digital security, Accountability, Transparency & explainability, Democracy & human autonomy

Industries
Mobility and autonomous vehicles

Harm types
Physical (injury), Physical (death)

Severity
AI hazard

Business function
Other

AI system task
Recognition/object detection, Goal-driven organisation, Reasoning with knowledge structures/planning


Articles about this incident or hazard

Driverless cars to carry public transport passengers in Rhein-Main

2023-02-22
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (fully autonomous vehicles) in public transportation. However, the article only discusses the planned testing and deployment phases without reporting any harm or incidents caused by these AI systems. Since no harm has occurred yet but the deployment could plausibly lead to future AI incidents (e.g., accidents, safety issues), this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses to past incidents or broader governance issues, so it is not Complementary Information.

Pilot project with autonomous shuttles in the Rhein-Main region

2023-02-22
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous shuttles) in development and use phases, but no harm or malfunction is reported. The article focuses on the planned deployment and testing, with safety drivers initially present and a goal to improve mobility and climate impact. There is no direct or indirect harm described, nor a credible warning of plausible future harm specific to this project. Hence, it does not meet criteria for AI Incident or AI Hazard. It is an informative update on AI use in public transport, fitting the definition of Complementary Information.

"This is not science fiction": Bahn tests autonomous on-demand vehicles in regular road traffic

2023-02-22
Spiegel Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles with Level 4 self-driving capabilities) in development and testing phases. However, there is no indication that any harm has occurred or that the AI systems have malfunctioned or caused injury, disruption, or rights violations. The article focuses on the testing and future integration of these vehicles, which could plausibly lead to harm in the future but have not done so. Therefore, this qualifies as an AI Hazard, as the autonomous vehicles could plausibly lead to incidents once fully operational, but no incident has yet occurred.

Autonomous shuttles to run test services for DB through Hesse starting in May

2023-02-24
heise online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving AI by Mobileye) being deployed in public transport. The article focuses on planned testing and future operation, with no mention of any harm, malfunction, or incident. Since autonomous vehicles have inherent risks that could plausibly lead to harm (e.g., accidents, injuries), the event qualifies as an AI Hazard. It is not an AI Incident because no harm has occurred yet. It is not Complementary Information because it is not an update or response to a prior incident, but a new planned deployment. It is not Unrelated because it clearly involves AI systems and potential risks.

Revolution in local public transport: driverless cars coming soon to the Rhein-Main region

2023-02-22
hessenschau.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) in development and use, but no harm or malfunction is reported. The article focuses on the planned introduction and benefits of these AI-driven vehicles, with no mention of accidents, rights violations, or other harms. There is also no explicit or implicit indication that these vehicles could plausibly lead to harm. Hence, it does not meet the criteria for AI Incident or AI Hazard. It is an informative update on AI deployment, fitting the definition of Complementary Information.

Autonomous driving: "revolution" in the district of Offenbach and Darmstadt

2023-02-22
op-online.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) being developed and tested for public transport use. No harm or incident has occurred yet, but the autonomous operation in real traffic implies plausible future risks of harm (e.g., accidents). The article does not report any actual harm or malfunction, so it is not an AI Incident. It is not merely complementary information because the focus is on the upcoming deployment and testing, which could plausibly lead to harm. Hence, it fits the definition of an AI Hazard.

Pilot project with autonomous shuttles in the Rhein-Main region

2023-02-22
op-online.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving AI in shuttles) in development and use phases. However, no harm or malfunction has occurred or is reported. The article focuses on the planned pilot project and its expected benefits, not on any incident or risk. Hence, it does not meet the criteria for AI Incident or AI Hazard. It fits the definition of Complementary Information as it provides supporting context about AI deployment and societal responses (pilot testing, funding, integration into public transport).