Remote-Controlled AI Shuttle Bus Pilot Raises Safety Concerns in Düsseldorf

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Rheinmetall and its subsidiary Mira, in partnership with Rheinbahn, are piloting AI-powered teleoperated shuttle buses in Düsseldorf. While a safety driver is currently onboard, future plans to remove them raise concerns about potential risks if the AI system malfunctions, highlighting plausible hazards in public transport.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (teleoperation for remote vehicle control) actively used in a public setting. While a safety driver is present to intervene, the AI system's operation could plausibly lead to harm if it malfunctions or fails, such as causing accidents on public roads. Since no actual harm or incident is reported, but the potential for harm exists, this fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the pilot testing and future potential risks rather than reporting any realized harm or incident.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (injury)

Severity
AI hazard

AI system task:
Recognition/object detection
Reasoning with knowledge structures/planning


Articles about this incident or hazard

Nordrhein-Westfalen: Rheinmetall subsidiary steers shuttle buses remotely

2026-03-26
N-tv
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (teleoperation for remote vehicle control) actively used in a public setting. While a safety driver is present to intervene, the AI system's operation could plausibly lead to harm if it malfunctions or fails, such as causing accidents on public roads. Since no actual harm or incident is reported, but the potential for harm exists, this fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the pilot testing and future potential risks rather than reporting any realized harm or incident.
Rheinmetall subsidiary steers shuttle buses remotely - WELT

2026-03-26
DIE WELT
Why's our monitor labelling this an incident or hazard?
The event involves an AI system or advanced teleoperation system controlling vehicles remotely, which fits the definition of an AI system. The project is a pilot and no harm or incident is reported, but the deployment of remotely controlled vehicles on public roads carries plausible risks of harm (e.g., accidents, injury). Hence, it is an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the deployment and operation of the AI system with potential risk, not on responses or updates to past incidents. It is not unrelated because the AI system is central to the event.
Rheinmetall subsidiary steers shuttle buses remotely

2026-03-26
stern.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (remotely controlled autonomous shuttles) operating in a public environment, which could plausibly lead to harm such as accidents or safety issues. However, since no harm or incident has occurred yet, and the article only describes the pilot project and its intended operation, this qualifies as an AI Hazard. It highlights a credible risk of future harm due to the deployment of AI-controlled vehicles in public spaces, but no direct or indirect harm has been reported so far.
Düsseldorf: Rheinbahn tests remote-controlled minibuses - the future of mobility?

2026-03-26
Kölner Stadt-Anzeiger
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (teleoperation technology for remotely controlling autonomous shuttles), but it does not report any harm resulting from the system's use or malfunction. The presence of a safety person and the controlled, pilot nature of the test indicate risk mitigation. Because the article focuses on the deployment and evaluation of the technology rather than on any actual or plausible harm, it does not meet the criteria for an AI Incident or AI Hazard. It is more than general AI news or a product launch, since it describes a real-world test with potential implications; with no harm or plausible harm described, it is best classified as Complementary Information, providing context and updates on the AI system's deployment and the societal response.
Rheinmetall subsidiary steers shuttle buses remotely

2026-03-26
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (teleoperation technology for remotely controlling shuttle buses) in active use during testing. The presence of a safety driver currently mitigates direct harm, and no incident or harm has been reported. The article highlights the potential for future operation without safety drivers, which could plausibly lead to harm if the AI system fails. Thus, the event fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident in the future but has not yet caused harm.
Rheinmetall subsidiary steers shuttle buses remotely

2026-03-26
Cash
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (remote control and autonomous vehicle operation) but does not describe any realized harm or direct risk of harm. Since no incident or plausible hazard is reported, and the article mainly announces a pilot project, it fits best as Complementary Information, providing context on AI deployment and innovation without reporting harm or risk.
Rheinmetall subsidiary steers shuttle buses remotely | NRW

2026-03-26
Start
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (teleoperation technology for remotely controlling vehicles) in active use. Although no harm or incident has been reported, the article discusses the potential for the technology to be used without onboard safety drivers in the future, which could plausibly lead to incidents causing injury or disruption. Since no actual harm has occurred yet, but there is a credible risk of future harm, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the pilot testing and potential implications rather than reporting any realized harm or incident.
Rheinmetall tests remote-controlled shuttle buses in public transport

2026-03-26
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (teleoperation technology for remotely controlling shuttle buses) in a real-world public transportation setting. Although no harm or incident has occurred, the article highlights the potential for future harm if the system is deployed without safety drivers, as failures or malfunctions could lead to accidents or injuries. Therefore, this qualifies as an AI Hazard because the development and use of the AI system could plausibly lead to harm in the future, but no direct or indirect harm has yet materialized.
Düsseldorf tests remote control of buses

2026-03-27
zfk.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for teleoperation and fleet management, but it is a pilot test with no reported harm or malfunction. The article focuses on the potential benefits of the technology and on evaluating it under real conditions. Since no harm has occurred and no plausible immediate harm is indicated, the event does not qualify as an AI Incident or AI Hazard. It is not unrelated, because AI systems are central to it, but its main content concerns testing and development, making it Complementary Information that provides context on AI deployment and on responses to operational challenges in public transport.