Autonomous Transporters to Be Tested in Braunschweig


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Several autonomous electric transporters (U-Shift) from the DLR's IMoGer project will be tested in Braunschweig's Schwarzer Berg district with €35 million in federal funding. The unpiloted vehicles, monitored for safety, are intended to support last-mile logistics around the clock and to gather data for similar urban and rural deployments.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (autonomous transporters) being introduced and tested, which fits the definition of AI systems. Since the vehicles are not yet deployed and no harm or malfunction has been reported, there is no AI Incident. However, the deployment of autonomous vehicles inherently carries plausible risks of harm (e.g., accidents, disruptions), making this an AI Hazard. The article does not focus on responses or updates to past incidents, so it is not Complementary Information. It is not unrelated because it clearly involves AI systems.[AI generated]
AI principles
Safety; Robustness & digital security; Privacy & data governance; Accountability; Transparency & explainability; Democracy & human autonomy

Industries
Mobility and autonomous vehicles; Logistics, wholesale, and retail; Government, security, and defence

Harm types
Physical (injury); Economic/Property; Human or fundamental rights; Reputational

Severity
AI hazard

Business function
Logistics; Monitoring and quality control

AI system task
Recognition/object detection; Event/anomaly detection; Goal-driven organisation; Organisation/recommenders


Articles about this incident or hazard


Millionen-Projekt: Testflotte fahrerloser Transporter kommt nach Braunschweig ("Multi-million-euro project: test fleet of driverless transporters coming to Braunschweig")

2025-02-17
T-online.de

Fahrerlose Transporter sollen in Braunschweig erprobt werden ("Driverless transporters to be tested in Braunschweig") - WELT

2025-02-17
DIE WELT
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous transporters) being tested in a real-world environment. While no harm or malfunction is reported, the deployment of such systems inherently carries risks that could plausibly lead to harm, such as accidents or operational failures. The article focuses on the planned testing and safety monitoring, indicating potential future risks rather than current incidents. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Fahrerlose Transporter sollen in Braunschweig erprobt werden ("Driverless transporters to be tested in Braunschweig")

2025-02-17
stern.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous transporters) being deployed for testing, which fits the definition of AI systems. Since the vehicles have not yet been tested under real conditions and no harm or malfunction is reported, there is no direct or indirect harm at this stage. However, the deployment of autonomous vehicles inherently carries plausible risks of harm (e.g., accidents, disruptions), making this an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI systems are central to the event.

Fahrerlose Transporter sollen in Braunschweig erprobt werden ("Driverless transporters to be tested in Braunschweig")

2025-02-17
Hamburger Abendblatt
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for autonomous driving (autonomous transporters) which are planned to be tested in a real environment. Although no harm has yet occurred, the deployment of such AI systems could plausibly lead to incidents involving injury, disruption, or other harms. The article focuses on the upcoming test and the potential benefits, with safety monitoring in place, but does not report any actual harm or incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Fahrerlose Transporter sollen in Braunschweig erprobt werden ("Driverless transporters to be tested in Braunschweig")

2025-02-17
Weser Kurier
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions autonomous transporters, which are AI systems due to their autonomous driving capabilities. The event is about the planned testing and deployment of these systems, with safety monitoring in place. No harm or incident is reported; the article discusses the project and its potential benefits. Since no harm has occurred but the use of AI systems in public transport could plausibly lead to incidents in the future, this qualifies as an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI systems are central to the event.