Waymo Robotaxis Cause Community Disruption and Safety Incidents in the US


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Waymo's autonomous vehicles have caused community disruption in San Francisco due to noise and erratic driving, prompting resident protests. Separately, in Atlanta, a Waymo robotaxi illegally passed a stopped school bus unloading children, triggering a federal safety investigation and raising concerns about the AI system's reliability and public safety.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (Waymo's autonomous driving technology) whose use directly led to a traffic safety violation involving children, a vulnerable group. The robotaxi's illegal maneuver while operating autonomously without a human driver constitutes a malfunction or failure in the AI system's decision-making. This has caused a direct safety risk, fulfilling the criteria for harm to persons. The regulatory investigation and public concern further underscore the seriousness of the incident. Since harm has already occurred and the AI system's role is pivotal, this is classified as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Safety, Robustness & digital security, Accountability, Transparency & explainability

Industries
Mobility and autonomous vehicles

Affected stakeholders
General public, Children, Business

Harm types
Physical (injury), Public interest

Severity
AI incident

AI system task
Recognition/object detection, Reasoning with knowledge structures/planning


Articles about this incident or hazard


What if autonomous cars saved lives?

2025-10-21
Auto Plus
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving AI) in use, and the data cited show a positive safety impact, so no harm has occurred. The article does not describe any incident or malfunction causing harm, nor does it warn of plausible future harm. Instead, it provides complementary information about the current state and benefits of AI in autonomous vehicles, as well as regulatory and societal implications. It is therefore Complementary Information rather than an AI Incident or AI Hazard.

Waymo robotaxis under investigation after a serious incident involving a school bus

2025-10-20
Génération-NT
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous driving technology) whose use directly led to a traffic safety violation involving children, a vulnerable group. The robotaxi's illegal maneuver while operating autonomously without a human driver constitutes a malfunction or failure in the AI system's decision-making. This has caused a direct safety risk, fulfilling the criteria for harm to persons. The regulatory investigation and public concern further underscore the seriousness of the incident. Since harm has already occurred and the AI system's role is pivotal, this is classified as an AI Incident rather than a hazard or complementary information.

US authorities investigate a Waymo robotaxi that drove around a stopped school bus

2025-10-21
Fredzone
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (Waymo's autonomous driving software) whose malfunction or limitation in perception and decision-making directly led to a traffic safety violation involving a school bus with children, which is a recognized harm to health and safety (harm category a). The federal investigation and software update confirm the AI system's role in causing or contributing to the incident. The incident is not merely a potential risk but a realized event with direct safety implications, thus classifying it as an AI Incident rather than a hazard or complementary information.

Waymo prepares to launch its robotaxis in London as early as 2026

2025-10-23
Leblogauto.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving AI in robotaxis) whose deployment is planned but not yet realized. No direct or indirect harm has occurred from Waymo's system in London as per the article. However, the nature of autonomous vehicles and their past incidents elsewhere imply a plausible risk of harm in the future. The article's main focus is on the upcoming deployment and testing, regulatory compliance, and safety considerations, which aligns with the definition of an AI Hazard (plausible future harm). It is not an AI Incident because no harm has yet materialized, nor is it Complementary Information since it is not an update on a past incident or governance response. It is not Unrelated because it clearly involves AI systems and their societal impact.

Waymo challenges Uber with its autonomous cars for businesses

2025-10-22
Déplacements Pros
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) in use, but there is no indication of any injury, rights violation, disruption, or other harm caused or imminent. The article discusses deployment plans, geographic expansion, and regulatory challenges, which are typical of AI ecosystem developments. No incident or hazard is described. Hence, it fits the category of Complementary Information, providing context and updates on AI system deployment and market competition without reporting harm or risk.

Waymo under federal investigation after an incident involving a school bus

2025-10-20
KultureGeek
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's fifth-generation autonomous driving system) whose malfunction (failure to detect a stopped school bus with flashing red lights) directly led to a dangerous traffic incident with potential harm to children. The NHTSA's investigation and the company's admission of a software flaw confirm the AI system's role in the incident. This meets the criteria for an AI Incident because the AI system's malfunction has directly led to a safety hazard involving potential injury or harm to a group of people (children).

Desperate to put an end to Waymo autonomous vehicles' pointless detours and the noise they make all night in a San Francisco neighborhood, residents tried an orange cone with a sign

2025-10-21
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Waymo's autonomous vehicles) whose operation has directly harmed the community by disturbing residents' sleep and quality of life through noise and unnecessary vehicle maneuvers. The residents' protest and the reported malfunctioning behavior (circling, noise, illegal maneuvers) show that the AI system's use and malfunction have had tangible negative impacts. These impacts fall under harm to communities and to health (sleep disruption). The regulatory gaps noted further underscore the incident's significance. This is therefore an AI Incident rather than a hazard or complementary information.