Waymo Robotaxi Malfunctions Cause Traffic Disruptions and Emergency Response Interventions

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Waymo's autonomous robotaxis have experienced malfunctions in the U.S., including getting stuck during emergencies in California and blocking intersections in Nashville. These incidents disrupted traffic and required intervention from police and firefighters, highlighting the risks and limitations of current AI-driven vehicle systems in critical situations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions Waymo's autonomous vehicles, which are AI systems, causing real harm including a child being struck and unsafe driving behaviors. These are direct harms to health and safety, fitting the definition of an AI Incident. The discussion about regulation and operational practices supports the context but does not change the classification. Therefore, this event is classified as an AI Incident due to the realized harms caused by the AI system's use.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Mobility and autonomous vehicles

Affected stakeholders
General public
Workers

Harm types
Public interest

Severity
AI incident

Business function
Other

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard

Where B.C. stands on self-driving cars and what recent incidents reveal about safety, rules | CBC News

2026-03-25
CBC News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous vehicle technologies) and discusses a recent incident involving a Tesla driver using driver-assist features, but it does not report a confirmed AI-related harm or accident caused by the AI system itself. The Tesla driver appearing asleep is a human behavior issue rather than a malfunction or misuse of the AI system causing harm. The article also covers regulatory status, lobbying efforts, and potential future impacts, which are forward-looking or contextual. Therefore, the content fits best as Complementary Information, providing updates and context on AI system use, regulation, and safety concerns without describing a new AI Incident or AI Hazard.
As Waymo expands to San Diego, rideshare drivers say they're concerned about safety

2026-03-26
NBC 7 San Diego
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Waymo's autonomous vehicles, which are AI systems, causing real harm including a child being struck and unsafe driving behaviors. These are direct harms to health and safety, fitting the definition of an AI Incident. The discussion about regulation and operational practices supports the context but does not change the classification. Therefore, this event is classified as an AI Incident due to the realized harms caused by the AI system's use.
'No common sense': Ride in a robotaxi shows the promise - and limits - of driverless cars

2026-03-25
The Globe and Mail
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Waymo's autonomous driving system) in use and details incidents where the system's behavior caused or contributed to harm or safety risks, including a collision with a child and other driving errors. These constitute direct or indirect harm to persons, fulfilling the criteria for an AI Incident. The discussion of investigations and safety concerns further supports this classification. Although the article also discusses potential future risks and limitations, the presence of realized harm takes precedence, confirming the classification as an AI Incident rather than a hazard or complementary information.
Waymo relies on firefighters and police to bail out stuck robotaxis | TechCrunch

2026-03-25
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Waymo's autonomous vehicle technology—and details multiple real-world events where the AI system's inability to handle complex or emergency situations caused tangible harm. The harms include disruption of emergency responders' primary duties, traffic disruption, and potential safety risks to passengers and the public. The AI system's malfunction or limitations are a direct contributing factor, as the vehicles became stuck or behaved inadequately, necessitating human intervention. This meets the criteria for an AI Incident because harm has occurred and the AI system's role is pivotal in causing it.
Who's driving Waymo's self-driving cars? Sometimes, the police. | TechCrunch

2026-03-25
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Waymo's autonomous driving technology) and describes multiple real-world incidents where the AI system's limitations or malfunctions have led to harm or disruption. The robotaxis getting stuck and requiring police to drive them away during emergencies directly disrupts critical infrastructure management (emergency response). The incorrect remote assistance advice leading to a robotaxi passing a stopped school bus with children loading is a direct safety hazard. These harms are materialized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
VIDEO: Officer moves Waymo after it stalls in Broadway intersection

2026-03-24
WKRN News 2
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous vehicle) malfunctioning or stalling in traffic, which caused a temporary disruption. However, there was no injury, property damage, or violation of rights reported. The police intervention prevented further disruption. Since no actual harm occurred but there was a plausible risk of traffic disruption, this qualifies as an AI Hazard rather than an AI Incident. The article also discusses broader societal concerns and safety data but does not report a realized harm event.
Waymo: 13x Lower Rate of Serious Injury or Fatality

2026-03-25
CleanTechnica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving system) whose deployment has reportedly reduced harm, with fewer serious or fatal crashes than comparable human drivers. The article describes no malfunction or hazard but rather a positive safety impact. Because it reports realized effects of the AI system's use on the health of persons, it is filed under this AI Incident as evidence bearing on harm and harm reduction.
Where B.C. stands on self-driving cars and what recent incidents reveal about safety, rules

2026-03-25
Yahoo
Why's our monitor labelling this an incident or hazard?
The Tesla incident involves an AI system (driver assistance/autopilot technology) whose misuse (the driver appearing asleep at the wheel) raises safety concerns, but the article reports no injury, accident, or other harm resulting from it, only a police reminder to remain attentive. The discussion of Waymo's lobbying and the potential future deployment of Level 4 autonomous taxis describes plausible future risk, not realized harm. Because the article focuses on regulation, safety debates, and future possibilities rather than reporting a new AI Incident or AI Hazard, it is best classified as Complementary Information: supporting context and updates on AI system use, safety concerns, and governance.
Waymo Self-Driving Car Gets Stuck In The Middle Of Broadway In Downtown Nashville

2026-03-24
Whiskey Riff
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving AI) malfunctioning during its use, leading to a disruption of traffic management, which is a form of disruption to critical infrastructure (road traffic). Although no injury or damage occurred, the AI system's failure directly caused the vehicle to block an intersection, which is a disruption. Therefore, this qualifies as an AI Incident due to the realized disruption caused by the AI system's malfunction in a critical urban environment.
How Robotaxis Like Zoox, Waymo and Cruise Use Cameras, Radar and Perception Fusion to Power Safer Autonomy on the Road

2026-03-25
Tech Times
Why's our monitor labelling this an incident or hazard?
The article discusses AI systems in robotaxis and their role in improving road safety through advanced perception and decision-making. While it mentions that these systems reduce accidents and are being tested and deployed, it does not describe any actual harm, accident, or malfunction caused by the AI systems. Nor does it present a credible imminent risk or hazard scenario. The content is primarily informative and contextual, focusing on the technology's capabilities, deployment status, and future prospects. Therefore, it fits best as Complementary Information, providing background and ecosystem context rather than reporting an AI Incident or AI Hazard.
Waymo relies on firefighters and police to bail out stuck robotaxis - RocketNews

2026-03-25
RocketNews
Why's our monitor labelling this an incident or hazard?
The autonomous vehicle is an AI system as it performs complex real-time navigation and decision-making. The robotaxi's failure to navigate the traffic situation and inability to move despite remote assistance constitutes a malfunction of the AI system. This malfunction directly caused disruption to traffic flow and required police intervention, which is a disruption of critical infrastructure management. Therefore, this event meets the criteria for an AI Incident due to the direct link between the AI system's malfunction and the harm (disruption) caused.
Who's driving Waymo's self-driving cars? Sometimes, the police. - RocketNews

2026-03-25
RocketNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Waymo's autonomous driving system—operating in real-world conditions. The system's inability to navigate an emergency traffic situation caused operational disruption and required police intervention to move the vehicle safely. This is a direct consequence of the AI system's malfunction or failure to act, leading to harm in terms of traffic disruption and potential safety hazards. The event is not merely a potential risk but a realized incident involving AI malfunction and human intervention, fitting the definition of an AI Incident.
Waymo Relies On Firefighters And Police To Bail Out Stuck Robotaxis

2026-03-25
Breaking News, Latest News, US and Canada News, World News, Videos
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Waymo's autonomous robotaxis) whose malfunction or operational limitations have directly led to disruption of emergency response operations, a form of harm to critical infrastructure management. The robotaxis' failure to move during emergencies required first responders to intervene physically, diverting their attention and resources from their primary duties, which is a significant harm. The article details multiple such incidents, confirming that this is not an isolated case but a recurring problem. The AI system's role is pivotal as the autonomous vehicles' inability to navigate or be remotely moved in emergencies caused the disruption. Hence, this meets the criteria for an AI Incident rather than a hazard or complementary information.
Autonomous driving giant Waymo pushes B.C. to allow self-driving cars on provincial roads

2026-03-27
Times Colonist
Why's our monitor labelling this an incident or hazard?
The article centers on the development and potential deployment of AI-driven autonomous vehicles and the associated regulatory and societal debates. However, it does not describe any realized harm or incident caused by AI systems, nor does it report a near-miss or credible immediate risk event. Instead, it outlines plausible future risks and benefits, making it a discussion of potential hazards and policy responses. Since no direct or indirect harm has occurred yet, and the main focus is on the potential impact and regulatory lobbying, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Is Philly ready to shift gears and share the road with autonomous vehicles?

2026-03-26
PhillyVoice
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous vehicles) and discusses its use and potential impacts. However, the article does not describe any realized harm or incident directly or indirectly caused by the AI system in Philadelphia. The mention of past issues in San Francisco serves as context rather than a new incident. The focus is on exploring future implications and regulatory hearings, which aligns with providing complementary information about AI developments and governance responses rather than reporting an AI Incident or AI Hazard.