Waymo Robotaxi Impedes Emergency Response and Is Shot at During Austin Shootings


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Austin, Texas, a Waymo self-driving taxi blocked emergency vehicles during a fatal mass shooting, briefly delaying ambulance access. In a separate incident, another Waymo robotaxi was shot at while carrying a passenger, causing vehicle damage but no injuries. Both incidents highlight safety and reliability concerns for autonomous vehicles in critical situations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves a Waymo robotaxi, an AI system for autonomous driving. The AI system's malfunction (stalling and confusion in moving out of the way) directly caused a delay in emergency responders reaching victims of a terror attack, thus disrupting critical emergency services. Although the delay was brief and did not ultimately affect patient outcomes, the AI system's failure to act appropriately in this high-stakes context meets the criteria for an AI Incident due to disruption of critical infrastructure management and operation. The presence of harm (disruption) and direct causation by the AI system's malfunction justifies classification as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task:
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard


Driverless cars block emergency responders from Austin terror attack

2026-03-02
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a Waymo robotaxi, an AI system for autonomous driving. The AI system's malfunction (stalling and confusion in moving out of the way) directly caused a delay in emergency responders reaching victims of a terror attack, thus disrupting critical emergency services. Although the delay was brief and did not ultimately affect patient outcomes, the AI system's failure to act appropriately in this high-stakes context meets the criteria for an AI Incident due to disruption of critical infrastructure management and operation. The presence of harm (disruption) and direct causation by the AI system's malfunction justifies classification as an AI Incident rather than a hazard or complementary information.

Video shows self-driving Waymo car blocking emergency vehicles...

2026-03-02
New York Post
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system operating autonomously without a driver. Its malfunction or inability to move out of the way blocked emergency vehicles, directly disrupting emergency response operations, which is critical infrastructure. Although the delay was brief and emergency personnel arrived quickly, the AI system's role in obstructing emergency vehicles during a deadly shooting incident constitutes a direct link to harm (disruption of critical infrastructure). Therefore, this qualifies as an AI Incident.

Waymo execs testify at San Francisco City Hall about their vehicles' actions during power outage

2026-03-03
CBS News
Why's our monitor labelling this an incident or hazard?
The Waymo autonomous vehicles are AI systems that rely on communication and human intervention to operate safely. During the power outage, the loss of 5G communication caused the AI systems to fail to operate properly, resulting in vehicles stopping in dangerous locations and blocking emergency access. This disruption to critical infrastructure management and potential safety hazards meet the criteria for an AI Incident, as the AI system's malfunction directly and indirectly led to harm and operational disruption. The event is not merely a potential hazard or complementary information but a realized incident involving AI harm.

Waymo Car Blocks Ambulance Responding to Austin Mass Shooting, on Video

2026-03-02
TMZ
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is a self-driving taxi, clearly an AI system. Its stopping in the middle of the street blocked an ambulance and emergency personnel, directly causing a delay in emergency response to a mass shooting, which is harm to health and disruption of critical infrastructure (emergency services). The incident is documented with video evidence and media reports, confirming the AI system's role in the harm. Although paramedics arrived quickly, the blockage caused a measurable disruption. Hence, this event meets the criteria for an AI Incident.

Self-Driving Waymo EV Blocked First Responders at Austin Mass Shooting Scene

2026-03-02
Breitbart
Why's our monitor labelling this an incident or hazard?
The autonomous Waymo vehicle, an AI system, obstructed emergency vehicles responding to a mass shooting, causing a delay. This is a direct consequence of the AI system's malfunction or failure to navigate properly in a critical urban environment. The delay in emergency response plausibly exacerbated harm to victims of the shooting, fulfilling the criteria for an AI Incident as the AI system's malfunction directly led to harm (delay in critical emergency services).

Waymo blocks ambulance responding to Austin mass shooting

2026-03-02
Axios
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous vehicle) whose use directly led to a harmful outcome: blocking an ambulance responding to an emergency. This obstruction could delay critical medical response, posing a risk of injury or harm to people. The incident is a direct consequence of the AI system's operation in a real-world emergency context, fulfilling the criteria for an AI Incident. Although the ultimate outcome for the ambulance is unclear, the blocking itself constitutes realized harm or at least a significant risk thereof, given the emergency context. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

WATCH: Driverless Waymo Taxi Blocks Emergency Response To Deadly Austin Shooting

2026-03-02
The Daily Wire
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system performing autonomous driving tasks. Its behavior—blocking the road and emergency vehicle—directly caused a delay in emergency response, which is a disruption of critical infrastructure management. The incident is a realized harm, not just a potential risk, as the ambulance was physically blocked and had to reroute. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system causing harm through disruption of emergency services.

Waymo Robotaxi Blocks First Responders in Austin Mass Shooting, Raising Fresh Questions About Safety

2026-03-02
autoevolution
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously on public roads. Its malfunction or inability to promptly yield to emergency responders during a mass shooting incident directly led to a blockage that delayed emergency vehicles, which can be considered harm to people (injury or risk thereof). The incident is not merely a potential hazard but a realized event where the AI system's use caused disruption and risk. Hence, it meets the criteria for an AI Incident rather than an AI Hazard or Complementary Information.

Austin 6th Street shooting: Waymo caught on video blocking responding ambulance

2026-03-02
FOX 4 News Dallas-Fort Worth
Why's our monitor labelling this an incident or hazard?
The Waymo self-driving car is an AI system involved in the event. Its actions temporarily blocked an ambulance, which is critical infrastructure for emergency medical response. Although no injury or harm resulted from this blockage, the AI system's malfunction or decision-making could plausibly lead to harm in similar future scenarios. The event does not describe actual harm caused by the AI system but indicates a credible risk of disruption to emergency services. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Waymo goes viral after blocking EMS during deadly Austin shooting

2026-03-02
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article explicitly describes a self-driving car (Waymo) that, during an emergency response to a deadly shooting, blocked an EMS ambulance by getting stuck and failing to move promptly. This is a clear example of an AI system's malfunction or failure to act appropriately in a critical situation, leading to disruption of emergency services and potential harm to people needing urgent care. The AI system's involvement is direct and causally linked to the harm (delay in EMS response). Hence, it meets the criteria for an AI Incident.

WATCH: Waymo robotaxi blocks ambulance during Austin mass shooting

2026-03-02
Austin American-Statesman
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. Its action of blocking an ambulance during a critical emergency response is a direct consequence of its autonomous navigation decisions. This caused a delay in emergency services, which is a disruption of critical infrastructure. The incident is not hypothetical or potential but has occurred, with direct harm resulting from the AI system's behavior. Therefore, it qualifies as an AI Incident.

Waymo's performance during SF power outages scrutinized

2026-03-03
KRON4
Why's our monitor labelling this an incident or hazard?
The autonomous vehicles are AI systems whose malfunction during the power outage caused them to stop in intersections, blocking traffic and delaying emergency services. This is a direct harm to the management and operation of critical infrastructure (traffic and emergency response). The event involves the use and malfunction of AI systems leading to realized harm, thus it qualifies as an AI Incident rather than a hazard or complementary information.

"Next level dystopian": Waymo robotaxi blocks first responders reacting to Austin mass shooting

2026-03-02
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously to pick up passengers. Its malfunction or failure to appropriately yield to emergency vehicles directly caused a delay in ambulance response to a mass shooting, which is harm to health (a). The incident is documented with video evidence and official confirmation, showing the AI system's involvement in causing the obstruction. The harm is realized, not just potential, as emergency response was impeded. Hence, this is an AI Incident rather than a hazard or complementary information.

Supervisors question Waymo over problems during widespread San Francisco outage

2026-03-03
NBC Bay Area
Why's our monitor labelling this an incident or hazard?
Waymo's autonomous vehicles are AI systems that make real-time navigation decisions. Their freezing during the power outage caused traffic blockages, which is a disruption of community infrastructure and public order, constituting harm. The event describes a realized harm caused by the AI system's malfunction, making it an AI Incident.

What would it take for Arlington to get Waymo robotaxis?

2026-03-02
ARLnow.com - Arlington, Va. Local News
Why's our monitor labelling this an incident or hazard?
The article describes a scenario where AI systems (Waymo's autonomous vehicles) could be deployed in a new region, but this deployment has not yet happened. There is no mention of any harm or incident caused by the AI system. The main content is about the potential for future AI system use and the legal framework needed to enable it. Therefore, this qualifies as an AI Hazard because the development and potential use of AI systems could plausibly lead to incidents or harms in the future, but no harm has yet occurred.

Waymo fails another emergency test during Austin mass shooting

2026-03-02
MyrtleBeachOnline
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Waymo's autonomous vehicle technology—and details its malfunction during emergency scenarios. These malfunctions directly caused harm by delaying emergency vehicles and potentially endangering lives, fulfilling the criteria for injury or harm to people (harm category a). The AI system's failure to act appropriately in these critical situations is a direct cause of the harm described. Therefore, this qualifies as an AI Incident.

Waymo Safety Concerns During Emergencies Being Discussed In SF Hearing

2026-03-02
SFist - San Francisco News, Restaurants, Events, & Sports
Why's our monitor labelling this an incident or hazard?
The autonomous vehicles are AI systems whose malfunction during the blackout directly caused traffic congestion and blocked emergency vehicles, posing a risk to public safety and emergency response. This fits the definition of an AI Incident because the AI system's malfunction led to harm to communities and disruption of critical infrastructure management (emergency vehicle access). The hearing and public concerns further confirm the significance of the harm caused. Therefore, this event is classified as an AI Incident.

In Charlotte, S.C., a Bumpy Debut for Waymo Robotaxis

2026-03-02
Government Technology
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous vehicle technology) in active use, and a collision occurred involving the AI system's vehicle. Although the AI system was under manual control and was not at fault, the incident is a real-world event involving AI system use and a collision, which is a form of harm (potential injury or property damage). The AI system's involvement in the event and the public safety implications meet the criteria for an AI Incident. The incident is not merely a potential risk (hazard) or a complementary information update, but a concrete event involving AI system use and harm (even if minor and caused by a third party).

Waymo Vehicle Blocks Traffic As First Responders Rush to Texas Bar: 'Ram It Out of the Way'

2026-03-02
The Nerd Stash
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system (self-driving car) that malfunctioned or failed to respond properly, causing it to block emergency vehicles. This directly disrupted the management and operation of critical infrastructure (emergency response), which fits the definition of harm (b). The incident is clearly described as having occurred, with the AI system's role pivotal in causing the disruption. Hence, it qualifies as an AI Incident rather than a hazard or complementary information.

Waymo Robotaxi Shot at in Austin: How a Late-Night Shooting Is Testing the Limits of Autonomous Vehicle Safety and Public Trust

2026-03-02
WebProNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous vehicle) in active use, carrying a passenger when it was attacked by gunfire. The shooting caused damage to the vehicle and endangered the passenger, fulfilling the criteria of harm to a person and property. The AI system's role is pivotal as the autonomous nature of the vehicle is central to the incident and the challenges it presents (no human driver as deterrent). The incident is not merely a potential risk but a realized harm, thus classifying it as an AI Incident rather than a hazard or complementary information.

Robotaxi Blocks EMS

2026-03-02
710 KURV - The Valley's News/Talk Station
Why's our monitor labelling this an incident or hazard?
The self-driving taxi is an AI system whose malfunction or failure to appropriately respond to emergency vehicle presence caused a delay in EMS operations. Even though the delay was brief and did not significantly impact patient care, the AI system's involvement directly led to disruption of critical infrastructure management and operation, fitting the definition of an AI Incident.

A Waymo Robotaxi Blocked An Ambulance During An Active Shooter Incident In Austin

2026-03-02
The Autopian
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. Its malfunction—confusion leading to blocking an ambulance during an emergency—demonstrates a failure in AI behavior that could plausibly lead to harm, such as delaying emergency response and causing injury or death. Although no harm occurred here, the incident highlights a credible risk of future harm if such AI behavior is not corrected. Therefore, it fits the definition of an AI Hazard, as the AI system's malfunction could plausibly lead to an AI Incident in the future. It is not an AI Incident because no actual harm or injury resulted from this event. It is not Complementary Information or Unrelated because the event directly involves an AI system and its malfunction with potential safety implications.

Footage Reveals Self-Driving Waymo Car Impeding Emergency Response During Fatal Austin Shooting

2026-03-02
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the self-driving Waymo taxi) whose malfunction (being stuck and blocking emergency vehicles) indirectly caused harm by impeding emergency response during a fatal shooting incident. Although emergency services arrived quickly, the obstruction caused by the AI system's failure to clear the path represents a disruption of critical infrastructure management and operation. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Waymo Self-Driving Car Blocks Ambulance At Fatal Austin Shooting - Officer Jumps In To Move It

2026-03-02
Dallas Express
Why's our monitor labelling this an incident or hazard?
The Waymo self-driving car is an AI system operating autonomously in a complex urban environment. Its decision to execute a U-turn, and its delay in yielding to an ambulance, caused a temporary blockage of emergency response vehicles. This constitutes an AI system's use leading indirectly to a disruption of critical infrastructure (emergency response). However, since no injury or harm resulted from the blockage and authorities confirmed no impact on patient outcomes, the event does not meet the threshold of an AI Incident causing realized harm. Instead, it represents a situation where the AI system's use created a plausible risk of harm or disruption that was promptly mitigated. Therefore, this event is best classified as an AI Hazard: a credible risk of harm arising from AI system behavior in a critical context, without actual harm occurring.