Waymo Robotaxi AI Leaves Passengers Trapped During Vehicle Attacks in San Francisco


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Waymo's autonomous vehicle AI left passengers trapped and vulnerable during attacks by anti-AI individuals in San Francisco. The AI's cautious programming prevented the vehicle from escaping, exposing passengers to harm. The lack of remote override or human control exacerbated the safety risk.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Waymo's autonomous driving system) whose operation directly contributed to harm: passengers were trapped and endangered while the vehicle was being attacked. The AI's cautious programming prevented the vehicle from driving away from the attackers, and the absence of a remote override or human control left the passengers unable to escape. The harm is to the safety and well-being of people inside the vehicle, so this qualifies as an AI Incident due to the direct involvement of an AI system causing or nearly causing injury or harm to people.[AI generated]
AI principles
Safety
Democracy & human autonomy

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers

Harm types
Physical (injury)
Psychological

Severity
AI incident

Business function
Other

AI system task
Recognition/object detection
Reasoning with knowledge structures/planning


Articles about this incident or hazard


Arizona Woman Tries Waymo For The First Time. Then It Almost Costs Her Life

2026-03-14
Motor1.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving system) whose use directly led to a dangerous traffic situation that endangered the health and safety of passengers and other drivers. The AI system's attempt to make an unprotected left turn across oncoming traffic on a complex road without traffic signals is a known challenge for autonomous vehicles and here resulted in a near-accident scenario. The passengers' fear and the forced braking of other vehicles demonstrate direct harm or risk of harm. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system causing or nearly causing injury or harm to people.

What Sundar Pichai's $692 Million Pay Package Says About Alphabet's Next Chapter

2026-03-18
NASDAQ Stock Market
Why's our monitor labelling this an incident or hazard?
The article centers on corporate financial and strategic information related to AI-driven business growth but does not report any realized or potential harm caused by AI systems. There is no mention of injury, rights violations, infrastructure disruption, or other harms directly or indirectly linked to AI. The focus is on future business prospects and executive incentives, which falls outside the scope of AI Incident or AI Hazard. Therefore, this is best classified as Complementary Information, providing context about AI ecosystem developments without describing an incident or hazard.

Former Uber CEO says Waymo 'obviously' ahead of Tesla in robotaxi race

2026-03-18
Electrek
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous driving AI) and discusses their development and deployment. However, it does not report a specific AI Incident (no direct or indirect harm event detailed) nor an AI Hazard (no new plausible future harm event described). Instead, it offers expert commentary, comparative analysis, and strategic insights about the AI systems and companies involved, which fits the definition of Complementary Information. The mention of Tesla's crash reports is background information without detailed incident analysis or new harm revelation. The article also discusses industry challenges and potential risks but does not present a new hazard event.

Waymo robotaxi blocks EMS

2026-03-17
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. Stopping inside a railroad crossing near active train tracks constitutes a failure in its operation that created a hazardous situation. Although no injury or damage occurred, the near-miss with a train put people and property at direct risk. Because this was a real occurrence in which an AI system's behavior created the danger, it qualifies as an AI Incident.

Waymo Driverless Taxi Narrowly Avoids Disaster At Railroad Crossing - BGR

2026-03-18
BGR
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Waymo driverless car) whose use led to a near-miss incident at a railroad crossing. While no actual harm occurred, the AI's behavior created a hazardous situation that could plausibly lead to an AI Incident if a similar scenario resulted in a collision or injury. Therefore, this qualifies as an AI Hazard because it demonstrates a credible risk of harm due to the AI system's operation, but no harm has yet materialized.

Rideshare drivers say Waymo is giving rides to unaccompanied kids, violating state permit

2026-03-18
ABC7 News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Waymo's autonomous vehicles) and concerns their use (transporting unaccompanied minors). The complaint alleges violation of permit rules, implying a failure to comply with legal frameworks. While no actual harm or incident is reported, the situation plausibly could lead to harm if unaccompanied minors are transported unsafely. Therefore, this qualifies as an AI Hazard because it describes a circumstance where AI system use could plausibly lead to harm or regulatory violations, but no harm has yet been reported.

Complaint filed over Waymo allegedly transporting minors alone

2026-03-18
NBC Bay Area
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear as Waymo's driverless taxis rely on AI for autonomous operation. The complaint concerns the use of these AI systems transporting unaccompanied minors, which could plausibly lead to harm (e.g., safety risks to minors). Since no actual harm or incident is reported, but there is a credible concern about regulatory violations and public safety risks, this qualifies as an AI Hazard rather than an AI Incident. The complaint and request for enforcement indicate a potential for harm but do not confirm that harm has occurred.

What Happens When the Waymo You're Riding In Gets Attacked By a Robot Hater? Not Much, and You're Sort of Trapped

2026-03-17
SFist - San Francisco News, Restaurants, Events, & Sports
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving software) whose use directly led to harm—passengers trapped and endangered during attacks on the vehicle. The AI system's cautious programming, while intended for safety, indirectly contributed to the harm by preventing the vehicle from moving away from attackers, leaving passengers vulnerable. The lack of remote override or human control exacerbated the situation. The harm is to the safety and well-being of passengers, fitting the definition of injury or harm to persons. Thus, this is an AI Incident rather than a hazard or complementary information.

Waymo's Call for National Robotaxi Standards Draws Rebuke From Tesla Engineer

2026-03-16
EV
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous vehicle AI) and their development and use, but it does not describe any direct or indirect harm caused by these AI systems. There is mention of regulatory probes and calls for safety standards, which relate to potential risks and governance responses, but no actual incident or harm is reported. The focus is on industry competition, regulatory calls, and public trust building, which fits the definition of Complementary Information as it provides context and updates on societal and governance responses to AI in autonomous vehicles.

Trapped in a Self-Driving Car During an Anti-Robot Attack

2026-03-17
The New York Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a self-driving car, which is an AI system operating autonomously. The AI system's behavior (stopping when a person is detected) directly led to the passengers being trapped and vulnerable during an attack, constituting harm to persons. The attacker exploited the AI system's safety feature, resulting in a threatening and unsafe situation. This direct link between the AI system's operation and the harm experienced by the passengers meets the definition of an AI Incident.

Trapped! Inside a Self-Driving Car During an Anti-Robot Attack

2026-03-18
GV Wire
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a self-driving car) whose autonomous behavior (stopping when a person is near) was exploited by an attacker, resulting in passengers being trapped and threatened. The AI system's design and operational policies directly contributed to the harm experienced by the passengers, fulfilling the criteria for an AI Incident. The harm includes psychological distress and a safety risk due to inability to escape, which falls under injury or harm to persons. The incident is not merely a potential risk but a realized harm, distinguishing it from an AI Hazard. It is not Complementary Information since the main focus is the incident itself, nor is it Unrelated as the AI system is central to the event.

Waymo comes to Charlotte -- the safety and security behind autonomous driving cars on our roads

2026-03-16
WFAE 90.7 - Charlotte's NPR News Source
Why's our monitor labelling this an incident or hazard?
The article centers on the deployment and societal considerations of autonomous vehicles powered by AI but does not describe any actual harm or malfunction caused by these AI systems. It discusses potential benefits and concerns but does not report any realized injury, rights violations, or other harms. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it serves as complementary information that enhances understanding of the AI ecosystem, public perception, and safety considerations related to autonomous vehicles.

Trapped! Inside a Self-Driving Car During an Anti-Robot Attack.

2026-03-17
DNYUZ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a self-driving car) whose operational behavior (stopping when a person is nearby) was exploited by an attacker, leading to passengers being trapped and threatened. The harm is indirect but significant, involving both a safety risk and psychological distress to the passengers during the attack. The AI system's design and use directly contributed to the event, fulfilling the criteria for an AI Incident. This is not merely a potential risk (hazard) or a complementary information update; the harm has occurred. Thus, the classification is AI Incident.

Bumpy path predicted for self-driving cars in Australia

2026-03-19
Perth Now
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous vehicles and their potential impacts. The harms discussed are potential and not realized, such as increased pollution and traffic due to misuse or poor policy. The research and regulatory considerations indicate a credible risk that autonomous vehicles could lead to harms in the future. Therefore, this qualifies as an AI Hazard because it plausibly could lead to AI Incidents if not managed properly. There is no indication of an actual incident or realized harm, nor is the article primarily about responses to past incidents, so it is not an AI Incident or Complementary Information.