Waymo Autonomous Vehicle Disrupts Kamala Harris Motorcade

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Waymo autonomous vehicle stalled during a U-turn, blocking Vice President Kamala Harris' motorcade in San Francisco. Police intervened to move the vehicle, highlighting ongoing issues with Waymo cars causing traffic disruptions. This incident raises concerns about the reliability of AI systems in managing urban traffic effectively.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves a deployed AI system (a Waymo driverless car) whose use directly led to harm (vandalism, emotional trauma, risk to passengers). Although the damage was inflicted by humans, it targeted the AI-enabled vehicle and threatened passenger safety, qualifying as an AI Incident under harm to persons and property.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability

Industries
Mobility and autonomous vehicles; Government, security, and defence

Affected stakeholders
Government; General public; Business

Harm types
Public interest; Reputational; Economic/Property

Severity
AI incident

Business function
Monitoring and quality control; Maintenance

AI system task
Recognition/object detection; Goal-driven organisation; Reasoning with knowledge structures/planning

In other databases

Articles about this incident or hazard

Terrifying footage shows gang attacking driverless car with passengers

2024-10-02
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves a deployed AI system (a Waymo driverless car) whose use directly led to harm (vandalism, emotional trauma, risk to passengers). Although the damage was inflicted by humans, it targeted the AI-enabled vehicle and threatened passenger safety, qualifying as an AI Incident under harm to persons and property.
A Stalled Waymo Blocked Kamala Harris's Motorcade in San Francisco

2024-10-01
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event stems from a self-driving car’s AI system malfunctioning in operation, which directly disrupted a government motorcade. This real-world disruption of vehicular traffic due to an AI system’s failure constitutes an AI Incident.
Woman trapped inside a driverless Waymo car as two men 'harassed her'

2024-10-02
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
This is a real incident involving an AI system (Waymo’s driverless car) that directly led to harm (psychological distress and potential physical danger) because the vehicle could not navigate away from threatening individuals. The event reflects a failure or limitation of the AI system in protecting a passenger, satisfying the criteria for an AI Incident.
A Waymo robotaxi stalled in front of VP Harris' motorcade | TechCrunch

2024-09-30
TechCrunch
Why's our monitor labelling this an incident or hazard?
This event involves an AI system (Waymo’s autonomous-driving software) whose malfunction directly disrupted a critical government operation (the VP’s motorcade). The harm—disruption of a critical infrastructure operation—has already occurred, making this an AI Incident rather than a potential hazard or complementary info.
Waymo self-driving car interrupts Vice President Harris' motorcade in San Francisco

2024-09-30
San Jose Mercury News
Why's our monitor labelling this an incident or hazard?
The article describes a real-world malfunction of an AI system (Waymo’s autonomous vehicle) that directly disrupted a high-profile motorcade and posed safety and security risks. It also references past incidents where Waymo and other self-driving cars caused collisions or injuries. This constitutes an AI Incident under the category of disruption of critical operations and potential harm.
Waymo taxi stalls in front of Kamala motorcade

2024-10-01
Washington Times
Why's our monitor labelling this an incident or hazard?
The incident stems from the malfunction of an AI system in operation (the Waymo autonomous vehicle) and directly disrupted a critical transportation and security operation (the VP’s motorcade). This is a realized harm (traffic and security disruption) caused by an AI system’s failure.
Waymo Autonomous Vehicle Stops Kamala Harris Motorcade -- Authorities Investigate Multiple Similar Events

2024-09-30
CCN - Capital & Celeb News
Why's our monitor labelling this an incident or hazard?
The article describes actual malfunctions of an AI system (the Waymo autonomous vehicle) that directly caused traffic disruption—an incident of harm to the operation of critical infrastructure (road traffic). These are realized harms due to the AI’s failure to act under certain conditions, making this an AI Incident.
Vandals Attack Driverless Waymo Car With Passengers Inside

2024-10-02
Inside Edition
Why's our monitor labelling this an incident or hazard?
The event describes an attack on a driverless taxi, which is an AI system, while the vehicle was in use transporting passengers. The passengers experienced panic and distress, which constitutes harm to persons. Although the vandalism was inflicted by humans, the harm is indirectly linked to the AI system's use, so this is an AI Incident.
Waymo Is Opening Up Driverless Rides in Austin. Here's Everywhere You Can Hail the Robotaxi

2024-10-02
CNET
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically Waymo's autonomous driving AI. However, it does not describe any new incident in which the AI system's use or malfunction directly or indirectly caused harm; the past collisions mentioned are historical context rather than the focus of the article. Because the content primarily covers the expansion and operational details of the AI system without reporting new harm or credible imminent harm, it is best classified as Complementary Information.
Men Stall Waymo Driverless Car to Catcall Female Passenger Inside

2024-10-01
PC Magazine
Why's our monitor labelling this an incident or hazard?
The Waymo driverless car is an AI system whose autonomous driving behavior (stopping when pedestrians are detected) directly led to the passenger being stuck and subjected to harassment by catcalling men. The AI system's inability to respond to the pedestrians' refusal to move or to navigate around them resulted in harm to the passenger's safety and well-being. This harm is realized and directly linked to the AI system's use, meeting the criteria for an AI Incident under harm to a person.
New law will allow cops to cite driverless vehicles for traffic violations

2024-09-30
KRON4
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) whose use has directly led to harm, including injury to a pedestrian and disruption of emergency response. Because the article describes actual incidents of realized harm caused by AVs, it qualifies as an AI Incident; the new law and enforcement actions are regulatory responses to these harms and do not negate that the incidents took place.
Of Course a Waymo Got In the Way of Kamala Harris's Motorcade on Nob Hill Friday Night

2024-09-30
SFist - San Francisco News, Restaurants, Events, & Sports
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's self-driving cars) malfunctioning during operation, disrupting a critical movement (a vice-presidential motorcade); police had to intervene to resolve the situation, indicating a failure of the AI system to operate safely and autonomously. However, there is no indication of realized harm such as injury, property damage, or rights violations, so the event does not meet the threshold for an AI Incident. It instead represents a credible risk or near-miss scenario, fitting the definition of an AI Hazard.
Woman In Driverless Waymo Harassed By Bros Wanting Her Phone Number

2024-10-02
SFist - San Francisco News, Restaurants, Events, & Sports
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous vehicle) whose operational behavior (freezing and standing still when blocked) directly contributed to the rider being harassed and effectively held hostage by pedestrians. The harm is realized as psychological distress and restriction of freedom, which falls under harm to a person or community. The AI system's malfunction or operational limitation in handling such social interference is a contributing factor. Hence, this is an AI Incident rather than a hazard or complementary information.
S.F. woman's viral video shows her trapped in a Waymo by men asking for her number

2024-10-02
The Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo autonomous vehicle) whose use directly led to a harm scenario: the passenger was trapped and harassed because the vehicle could not move or respond to the threatening situation. The harm is to the person's safety and well-being, fitting the definition of harm to a person. The AI system's inability to act in this context is a malfunction or limitation contributing to the harm. Hence, this is an AI Incident rather than a hazard or complementary information.
Where did a Waymo robotaxi get stuck in front of VP Kamala Harris' motorcade?

2024-09-30
Government Technology
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an autonomous Waymo vehicle, which malfunctioned by getting stuck and blocking a motorcade. This malfunction directly disrupted the operation of a critical infrastructure element (the motorcade route), causing a temporary obstruction. The police officer had to manually drive the vehicle out of the way, indicating a failure of the AI system to perform as intended. Therefore, this qualifies as an AI Incident due to the direct disruption caused by the AI system's malfunction.
Vice President Gets Stuck Behind Stalled Driverless Robotaxi at San Francisco

2024-09-30
The New York Sun
Why's our monitor labelling this an incident or hazard?
The driverless robotaxi is an AI system performing autonomous navigation. Its malfunction (getting stuck and blocking the motorcade) directly caused disruption to critical infrastructure (the vice presidential motorcade). The police intervention confirms the AI system failed to perform as intended, leading to harm. The article also references other safety incidents involving similar AI systems, reinforcing the classification. The new law allowing citations for driverless cars further supports recognition of these malfunctions as incidents with real-world harm. Hence, this is an AI Incident.
A Waymo brought Kamala Harris' motorcade to a standstill in SF

2024-09-28
The San Francisco Standard
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system as it is a driverless car using autonomous AI technology. The malfunction caused a traffic jam, disrupting normal traffic flow, which qualifies as disruption of critical infrastructure operation. Although no physical injury is reported, the disruption itself is a recognized harm under the framework. Therefore, this event is an AI Incident due to the direct malfunction of an AI system causing harm.
Woman trapped in Waymo by catcallers raises safety concerns

2024-10-02
Newsweek
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system operating autonomously. The incident involved the AI system's use (the vehicle stopping and remaining stalled), which indirectly led to harm in the form of psychological distress and safety risk to the passenger. The AI system's inability to respond to the threatening behaviour (no manual override for the passenger, no intervention to avoid the threat) contributed to that harm. Because the harm has occurred and is documented, this is an AI Incident rather than a hazard or complementary information.