Safety incidents raise concerns over Cruise and Waymo robo-taxis

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Autonomous vehicles from Cruise and Waymo have prompted safety concerns: a Cruise robotaxi injured a pedestrian in San Francisco, triggering a suspension of the company's service, and a Waymo vehicle collided with a cyclist in Phoenix. The NHTSA is also investigating 31 reports of unexpected driving behavior in Waymo's expanded Arizona service.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (Waymo's autonomous vehicle technology) in real-world use. It reports a collision between a Waymo vehicle and a cyclist causing minor injuries, which is a direct harm to a person linked to the AI system's operation. This meets the criteria for an AI Incident as the AI system's use has directly led to injury. Other parts of the article discuss user experiences and safety investigations but do not negate the occurrence of harm. Therefore, the event is classified as an AI Incident.[AI generated]
AI principles
Safety, Robustness & digital security, Accountability, Transparency & explainability

Industries
Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (injury)

Severity
AI incident

Business function:
Other

AI system task:
Recognition/object detection, Goal-driven organisation

Articles about this incident or hazard

All hail Phoenix: America's king of the robo-taxi

2024-06-05
Mint
Waymo expands area for its self-driving cars to north Scottsdale, Desert Ridge

2024-06-05
AZ Central
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system: Waymo's autonomous driving technology. The federal investigation into 31 incidents of unexpected driving behavior, despite no injuries reported, highlights potential safety issues that could lead to harm in the future. No actual harm has occurred, but these unexpected behaviors pose a credible risk of harm, so the event fits the definition of an AI Hazard. The expansion and rider-experience improvements do not themselves constitute harm or hazard; the investigation is the central point indicating plausible future harm.
Waymo expands area for its autonomous vehicle service to north Scottsdale, Desert Ridge in Arizona

2024-06-07
autotechinsight.ihsmarkit.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving technology) operating in an expanded service area. The 31 incidents of unexpected driving behavior under investigation by the National Highway Traffic Safety Administration point to potential safety risks, but the article reports no realized harm such as injury or property damage. A plausible but unrealized risk of harm from unexpected AI system behavior fits the definition of an AI Hazard rather than an AI Incident. The expansion and feature updates alone are routine and unrelated to harm; the investigation and reported incidents elevate the classification to AI Hazard.
San Francisco's Hot Tourist Attraction: Driverless Cars

2024-06-04
Oman Observer
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in the form of autonomous vehicles (Waymo's driverless cars) operating commercially and interacting with the public. It reports a pedestrian injury caused by a Cruise vehicle, which led to a suspension of operations, and mentions crashes and complaints related to these AI systems. These facts show that the use of the AI systems has directly or indirectly caused harm to people and disruption, fitting the definition of an AI Incident. Although the article also highlights positive aspects and public interest, the presence of actual harm and an investigation confirms the classification as an AI Incident rather than a hazard or complementary information.
Self-driving cars: A tech miracle or a public safety threat?

2024-06-07
The Fulcrum
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (self-driving cars) whose use has directly led to harm, including pedestrian injuries and deaths, collisions, and public safety disruptions. These harms fall under injury or harm to persons and disruption of critical infrastructure (public roads and emergency services). Therefore, the event qualifies as an AI Incident because the AI systems' use has directly caused significant harm.