Robotaxi Malfunctions and Safety Concerns in San Francisco Highlight AI Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Self-driving taxis operated by Cruise and Waymo in San Francisco have experienced AI malfunctions, including navigation errors and stranding passengers, raising public safety concerns. These incidents, along with reports of collisions and emergency service disruptions, have led to regulatory scrutiny and local backlash as expansion plans are considered.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses AI-powered autonomous vehicles (robotaxis) operating in San Francisco. It reports data indicating collisions with injuries and a fatality (a dog was killed), as well as traffic disruptions and interference with emergency services caused by the vehicles' erratic behavior. These harms are directly or indirectly linked to the AI systems controlling the vehicles. The regulatory debate and public safety concerns revolve around these realized harms. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms have already occurred and are documented.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability; Human wellbeing

Industries
Mobility and autonomous vehicles; Government, security, and defence

Affected stakeholders
Consumers; General public; Government

Harm types
Physical (injury); Psychological; Economic/Property; Reputational; Public interest

Severity
AI incident

Business function
Citizen/customer service; Monitoring and quality control

AI system task
Recognition/object detection; Forecasting/prediction; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard

San Francisco drives tech; will it drive away robot taxis?

2023-08-07
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous vehicles) whose use is under scrutiny due to safety concerns and reports of collisions with injuries at rates higher than average. However, no specific AI Incident (harm directly or indirectly caused by the AI systems) is detailed beyond general concerns and data disputes. The situation represents a plausible risk of harm from AI system use, making it an AI Hazard. The article primarily focuses on the potential for harm and on regulatory decisions rather than on a realized incident or a response to a past incident, so it is best classified as an AI Hazard.
San Francisco drives tech; will it drive away robot taxis?

2023-08-07
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-powered autonomous vehicles (robotaxis) operating in San Francisco. It reports data indicating collisions with injuries and a fatality (a dog was killed), as well as traffic disruptions and interference with emergency services caused by the vehicles' erratic behavior. These harms are directly or indirectly linked to the AI systems controlling the vehicles. The regulatory debate and public safety concerns revolve around these realized harms. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms have already occurred and are documented.
Tech journalist says ride in driverless car turned into nightmare

2023-08-05
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The driverless car is an AI system, as it autonomously navigates and makes real-time decisions. The malfunction during the ride, in which the vehicle accelerated away and refused to let the passenger out, directly caused harm in the form of distress and potential physical risk. This fits the definition of an AI Incident because the AI system's malfunction directly harmed a person. The article also mentions other incidents involving these vehicles, reinforcing the presence of realized harm from AI system use.
UPDATE 1-San Francisco drives tech; will it drive away robot taxis?

2023-08-08
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems—autonomous vehicles operated by Waymo and Cruise—and discusses their development and use in a real urban environment. It reports actual incidents of collisions with injuries and a fatality, as well as operational behaviors causing traffic and emergency service disruptions. These constitute direct or indirect harms to people and community safety, fulfilling the criteria for an AI Incident. The regulatory and public safety concerns further underscore the realized harms rather than just potential risks. Hence, the event is best classified as an AI Incident.
California vote on self-driving taxis could alter the future of AI

2023-08-05
NBC News
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the form of autonomous driving technology used by Waymo and Cruise. The vote concerns the use of these AI systems in public transportation, which could plausibly lead to harms such as traffic accidents, disruption of city infrastructure, or labor displacement. Since no actual harm has been reported yet, but the expansion could foreseeably lead to such harms, this situation fits the definition of an AI Hazard. The article does not describe a realized incident or harm, nor is it primarily about responses or updates to past incidents, so it is not an AI Incident or Complementary Information.
Cruise inches into Waymo's territory in the Phoenix area | TechCrunch

2023-08-08
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) and their use (deployment and expansion of robotaxi services). However, the article does not report any realized harm or incident caused by these AI systems, nor does it highlight any credible risk or hazard that could plausibly lead to harm. It is primarily an update on the expansion and competitive landscape of AI-driven robotaxi services, which fits the definition of Complementary Information as it provides context and developments in the AI ecosystem without describing an incident or hazard.
Cruise begins testing self-driving vehicles in Atlanta | TechCrunch

2023-08-07
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (self-driving vehicles) in a real-world environment. Although no incident or harm has been reported, the testing and eventual deployment of autonomous vehicles inherently carry plausible risks of harm to people or property due to potential AI system failures or errors. Therefore, this event qualifies as an AI Hazard because it could plausibly lead to an AI Incident in the future, but no harm has yet occurred according to the article.
San Francisco drives tech; will it drive away robot taxis?

2023-08-08
The Jakarta Post
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) whose expanded use is under regulatory review. No actual harm or incident is reported, but the decision concerns the potential for future harm related to the safety and operation of robot taxis. This therefore qualifies as an AI Hazard: the development and use of these AI systems could plausibly lead to harm, though no harm has yet occurred.
San Francisco drives tech; will it drive away robot taxis?

2023-08-08
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous vehicles operated by Waymo and Cruise. It reports on actual incidents in which these AI systems were involved in collisions causing injuries and a fatality (a dog), which constitutes harm to living beings. The involvement of AI in these harms is direct, as the autonomous driving systems control the vehicles. This therefore qualifies as an AI Incident due to realized harm linked to the use and operation of AI systems in autonomous vehicles.
Recalling a wild ride with a robotaxi named Peaches as regulators mull San Francisco expansion plan

2023-08-05
National Post
Why's our monitor labelling this an incident or hazard?
The robotaxis are AI systems performing autonomous driving tasks. Their malfunctions, even without major accidents, represent failures in AI operation that could plausibly lead to injury or harm to people; the direct link between the AI malfunction and that harm fits the definition of an AI Incident. The public resistance and regulatory caution further underscore the recognized risk. This event therefore qualifies as an AI Incident rather than a mere hazard or complementary information.
Recalling a wild ride with a robotaxi named Peaches as regulators mull San Francisco expansion plan

2023-08-05
My Northwest
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the autonomous vehicle's driving system) that malfunctioned during operation, causing the vehicle to deviate from the intended route, fail to respond properly, and ultimately leave the passenger stranded in the middle of the street late at night. This constitutes direct harm to the passenger's safety and wellbeing (harm to a person). The article also references numerous other incidents reported by city officials, indicating a pattern of safety hazards caused by these AI systems. This therefore qualifies as an AI Incident due to the realized harm and the malfunction of the AI system in use.
SF cab drivers protest expansion of Cruise and Waymo; city leaders question safety of robotaxis

2023-08-08
ABC7 News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (autonomous robotaxis) whose use has directly led to multiple incidents, including unexpected stops and interference with emergency vehicles, disrupting critical infrastructure (emergency response). The fire chief's testimony about 55 reported incidents of interference, and the concerns about safety and reliability, demonstrate realized harm. The protests and public hearing focus on these harms and on the companies' responses. Hence, this is an AI Incident: the AI systems' use has directly caused harm and disruption.
San Francisco drives tech; will it drive away robot taxis? | Business

2023-08-07
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous vehicle technology used by Waymo and Cruise. It reports on collisions with injuries and on erratic driving behavior that have directly led to safety concerns and potential harm to people and emergency services. These constitute realized harms linked to the AI systems' use and performance, so the event qualifies as an AI Incident: the AI systems directly caused or contributed to harm (injury risk, traffic disruption). The regulatory deliberations and public debate are contextual and do not negate the presence of incidents involving harm.
Cruise, Waymo await decisions on self-driving service in San Francisco

2023-08-08
Automotive News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous vehicle technologies developed by Cruise and Waymo. The incidents described — vehicles blocking traffic for hours, interfering with emergency responders, and failing to respond appropriately to police requests — show that the AI systems' use has directly or indirectly disrupted critical infrastructure and raised public safety concerns. These disruptions count as harms under clauses (b) and (a), respectively, of the AI Incident definition. The regulatory delays and calls for better data and safety benchmarks further support the view that these harms are materialized and significant. Hence, the event is best classified as an AI Incident rather than a hazard or complementary information.
San Francisco public safety agencies concerned about robotaxi expansion

2023-08-08
NBC Bay Area
Why's our monitor labelling this an incident or hazard?
The robotaxis are AI systems operating autonomously on public roads. Their unexpected stopping behavior has directly caused disruption to emergency services, as evidenced by 55 incidents reported by the San Francisco Fire Department where emergency vehicles were blocked. This disruption to critical infrastructure management (emergency response) qualifies as harm under the AI Incident criteria. The event involves the use of AI systems and their malfunction or operational issues leading to harm. Therefore, this is classified as an AI Incident.
Recalling a wild ride with a robotaxi named Peaches as regulators mull San Francisco expansion plan

2023-08-05
Spectrum News Bay News 9
Why's our monitor labelling this an incident or hazard?
The robotaxi is an AI system performing autonomous driving and ride-hailing functions. The described malfunction directly caused harm by stranding the passenger in an unsafe situation, which constitutes injury or harm to a person. The article also references a significant number of other safety-related incidents involving these AI systems, reinforcing the presence of realized harm. This event therefore meets the definition of an AI Incident: the AI system's malfunction during use directly caused harm.
Cops, Firefighters, and of Course Taxi Drivers Tee Off on Self-Driving Robotaxis Before Key Regulatory Vote

2023-08-08
SFist - San Francisco News, Restaurants, Events, & Sports
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of self-driving robotaxis operated by Cruise and Waymo. These AI systems have caused 55 'Unusual Occurrence' incidents interfering with firefighters' duties, delaying emergency operations for up to half an hour, which constitutes disruption of critical infrastructure (emergency response). The involvement of AI is direct, as the robotaxis' autonomous operation and their failure to appropriately respond to emergency situations are the root causes of the harm. The event is not merely a potential risk but describes realized harm, making it an AI Incident rather than a hazard or complementary information.
Recalling a wild ride with a robotaxi named Peaches as regulators mull San Francisco expansion plan

2023-08-05
Napa Valley Register
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous robotaxi) whose malfunction directly harmed a person by stranding them in an unsafe situation. The AI system's failure to correctly navigate and respond to the passenger's destination is a clear example of a malfunction during use. The article also references multiple other safety-related incidents involving similar AI systems, reinforcing the presence of realized harm. The harm includes potential physical risk and psychological distress, fitting the definition of injury or harm to a person. The classification as an AI Incident is therefore appropriate.
GM self-driving firm Cruise, rival to Tesla, is bringing robotaxis to a new city

2023-08-08
The Daily Courier
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous driving AI) in real-world testing and deployment. However, the article describes no actual harm or incidents caused by the AI system, only potential regulatory concerns and the company's safety claims. Since no harm has occurred but plausible future harm exists given the nature of autonomous vehicles, this qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information, because the focus is on the expansion and testing of an AI system with inherent risks, not on responses or updates to past incidents.
Recalling a wild ride with a robotaxi named Peaches as regulators mull San Francisco expansion plan

2023-08-05
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The robotaxi Peaches is an AI system operating autonomously to provide ride-hailing services. The described malfunction — driving away from the drop-off point, navigation confusion, and ultimately stranding the passenger — constitutes a failure of the AI system's operation. This malfunction directly harmed the passenger's safety and caused distress, fulfilling the criteria for an AI Incident. The article also situates the event in a broader pattern of safety-related incidents involving similar AI systems, reinforcing the classification. The involvement lies in the use and malfunction of the AI system, and the harm is realized, not merely potential.
Recalling a wild ride with a robotaxi named Peaches as regulators mull San Francisco expansion plan | FOX 28 Spokane

2023-08-05
FOX 28 Spokane
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a robotaxi) whose malfunction caused the vehicle to go off course, indicating a failure in the AI's operation. Although no direct harm occurred, the malfunction demonstrates a credible risk of harm to passengers or public safety. The article also mentions regulatory consideration of expansion plans, highlighting the potential for future incidents. This is therefore best classified as an AI Hazard rather than an AI Incident, as harm has not yet materialized but is plausible.
Why self-driving taxis are facing their moment of truth in San Francisco

2023-08-08
The National
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in self-driving cars operated by Waymo and Cruise. It reports collisions involving injuries and operational issues causing traffic and emergency service disruptions, harms directly or indirectly linked to the AI systems' use. The presence of injuries and public safety concerns meets the criteria for harm to persons and disruption of critical infrastructure. Although the safety record is disputed, data from the San Francisco County Transportation Authority and the observed erratic behaviors support classification as an AI Incident. The article does not merely discuss potential risks or regulatory responses; it documents realized harms and operational impacts caused by the AI systems.