Self-Driving Cars Disrupt Emergency Services and Raise Privacy Concerns in San Francisco


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Waymo and Cruise self-driving cars in San Francisco have repeatedly impeded emergency vehicles, run over fire hoses, and malfunctioned, causing safety hazards and disruptions. Additionally, police have begun using robotaxi video footage for investigations, raising significant privacy and human rights concerns over the use and potential misuse of AI-collected data.[AI generated]

Why's our monitor labelling this an incident or hazard?

The vehicles involved are AI systems (self-driving cars) whose malfunction or inadequate performance in real-time decision-making has directly caused disruption to emergency response operations, which qualifies as harm to critical infrastructure management and operation. The incidents have occurred multiple times (66 times this year), indicating realized harm rather than potential harm. Therefore, this qualifies as an AI Incident due to the direct link between AI system malfunction/use and harm to emergency services.[AI generated]
AI principles
Safety, Robustness & digital security, Privacy & data governance, Respect of human rights, Accountability, Transparency & explainability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Workers, General public

Harm types
Physical (injury), Public interest, Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Recognition/object detection, Forecasting/prediction, Goal-driven organisation

Articles about this incident or hazard


Waymo, Cruise vehicles have impeded emergency vehicle response 66 times this year: SFFD

2023-06-27
KRON4

I wanted to love driverless taxis, but then my ride took a sinister turn

2023-06-30
Financial Times News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous vehicle AI) malfunctioning during use, directly causing a temporary safety hazard and disruption. The locked doors and unexpected stopping of the vehicle represent a failure of the AI system to operate safely and reliably, a direct harm to the passenger and potentially to others nearby. The article reports an actual event with realized harm (temporary confinement and traffic disruption), not merely a potential risk. It therefore qualifies as an AI Incident under the framework: the AI system's malfunction directly led to harm that, while non-physical, was safety-related and disruptive.

Police Are Requesting Self-Driving Car Footage For Video Evidence

2023-06-29
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in self-driving cars that collect video data used by police to investigate crimes. The AI systems' operation (use) has directly led to law enforcement obtaining footage that influences criminal investigations, which affects individuals' privacy rights and raises human rights concerns. The harms include potential violations of privacy and surveillance overreach, which fall under violations of human rights and breach of obligations to protect fundamental rights. The event is not merely a potential risk but an ongoing reality with documented cases where footage was used or sought for legal purposes. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Police want robotaxi video footage to help solve crimes

2023-06-30
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (robotaxis with autonomous driving and continuous video recording capabilities) whose data is being used by police to investigate crimes. This use directly affects human rights, specifically privacy rights, and raises concerns about potential abuse and chilling effects on free speech. The police have already obtained warrants and used footage from these AI systems to solve crimes, indicating realized impact rather than just potential harm. Hence, this is an AI Incident involving violations or risks to human rights through the use of AI system data in law enforcement.

Journalist documents wild ride inside Waymo self-driving car in SF

2023-06-29
ABC7 News
Why's our monitor labelling this an incident or hazard?
The self-driving cars are AI systems operating autonomously on public roads. The reported events of running over fire hoses, blocking fire trucks, and stalling for hours have directly disrupted critical infrastructure management and operation, fulfilling harm criterion (b). These events therefore qualify as AI Incidents due to the realized harm caused by the AI systems' malfunctions or operational failures.

Your Car Records Everything, Now Cops Want That Data

2023-07-01
CleanTechnica
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (robotaxis with cameras and sensors) and discusses the police's attempts to access data these systems collect. While there are clear privacy and human rights concerns, the article does not report an actual harm or violation that has occurred due to the AI system's use or malfunction. The concerns are about potential future misuse and lack of transparency, which could plausibly lead to harm but have not yet materialized as an incident. Therefore, this is best classified as an AI Hazard, reflecting the plausible risk of harm from police access to AI-collected data without proper safeguards.