Waymo Robotaxis Complete 1 Million Miles with No Fatalities, Begin Driverless Testing in Los Angeles

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Waymo's autonomous taxis have driven over one million miles without a human driver, recording no fatalities or serious injuries; two minor collisions occurred, neither caused by the AI. The company has now launched fully driverless taxi testing in Los Angeles, raising potential future safety concerns, though no harm has been reported so far.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes the deployment and testing of AI-driven autonomous vehicles without human safety operators, which clearly involves AI systems. Although the testing is limited to employees and is part of a controlled rollout, the use of AI in autonomous driving carries inherent risks that could plausibly lead to harm (e.g., accidents or injuries). Since no harm has yet occurred or been reported, this situation constitutes an AI Hazard rather than an AI Incident. The article focuses on the testing and regulatory permits rather than any realized harm or incident.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability; Privacy & data governance; Respect of human rights

Industries
Mobility and autonomous vehicles; Robots, sensors, and IT hardware; IT infrastructure and hosting

Harm types
Physical (injury); Physical (death); Economic/Property; Reputational; Public interest; Psychological; Human or fundamental rights

Severity
AI hazard

Business function
Research and development; Monitoring and quality control; Citizen/customer service

AI system task
Recognition/object detection; Forecasting/prediction; Event/anomaly detection; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard

Waymo to test driverless rides with employees in Los Angeles

2023-02-28
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event describes the deployment and testing of AI-driven autonomous vehicles without human safety operators, which clearly involves AI systems. Although the testing is limited to employees and is part of a controlled rollout, the use of AI in autonomous driving carries inherent risks that could plausibly lead to harm (e.g., accidents or injuries). Since no harm has yet occurred or been reported, this situation constitutes an AI Hazard rather than an AI Incident. The article focuses on the testing and regulatory permits rather than any realized harm or incident.
Robotaxi tech improves but can they make money?

2023-03-03
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems controlling driverless robotaxis, which are operating in public urban environments. There are mentions of operational malfunctions (e.g., doors not closing properly, unexpected stopping) that have raised concerns among city officials, indicating plausible safety risks. However, no actual harm such as injury, property damage, or rights violations has been reported. The AI systems' use and malfunction could plausibly lead to harm, fulfilling the criteria for an AI Hazard. The article also discusses economic and operational challenges but does not report any realized harm or incident. Therefore, the event is best classified as an AI Hazard.
Google's Waymo Prepared To Trial Robotaxi Services with Employees in Los Angeles

2023-02-28
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system—Waymo's autonomous driving technology—in a real-world setting where the AI system will control vehicles without human safety operators. While no harm has been reported yet, the deployment of fully autonomous vehicles in public urban areas carries plausible risks of harm to people or property if the AI system malfunctions or makes incorrect decisions. Therefore, this event represents a credible potential for harm stemming from the AI system's use, fitting the definition of an AI Hazard rather than an Incident, as no actual harm has occurred yet.
Google's Waymo Prepared To Trial Robotaxi Services with Employees in Los Angeles

2023-02-28
Yahoo Sports
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system—Waymo's autonomous driving technology—to provide robotaxi services without human safety operators. While the trial is limited to employees and no harm is reported, the deployment of fully autonomous vehicles in public spaces carries plausible risks of harm such as accidents or injuries. Therefore, this event represents a credible potential for harm stemming from the AI system's use, qualifying it as an AI Hazard rather than an Incident since no actual harm has been reported yet.
Waymo is starting driverless taxi tests in Los Angeles | Engadget

2023-02-27
engadget
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous driving AI) in real-world testing, which could plausibly lead to harm if malfunctions or failures occur. However, the article describes a controlled testing phase with safety measures and no actual harm or incident reported. Therefore, this qualifies as an AI Hazard, reflecting the plausible future risk of harm from autonomous vehicle operation during testing.
Waymo robotaxis have now driven 1 million miles autonomously with no recorded injuries

2023-03-02
TechSpot
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving system) whose use has directly led to a significant milestone of safe operation without injury or death. Although minor incidents occurred, they were not caused by the AI system's malfunction or error. Since no harm has occurred and the AI system's use is demonstrated as safe, this does not qualify as an AI Incident. There is no indication of plausible future harm or hazard either, as the data supports safety. The article primarily reports on operational data and safety performance, which is informative but does not describe harm or risk. Therefore, this is Complementary Information providing context and updates on AI system deployment and safety.
Everyone says same thing about autonomous startup ditching human safety drivers

2023-02-27
The US Sun
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Waymo's autonomous driving technology) in a real-world setting without human safety drivers, which could plausibly lead to harm such as injury to people or disruption of critical infrastructure if the AI system malfunctions or fails to respond appropriately. Since no actual harm or incident is reported, but the potential for harm is credible given the nature of fully autonomous vehicle operation in urban areas, this qualifies as an AI Hazard rather than an AI Incident.
Waymo joins Cruise in 1M test mile club, expands driverless rides to Los Angeles

2023-03-01
Electrek
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (autonomous driving AI) in active use and reports on its safety record and expansion. No actual harm or injury has occurred, and the incidents were minor with no injuries. The article focuses on progress and safety data, not on harm or risk warnings. While future hazards are possible with expanded deployment, the article does not emphasize plausible future harm or risks. Thus, it is not an AI Incident or AI Hazard. Instead, it provides complementary information about the AI system's deployment and safety performance, fitting the definition of Complementary Information.
Waymo starts autonomous testing in LA with no human driver

2023-02-27
9to5Google
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous driving technology) in real-world testing without human drivers, which is a direct use of AI. Although no harm or incident has been reported so far, the nature of autonomous vehicle operation in a complex urban environment like Los Angeles presents credible risks of future harm such as accidents or injuries. Therefore, this situation qualifies as an AI Hazard because it plausibly could lead to an AI Incident in the future, but no actual harm has yet occurred or been reported.
After 1 Million Miles, Waymo's Autonomous Cars Might've Found The True Road Danger - SlashGear

2023-02-28
SlashGear
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems in autonomous driving and their extensive testing, which confirms AI system involvement. However, it states that no injuries have been reported, indicating no realized harm. The mention of a limitation in accounting for certain road dangers suggests potential challenges but does not describe any specific event or circumstance where harm occurred or is imminent. Therefore, the content is best classified as Complementary Information, providing context and progress updates on AI deployment in autonomous vehicles without reporting an incident or hazard.
Waymo Robotaxis Hit 1 Million Miles With No Fatalities

2023-03-02
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's fully autonomous robotaxis) in active use. However, no harm or injury caused by the AI system itself has occurred; the minor incidents were caused by human drivers, not the AI. The report focuses on the safety performance and expansion plans, without any indication of realized or potential harm from the AI system. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it provides complementary information about the AI system's operational safety and deployment progress, which helps contextualize the AI ecosystem and informs stakeholders about real-world AI use and safety outcomes.
Waymo robo taxis rack up a million miles with no fatalities

2023-03-02
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving system) actively operating on public roads and directly involved in collisions that meet official reporting criteria. Although no fatalities occurred, the collisions had a measurable risk of injury, and the AI system's inability to avoid one collision indicates a malfunction or limitation in its operation. The article provides detailed analysis of these incidents, showing direct involvement of the AI system in harm-related events. Therefore, this qualifies as an AI Incident under the definition of an event where the use or malfunction of an AI system has directly or indirectly led to harm (injury risk). The article does not merely discuss potential future harm or general AI developments, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system is central to the events described.
Waymo launches its driverless taxi pilot in Los Angeles

2023-02-28
Android Headlines
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of fully autonomous vehicles controlled by AI systems (autonomous driving software and algorithms). The event is a pilot program launch, with no reported accidents, injuries, or rights violations. The AI system's involvement is in its use for driverless taxi service. While the deployment of autonomous vehicles inherently carries plausible risks of harm (accidents, traffic disruption), the article does not report any actual harm or incident. Therefore, the event is best classified as an AI Hazard, reflecting the credible potential for future harm from the AI system's use in public roads, but no realized harm yet.
Waymo Begins Fully Driverless Taxi Testing in Los Angeles

2023-02-28
中关村在线
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous driving AI) in real-world testing. However, there is no indication that any harm has occurred or that there is an imminent risk of harm. The testing is controlled, limited, and follows regulatory approval. Therefore, this event does not describe an AI Incident or AI Hazard but rather an update on ongoing AI system deployment and testing, which fits the definition of Complementary Information.
According to Zhitong Finance APP, Alphabet's (GOOGL.US) autonomous driving unit Waymo said Wednesday local time that it has cut 137 employees, its second round of layoffs this year. Waymo's co-CEOs told staff in an internal email that, together with the latest round…

2023-03-02
证券之星
Why's our monitor labelling this an incident or hazard?
While Waymo is an AI system developer (autonomous driving technology), the article focuses on layoffs and business challenges rather than any harm caused or plausible harm from AI system malfunction or misuse. There is no mention of accidents, injuries, rights violations, or other harms linked to the AI systems. The layoffs themselves are a business decision and do not constitute an AI Incident or AI Hazard. The article provides contextual information about the AI ecosystem and industry trends, which fits the definition of Complementary Information.
Fresh Morning Tech | Waymo begins its second round of layoffs this year; iQiyi CEO responds to screen-casting restrictions; Suzhou legislates to ban big-data price discrimination against regular customers

2023-03-02
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article mentions AI systems implicitly in contexts such as autonomous driving (Waymo) and big data analytics (Suzhou legislation), but no direct or indirect harm caused by AI systems is reported. The Suzhou legislation is a preventive measure addressing potential harms from big data practices, which can involve AI, but no incident or hazard is described as occurring or imminent. Other items are corporate financial reports or product updates without harm or risk focus. Therefore, the article is best classified as Complementary Information, providing context and governance responses related to AI without reporting a new AI Incident or AI Hazard.
Waymo to Let Employees Test Driverless Cars in Los Angeles with No Human Safety Operators - cnBeta.COM (mobile edition)

2023-02-28
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI systems (autonomous driving AI) in robotaxi services without human safety operators, which is a direct use of AI. Although no incident or harm has been reported, the nature of fully autonomous vehicle testing without human safety drivers plausibly could lead to injury, property damage, or other harms if the AI system fails. Hence, it fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident involving harm.