Waymo Robotaxi AI Failures Lead to Safety Incidents and Regulatory Scrutiny in the US

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Waymo's autonomous vehicles in the US have been involved in multiple AI-related incidents, including driving a passenger into a police standoff and repeatedly failing to stop for school buses, thereby violating traffic laws and endangering passengers and children. These failures have prompted federal investigations, software recalls, and public safety concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (Waymo's autonomous driving software) whose malfunction or inadequate behavior (not stopping or slowing for school buses) has directly led to violations of traffic laws designed to protect children's safety, thus posing injury or harm risks to persons. The repeated incidents and regulatory investigations confirm that harm has occurred or is ongoing. The AI system's development and use are central to the event, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Safety, Robustness & digital security, Accountability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers, Children

Harm types
Physical (injury)

Severity
AI incident

AI system task
Recognition/object detection, Goal-driven organisation, Reasoning with knowledge structures/planning

In other databases

Articles about this incident or hazard

2025-12-08
guancha.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous driving software) whose malfunction or inadequate behavior (not stopping or slowing for school buses) has directly led to violations of traffic laws designed to protect children's safety, thus posing injury or harm risks to persons. The repeated incidents and regulatory investigations confirm that harm has occurred or is ongoing. The AI system's development and use are central to the event, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
US self-driving taxi Waymo advances: opens highway rides to passengers for the first time

2025-12-05
公共電視
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving AI) in active use (deployment of robotaxis on highways without human drivers). However, the article does not describe any injury, property damage, rights violations, or other harms caused by the AI system. The focus is on the expansion of service and safety protocols, implying potential risks but no actual incidents. Therefore, this qualifies as an AI Hazard because the autonomous driving AI could plausibly lead to harm in the future (e.g., accidents on highways), but no harm has yet occurred or been reported.
In the US, a Waymo drove me into a police standoff

2025-12-04
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous robotaxi) whose use led to a direct risk of harm to a passenger by driving into a police standoff with armed suspects. The AI system's decision to run a red light and approach the suspect vehicle exposed the passenger to potential injury, fulfilling the criteria for harm to a person. The article also mentions other problematic behaviors of the AI system, indicating malfunction or suboptimal decision-making. Since the AI system's use directly led to a hazardous situation with realized risk, this is classified as an AI Incident rather than a hazard or complementary information.
Waymo robotaxis under investigation for illegally passing stopped school buses

2025-12-06
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as Waymo's autonomous driving system. The system's use has directly led to multiple violations of traffic laws designed to protect children's safety around school buses, which constitutes harm to persons and communities. The repeated nature of the incidents, despite software updates, and regulatory investigations confirm realized harm and risk. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction or misuse in operation.
Waymo issues voluntary recall, will roll out a software update

2025-12-06
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The autonomous driving system is an AI system involved in the event. The system's use has directly led to multiple violations of traffic safety rules (improperly passing school buses), which could cause harm to people (e.g., children near school buses). Although no accident is reported, the repeated violations represent realized safety risks, thus constituting an AI Incident. The voluntary recall and software update are mitigation measures but do not negate the fact that the AI system's use has already led to safety-related harms or risks.
Waymo decides to recall driverless cars to fix the school bus passing problem

2025-12-08
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Waymo's autonomous driving software) whose malfunction (failure to slow or stop for school buses) has directly led to multiple violations of traffic laws designed to protect children's safety. The repeated illegal passes of school buses represent a direct risk of injury or harm to children, fulfilling the criteria for harm to persons. The involvement of regulatory authorities and the recall further confirm the seriousness of the incident. Therefore, this is classified as an AI Incident due to the direct link between the AI system's malfunction and realized or imminent harm.
In the US, a Waymo drove me into a police standoff - cnBeta.COM mobile edition

2025-12-04
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves a Waymo autonomous vehicle, which is an AI system performing real-time driving decisions. The vehicle's AI system made a decision to cross a red light and enter a dangerous police standoff area, directly exposing the passenger to potential harm. This is a clear example of an AI system's use leading to direct harm or risk of harm to a person. The incident is not hypothetical or potential but has occurred, with the AI system's behavior being a pivotal factor. Therefore, this qualifies as an AI Incident under the OECD framework.
Waymo self-driving cars under investigation over 19 illegal school bus passes in Texas

2025-12-05
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Waymo's autonomous driving technology) whose use has directly led to multiple violations of traffic laws intended to protect children's safety, with documented incidents posing real risks to students. The regulatory investigation and the refusal to halt operations despite ongoing incidents confirm the AI system's role in causing or contributing to harm. Therefore, this qualifies as an AI Incident due to direct harm or risk to health and safety of persons (children).
Waymo under regulatory scrutiny over robotaxi behavior around school buses, files recall application - cnBeta.COM mobile edition

2025-12-06
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving software) whose malfunction or improper behavior around school buses has led to regulatory scrutiny due to safety concerns. The AI system's failure to slow down or stop appropriately near school buses with children boarding or alighting poses a direct risk of injury or harm to people, fulfilling the criteria for an AI Incident. Although no injuries have yet occurred, the repeated illegal passing incidents and regulatory investigation indicate realized harm potential and safety risks. The company's voluntary recall and software updates are responses to this incident, but the core issue remains an AI system malfunction impacting public safety.
Waymo concedes and recalls! Unable to crack the school bus problem, caught making 19 illegal passes

2025-12-08
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving software) whose malfunction in detecting and responding to school bus stop signals has directly caused multiple illegal overtaking incidents. These incidents pose direct harm to the safety of children and violate strict traffic laws, fulfilling the criteria for harm to persons and violation of legal obligations. The recall and investigation confirm the AI system's role in causing these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Running red lights, cutting in, running over cats and dogs: driverless taxis keep causing trouble

2025-12-09
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly describes autonomous vehicles controlled by AI systems causing or contributing to multiple harms: pedestrian injuries, traffic violations, and safety risks. The AI systems' malfunction or inadequate decision-making directly led to these harms, including a pedestrian being run over and multiple illegal overtaking incidents of school buses with children present. The regulatory authority's investigation and the software recall further confirm the AI system's role in causing harm. The presence of AI is clear (autonomous driving systems), and the harms are realized and significant. Hence, this is an AI Incident rather than a hazard or complementary information.
Waymo concedes and recalls! Unable to crack the school bus problem, caught making 19 illegal passes

2025-12-08
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's Robotaxi) whose software malfunction or failure to correctly interpret school bus signals has directly caused illegal overtaking of school buses, a serious traffic violation with potential harm to children and the community. The involvement of the NHTSA investigation and the planned recall to fix the software issue confirm the AI system's role in causing harm. The repeated nature of the incidents and the regulatory response further support classification as an AI Incident rather than a hazard or complementary information. The harm is realized (illegal overtaking risking children's safety), not just potential, and the AI system's malfunction is the direct cause.
Not slowing or stopping when passing school buses: Waymo announces recall of Robotaxi vehicles

2025-12-08
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous driving software) whose malfunction (failure to properly slow or stop for school buses) has directly endangered the safety of children, a clear harm to persons. The software recall and regulatory investigation confirm the AI system's role in causing these incidents. The harm is realized or ongoing, not merely potential, as multiple violations and risks to students have been documented. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Autonomous driving giant issues recall!

2025-12-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous driving software) whose malfunction (software logic defect) has directly led to multiple illegal and dangerous traffic violations involving school buses, posing injury risks to children. The recall and regulatory investigation confirm the harm and safety concerns are materialized, not just potential. Therefore, this is an AI Incident due to direct harm to health and safety caused by the AI system's malfunction during its use.
Not slowing or stopping when passing school buses: Waymo announces recall of Robotaxi vehicles

2025-12-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving software) whose malfunction (failure to properly slow down or stop near school buses) has directly led to safety violations and risks to children's physical safety. The recall and software updates are responses to these incidents. Since the AI system's malfunction has caused or could cause injury or harm to persons, this qualifies as an AI Incident under the OECD framework.
Widening the gap with Tesla's Robotaxi! Waymo reportedly tops 450,000 weekly orders, nearly doubling in under eight months

2025-12-09
新浪财经
Why's our monitor labelling this an incident or hazard?
While the article involves AI systems (autonomous driving and Robotaxi services), it does not describe any realized harm, malfunction, or misuse leading to injury, rights violations, or other harms. It also does not indicate any plausible future harm or risk stemming from these AI systems. Instead, it provides complementary information about the AI ecosystem, including market data, safety performance, and strategic outlooks. Therefore, it fits the category of Complementary Information rather than an AI Incident or AI Hazard.
Morgan Stanley predicts a 2026 autonomous driving boom! Rollout in 33 US cities, a Waymo-Tesla duopoly, with Uber and Lyft facing "erosion"

2025-12-10
新浪财经
Why's our monitor labelling this an incident or hazard?
The article centers on a forecast and market analysis of autonomous driving AI deployment and its economic impact, without reporting any actual harm or malfunction. While it involves AI systems and their use, the content is predictive and strategic rather than describing an incident or hazard with realized or imminent harm. Hence, it fits the category of Complementary Information, providing context and insight into AI ecosystem developments and potential future impacts, but not constituting an AI Incident or AI Hazard.
Waymo again announces a robotaxi software recall to fix failure to yield to stopped school buses

2025-12-09
udn科技玩家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving software) whose malfunction (failure to stop for stopped school buses) has directly led to a safety risk that could cause injury to children, a clear harm to health and safety. The recall is a corrective action addressing this malfunction. The involvement of the AI system in causing this harm is explicit and direct, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Musk says Waymo stands no chance of winning; Tesla to remove safety monitors by year's end

2025-12-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as both Tesla and Waymo operate autonomous driving AI systems. The planned removal of safety monitors indicates a change in the use of the AI system that could plausibly lead to harm (e.g., accidents) if the AI system fails, thus constituting a potential AI Hazard. Since no actual harm or incident is reported, and the article mainly discusses competitive positioning and future operational plans, this is best classified as an AI Hazard rather than an AI Incident or Complementary Information.
Ride volume up threefold year-over-year! Robotaxi leader's numbers impress; next year it will launch...

2025-12-11
东方财富网
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving AI used in Robotaxi services. The data on ride volumes and safety claims indicate active use of AI systems impacting real-world transportation. However, there is no mention of any harm, malfunction, or legal violation caused by these AI systems. The content focuses on operational achievements, expansion plans, and industry competition, which enrich understanding of the AI ecosystem and its societal implications. This fits the definition of Complementary Information, as it supports ongoing assessment of AI impacts without reporting new harm or credible risk of harm.
Woman hails a self-driving taxi and finds a bald man hiding in the trunk

2025-12-11
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous taxi) and its malfunction in detecting a hidden person in the trunk, which is a direct failure of the AI system's safety monitoring. This failure led to a safety risk and public concern, constituting harm to persons and communities. The incident is not merely a potential risk but a realized failure with direct consequences, including police involvement and public controversy. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Waymo driverless taxis top 14 million paid rides as Tesla joins the competition

2025-12-11
caixin.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving taxis) in active use, but there is no indication of any injury, rights violation, disruption, or other harm caused or plausibly caused by the AI system. The article is a factual update on the scale and growth of Waymo's service and competitive landscape, without reporting any incident or hazard. Therefore, it is best classified as Complementary Information, providing context and updates about AI deployment and market competition without describing harm or risk of harm.
Musk: Waymo has no chance of beating Tesla; clear divergence in technical approaches

2025-12-12
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article centers on the strategic and technological competition between two AI-driven autonomous vehicle companies. While AI systems are clearly involved, the content does not describe any event where these systems have caused or are causing harm, nor does it indicate a plausible risk of harm from their current operations. It is an informative piece about the AI ecosystem and competitive landscape, without reporting an incident or hazard. Therefore, it fits best as Complementary Information, providing context and updates on AI system deployment and industry competition.
US woman orders a ride for her daughter and finds a man in the trunk; driverless car safety questioned

2025-12-11
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The AI system (Waymo's autonomous taxi) was in use and its malfunction in detecting an unauthorized person in the trunk directly led to a safety incident. The failure of the AI system to detect the man hiding in the trunk represents a malfunction that could have caused harm to passengers or others. The event involves realized harm in terms of safety risk and undermines trust in the AI system's safety features. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's malfunction and the safety risk posed.
Woman hails a self-driving taxi and finds a bald man hiding in the trunk

2025-12-11
证券之星
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous taxi) and its malfunction in safety monitoring, which directly led to a security incident where a man was hidden in the trunk undetected. This failure contradicts the company's safety claims and poses a direct risk to passenger safety and public trust. The harm here is the breach of safety and potential physical or psychological harm to passengers, fulfilling the criteria for an AI Incident.
Waymo recalls thousands of self-driving cars

2025-12-11
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (Waymo's autonomous driving system) and describes its malfunction or failure to comply with safety regulations, which has directly led to unsafe behavior near school buses. This poses a direct risk of injury or harm to people, especially children, thus meeting the criteria for an AI Incident due to harm to health and safety. The recall is a response to these incidents, confirming the realized harm or risk.
Woman hails a self-driving taxi and finds a bald man hiding in the trunk - cnBeta.COM mobile edition

2025-12-11
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous taxi with AI-powered cameras and monitoring systems) whose malfunction (failure to detect a hidden person in the trunk) directly led to a safety and security incident requiring police involvement. This constitutes harm to the person (passenger safety risk) and a failure of the AI system's intended safety function. Therefore, this qualifies as an AI Incident.
Musk fires back at Waymo: it doesn't even stand a chance against Tesla - NetEase Mobile

2025-12-12
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving AI in Robotaxis) and their use, but there is no indication of any realized harm or incident resulting from these AI systems. The discussion about Tesla's upcoming removal of safety drivers implies a plausible future risk, but no harm or malfunction has yet occurred. The article mainly provides comparative data, company statements, and expert predictions, which serve as complementary information to understand the AI ecosystem and its development. Therefore, this is best classified as Complementary Information rather than an AI Incident or AI Hazard.
Waymo recalls, updates software for over 3000 vehicles, US regulator says

2025-12-11
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's automated driving system) whose software malfunction caused vehicles to behave unsafely by passing stopped school buses, increasing crash risk. This directly relates to harm to health and safety of people, fulfilling the criteria for an AI Incident. The recall and software update are responses to the incident but do not change the classification. The regulator's investigation and the reported incidents confirm that harm or risk of harm materialized due to the AI system's malfunction.
Waymo recalls over 3,000 vehicles in the US: Here's why - The Times of India

2025-12-12
The Times of India
Why's our monitor labelling this an incident or hazard?
The self-driving car software is an AI system responsible for autonomous vehicle behavior. The malfunction caused the vehicles to fail to stop for school buses, which is a direct safety hazard and has already resulted in reported incidents. This constitutes an AI Incident because the AI system's malfunction directly led to harm or risk of harm to people. The recall and fix are responses to this incident.
Waymo recalls and fixes over 3,000 vehicles over software issue, NHTSA says

2025-12-11
Yahoo
Why's our monitor labelling this an incident or hazard?
The automated driving system is an AI system making real-time decisions. The software malfunction caused vehicles to drive past stopped school buses, which is a direct safety hazard that could lead to injury or harm to children and others. The recall and software update indicate the issue was recognized and addressed, but the event itself involves a malfunction of an AI system that directly increased risk of harm. Therefore, this is an AI Incident as the AI system's malfunction directly led to a significant safety hazard.
Woman discovers man hiding in Waymo trunk

2025-12-12
Mashable
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Waymo autonomous vehicle) and concerns passenger safety, but no injury, violation of rights, or other harm occurred. The AI system did not malfunction or cause harm directly; rather, the situation arose from a human action (man hiding in the trunk after it was left open). The event plausibly could lead to harm in the future if such security lapses are exploited or repeated, thus constituting an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated, as it directly involves an AI system and a safety concern.
Waymo recalls 3,067 robotaxis after school bus safety issues

2025-12-12
Washington Times
Why's our monitor labelling this an incident or hazard?
Waymo's robotaxis use AI systems for autonomous driving. The vehicles' failure to stop for school buses as legally required constitutes a malfunction of the AI system's operation, directly leading to safety risks and citations. The harm is to the safety and well-being of children boarding or alighting school buses, which is injury or harm to persons. The recall and software update are remedial actions but do not negate the fact that harm has occurred. Hence, this event meets the criteria for an AI Incident.
"This is too fishy": Mom orders a Waymo for her daughter -- and finds a man in the trunk

2025-12-12
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously to provide transportation. The incident involves the use of this AI system, where a man was found hiding in the trunk, creating a direct safety risk to the passenger. The AI system's failure to detect or prevent this unauthorized presence constitutes a malfunction or failure in use that directly led to a harm scenario (risk of injury or harm to a person). The event is not merely a potential hazard but an actual incident involving the AI system's operation and safety. Hence, it meets the criteria for an AI Incident.
Waymo Boasts "Exponential Scaling"

2025-12-13
CleanTechnica
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving AI) actively operating in multiple cities, which is relevant to AI deployment. However, the article does not describe any injury, rights violations, property damage, or other harms caused or plausibly caused by the AI systems. It also does not highlight any credible risk of harm or near misses. The focus is on operational progress and regulatory status, which fits the definition of Complementary Information as it provides context and updates on AI system deployment and governance without reporting an incident or hazard.
Triple Waymo-robotaxi stand-off blocks San Fran street

2025-12-12
Driving
Why's our monitor labelling this an incident or hazard?
The vehicles involved are autonomous robotaxis, which clearly use AI systems for navigation and decision-making. The street blockage caused by these driverless vehicles constitutes a disruption of public infrastructure management, and the failure to stop for school buses is a direct safety hazard that could lead to injury or harm to children. The recall and software update indicate recognition of a malfunction in the AI system's behavior. Since harm or risk of harm has occurred or is imminent due to the AI system's malfunction or failure, this qualifies as an AI Incident.
Waymo Recalls, Updates Software for Over 3000 Vehicles, US Regulator Says

2025-12-11
Insurance Journal
Why's our monitor labelling this an incident or hazard?
The event describes a malfunction in an AI system (automated driving system) that directly increased the risk of harm by causing vehicles to pass stopped school buses illegally, which is a safety hazard with potential for injury. The recall and software update are responses to this AI Incident. The AI system's malfunction and use led to a direct safety risk, fulfilling the criteria for an AI Incident under harm to health and safety. The event is not merely a potential hazard or complementary information, as the risk manifested in illegal behavior and prompted regulatory action and recall.
Waymo Issues Recall Over Software Allowing For Illegal Maneuvers

2025-12-12
The Truth About Cars
Why's our monitor labelling this an incident or hazard?
The autonomous vehicles use AI systems for navigation and decision-making. The reported illegal passing of stopped school buses is a direct result of a software defect in the AI system controlling these vehicles. This defect has led to multiple confirmed incidents, including citations and regulatory investigations, indicating realized harm or risk to public safety. The recall and regulatory response confirm the AI system's malfunction and its role in causing these harms. Hence, this event meets the criteria for an AI Incident due to direct harm or risk to health and safety caused by the AI system's malfunction during its use.
Waymo Recalls Over 3,000 Self-Driving Vehicles Due to Software Glitch

2025-12-12
IVCPOST
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as Waymo's 5th-generation automated driving system controlling autonomous vehicles. The software glitch is a malfunction of this AI system that directly caused the vehicles to behave unsafely by passing stopped school buses with flashing red lights and stop arms extended, which is a violation of traffic laws and a direct safety risk to people, especially children. This constitutes harm to the health and safety of persons (harm category a). The recall and regulatory investigation confirm the seriousness of the incident. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system malfunction and realized harm risks.
After these Waymo controversies, we'll stick with Uber

2025-12-13
We Got This Covered
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of Waymo's autonomous vehicles. The incidents described include direct safety risks and actual harm or potential harm to people (e.g., a stranger trapped in a hot trunk, illegal passing of school buses, aggressive driving causing unpredictability, and entering a crime scene). These are harms to health and safety (a), and disruption of public safety operations (b). The AI system's malfunction or use is a contributing factor in these harms. Hence, the event is best classified as an AI Incident.
Woman discovers man hiding in trunk of driverless Waymo taxi when ordering ride

2025-12-13
AZfamily.com
Why's our monitor labelling this an incident or hazard?
The AI system (Waymo driverless taxi) is involved as the platform where the incident occurred, but the harm is due to a human hiding in the trunk, not due to AI malfunction or misuse. There is no direct or indirect harm caused by the AI system's development, use, or malfunction. The event is a security breach and the company's response is a governance measure. Hence, it fits the definition of Complementary Information rather than an Incident or Hazard.
Joseph Sabino Mistick: Will Pittsburgh welcome the changes it faces?

2025-12-14
TribLIVE
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (Waymo's autonomous vehicles) and its deployment in Pittsburgh, but it does not describe any harm or incident caused by the AI system. It also does not describe a plausible hazard scenario. The focus is on societal and economic implications and local government issues, which are not directly related to AI harms. Therefore, this is Complementary Information as it provides context and updates about AI deployment and local responses without reporting an AI Incident or Hazard.
Police Investigating Weird Man Found in Waymo by Passenger

2025-12-14
Futurism
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear as Waymo's autonomous vehicles rely on AI for navigation and safety monitoring. The incident involves the AI system's failure to detect a human hiding in the trunk, which is a malfunction or oversight in the AI's operation. This failure directly led to a safety and security risk, constituting harm or potential harm to persons. The police investigation and Waymo's recall of vehicles for software issues further support the classification as an AI Incident. The harm is realized as the man was found hidden in the vehicle, posing a direct safety threat. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Waymo's Software Patch to Not Run Down Children Getting Off School Buses Isn't Working, School Claims

2025-12-14
Futurism
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in Waymo's self-driving cars malfunctioning by not stopping for school buses as legally required, which is a direct safety hazard to children disembarking from buses. Multiple incidents have been recorded, including one where the vehicle passed disembarking students, indicating realized harm or imminent risk of harm. The company's refusal to restrict operations despite these risks further underscores the direct link between the AI system's behavior and potential injury. Therefore, this qualifies as an AI Incident due to direct harm or risk to health and safety caused by the AI system's malfunction and use.

Woman finds stranger in Waymo trunk during ride in LA

2025-12-14
ABC7
Why's our monitor labelling this an incident or hazard?
The incident involves a Waymo autonomous vehicle, which is an AI system. The stranger being trapped in the trunk and unable to exit indicates a malfunction or failure in the AI system's safety or operational protocols. The event directly relates to the use of the AI system and has led to a safety risk and harm to the person involved, even if no injury was reported. The company's acknowledgment and commitment to address the issue further support the classification as an AI Incident rather than a hazard or complementary information. Hence, the event meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

Gridlock Guy: Should Waymo be held to a higher standard than human drivers?

2025-12-14
The Atlanta Journal-Constitution
Why's our monitor labelling this an incident or hazard?
Waymo's autonomous vehicles use AI systems for navigation and decision-making. The reported behavior of running school bus stop signs is a malfunction or failure of the AI system to comply with traffic laws designed to protect children, leading to direct safety risks. The harm is to the health and safety of children and the community, fulfilling the criteria for an AI Incident. The article documents actual occurrences of this harm, not just potential risks, and the involvement of the AI system is explicit and central to the issue.

Waymo's Self-Driving Cars: Philadelphia Expansion and the Future of Transportation (2025)

2025-12-15
Manigi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's self-driving cars) and its planned use in Philadelphia. While there are references to past incidents and concerns about safety and complex scenarios, no direct or indirect harm has occurred in this expansion phase. The article primarily discusses the potential for future operations, regulatory hurdles, and societal responses, which aligns with a plausible future risk but not an actual incident. Therefore, this qualifies as an AI Hazard, as the deployment of fully autonomous vehicles could plausibly lead to incidents involving harm, but no such incident is described as having occurred yet.

Horrified mom finds man hiding in trunk of driverless taxi

2025-12-15
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a driverless taxi, which is an AI system performing autonomous navigation and passenger transport. The man's presence in the trunk represents a failure or misuse related to the AI system's operation and safety protocols. The incident caused direct concern and potential harm to the passengers' safety and well-being, fulfilling the criteria for harm to persons or communities. Although no physical injury occurred, the safety risk and psychological harm are significant. Thus, this is an AI Incident rather than a hazard or complementary information.

She calls an autonomous taxi to get to the maternity ward... and gives birth alone in the back seat

2025-12-13
Ouest France
Why's our monitor labelling this an incident or hazard?
The autonomous taxi is an AI system explicitly mentioned and involved in the event. However, the AI system did not malfunction or cause harm; the birth occurred naturally and safely, with no injury reported. The event does not describe a failure or misuse of the AI system leading to harm, nor does it present a credible risk of future harm. The incident is unusual and symbolic but does not meet the criteria for an AI Incident or AI Hazard. The article also discusses broader operational challenges and public criticism of the service, which adds context but does not constitute a new incident or hazard. Hence, the classification as Complementary Information is appropriate.

" Son bébé est né sur le siège arrière " : une femme accouche dans un taxi autonome à San Francisco

2025-12-11
Le Parisien
Why's our monitor labelling this an incident or hazard?
The autonomous taxi is an AI system involved in the event. The AI system's passenger assistance detected unusual activity and alerted emergency services, which is a positive use of AI. No harm or injury resulted from the AI system's operation; the birth was a natural event occurring during the ride. There is no indication of malfunction or misuse of the AI system causing harm. The event does not describe a hazard or incident but rather an unusual occurrence with a positive outcome and an example of AI system monitoring. Thus, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Automated taxi: a woman gives birth alone in a driverless car

2025-12-11
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The autonomous taxi is an AI system involved in the event. The woman giving birth alone in the vehicle is a direct consequence of using the AI system for transportation during labor. However, since no injury or harm occurred and the AI system did not malfunction to cause harm, this is not an AI Incident. The event is a real-world use case with a positive outcome, not a plausible future harm scenario, so it is not an AI Hazard. The article also discusses previous incidents and the company's statements, which provide additional context and societal response. Therefore, the event is best categorized as Complementary Information, as it enhances understanding of AI system use and public interaction without reporting new harm or risk.

Three Waymo cars end up stuck in a dead end!

2025-12-12
Auto Plus
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Waymo autonomous vehicles) whose driving algorithms malfunctioned in a complex urban environment, leading to a minor collision and vehicles becoming stuck. The malfunction directly caused property harm (minor damage to a vehicle) and disrupted vehicle operation. Although no physical injury occurred, the event fits the definition of an AI Incident: direct harm caused by an AI system malfunction during use. The need for human intervention to resolve the situation confirms the AI system's failure to handle the scenario autonomously. This is therefore an AI Incident rather than a hazard or complementary information.

The United States investigates traffic violations by Waymo autonomous vehicles

2025-12-11
Leblogauto.com
Why's our monitor labelling this an incident or hazard?
The event involves autonomous vehicles (AI systems) that have repeatedly failed to comply with legal traffic rules protecting school children, leading to dangerous situations. The AI system's malfunction or insufficient adaptation to traffic laws has directly endangered children's safety, fulfilling the criteria for harm to persons. The ongoing investigation and regulatory scrutiny confirm the seriousness of the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Waymo recalls its autonomous cars after incidents involving school buses

2025-12-12
Leblogauto.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as Waymo's autonomous driving software. The system's malfunction caused the vehicles to illegally pass stopped school buses, creating direct safety risks to children and pedestrians, which is harm to persons and communities. The recall and investigation confirm the AI system's role in these incidents. Therefore, this qualifies as an AI Incident because the AI system's malfunction and use have directly led to realized harm and legal violations related to safety.