Waymo Robotaxi Impedes Emergency Response and Is Shot at During Austin Shootings

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Austin, Texas, a Waymo self-driving taxi blocked emergency vehicles during a fatal mass shooting, briefly delaying ambulance access. In a separate incident, another Waymo robotaxi was shot at while carrying a passenger, causing vehicle damage but no injuries. Both incidents highlight safety and reliability concerns for autonomous vehicles in critical situations. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves a Waymo robotaxi, an AI system for autonomous driving. The AI system's malfunction (stalling and failing to move out of the way) directly caused a delay in emergency responders reaching victims of a terror attack, thus disrupting critical emergency services. Although the delay was brief and did not ultimately affect patient outcomes, the AI system's failure to act appropriately in this high-stakes context meets the criteria for an AI Incident due to disruption of critical infrastructure management and operation. The presence of harm (disruption) and direct causation by the AI system's malfunction justifies classification as an AI Incident rather than a hazard or complementary information. [AI generated]
AI principles
Safety
Robustness & digital security

Industries
Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task:
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard

Driverless cars block emergency responders from Austin terror attack

2026-03-02
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a Waymo robotaxi, an AI system for autonomous driving. The AI system's malfunction (stalling and failing to move out of the way) directly caused a delay in emergency responders reaching victims of a terror attack, thus disrupting critical emergency services. Although the delay was brief and did not ultimately affect patient outcomes, the AI system's failure to act appropriately in this high-stakes context meets the criteria for an AI Incident due to disruption of critical infrastructure management and operation. The presence of harm (disruption) and direct causation by the AI system's malfunction justifies classification as an AI Incident rather than a hazard or complementary information.

Video shows self-driving Waymo car blocking emergency vehicles...

2026-03-02
New York Post
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system operating autonomously without a driver. Its malfunction or inability to move out of the way blocked emergency vehicles, directly disrupting emergency response operations, which is critical infrastructure. Although the delay was brief and emergency personnel arrived quickly, the AI system's role in obstructing emergency vehicles during a deadly shooting incident constitutes a direct link to harm (disruption of critical infrastructure). Therefore, this qualifies as an AI Incident.

Waymo execs testify at San Francisco City Hall about their vehicles' actions during power outage

2026-03-03
CBS News
Why's our monitor labelling this an incident or hazard?
The Waymo autonomous vehicles are AI systems that rely on communication and human intervention to operate safely. During the power outage, the loss of 5G communication caused the AI systems to fail to operate properly, resulting in vehicles stopping in dangerous locations and blocking emergency access. This disruption to critical infrastructure management and potential safety hazards meet the criteria for an AI Incident, as the AI system's malfunction directly and indirectly led to harm and operational disruption. The event is not merely a potential hazard or complementary information but a realized incident involving AI harm.

Waymo Car Blocks Ambulance Responding to Austin Mass Shooting, on Video

2026-03-02
TMZ
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is a self-driving taxi, clearly an AI system. Its stopping in the middle of the street blocked an ambulance and emergency personnel, directly causing a delay in emergency response to a mass shooting, which is harm to health and disruption of critical infrastructure (emergency services). The incident is documented with video evidence and media reports, confirming the AI system's role in the harm. Although paramedics arrived quickly, the blockage caused a measurable disruption. Hence, this event meets the criteria for an AI Incident.

Self-Driving Waymo EV Blocked First Responders at Austin Mass Shooting Scene

2026-03-02
Breitbart
Why's our monitor labelling this an incident or hazard?
The autonomous Waymo vehicle, an AI system, obstructed emergency vehicles responding to a mass shooting, causing a delay. This is a direct consequence of the AI system's malfunction or failure to navigate properly in a critical urban environment. The delay in emergency response plausibly exacerbated harm to victims of the shooting, fulfilling the criteria for an AI Incident as the AI system's malfunction directly led to harm (delay in critical emergency services).

Waymo blocks ambulance responding to Austin mass shooting

2026-03-02
Axios
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous vehicle) whose use directly led to a harmful outcome: blocking an ambulance responding to an emergency. This obstruction could delay critical medical response, posing a risk of injury or harm to people. The incident is a direct consequence of the AI system's operation in a real-world emergency context, fulfilling the criteria for an AI Incident. Although the ultimate outcome for the ambulance is unclear, the blocking itself constitutes realized harm or at least a significant risk thereof, given the emergency context. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

WATCH: Driverless Waymo Taxi Blocks Emergency Response To Deadly Austin Shooting

2026-03-02
The Daily Wire
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system performing autonomous driving tasks. Its behavior—blocking the road and emergency vehicle—directly caused a delay in emergency response, which is a disruption of critical infrastructure management. The incident is a realized harm, not just a potential risk, as the ambulance was physically blocked and had to reroute. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system causing harm through disruption of emergency services.

Waymo Robotaxi Blocks First Responders in Austin Mass Shooting, Raising Fresh Questions About Safety

2026-03-02
autoevolution
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously on public roads. Its malfunction or inability to promptly yield to emergency responders during a mass shooting incident directly led to a blockage that delayed emergency vehicles, which can be considered harm to people (injury or risk thereof). The incident is not merely a potential hazard but a realized event where the AI system's use caused disruption and risk. Hence, it meets the criteria for an AI Incident rather than an AI Hazard or Complementary Information.

Austin 6th Street shooting: Waymo caught on video blocking responding ambulance

2026-03-02
FOX 4 News Dallas-Fort Worth
Why's our monitor labelling this an incident or hazard?
The Waymo self-driving car is an AI system involved in the event. Its actions temporarily blocked an ambulance, which is critical infrastructure for emergency medical response. Although no injury or harm resulted from this blockage, the AI system's malfunction or decision-making could plausibly lead to harm in similar future scenarios. The event does not describe actual harm caused by the AI system but indicates a credible risk of disruption to emergency services. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Waymo goes viral after blocking EMS during deadly Austin shooting

2026-03-02
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article explicitly describes a self-driving car (Waymo) that, during an emergency response to a deadly shooting, blocked an EMS ambulance by getting stuck and failing to move promptly. This is a clear example of an AI system's malfunction or failure to act appropriately in a critical situation, leading to disruption of emergency services and potential harm to people needing urgent care. The AI system's involvement is direct and causally linked to the harm (delay in EMS response). Hence, it meets the criteria for an AI Incident.

WATCH: Waymo robotaxi blocks ambulance during Austin mass shooting

2026-03-02
Austin American-Statesman
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. Its action of blocking an ambulance during a critical emergency response is a direct consequence of its autonomous navigation decisions. This caused a delay in emergency services, which is a disruption of critical infrastructure. The incident is not hypothetical or potential but has occurred, with direct harm resulting from the AI system's behavior. Therefore, it qualifies as an AI Incident.

Waymo's performance during SF power outages scrutinized

2026-03-03
KRON4
Why's our monitor labelling this an incident or hazard?
The autonomous vehicles are AI systems whose malfunction during the power outage caused them to stop in intersections, blocking traffic and delaying emergency services. This is a direct harm to the management and operation of critical infrastructure (traffic and emergency response). The event involves the use and malfunction of AI systems leading to realized harm, thus it qualifies as an AI Incident rather than a hazard or complementary information.

"Next level dystopian": Waymo robotaxi blocks first responders reacting to Austin mass shooting

2026-03-02
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously to pick up passengers. Its malfunction or failure to appropriately yield to emergency vehicles directly caused a delay in ambulance response to a mass shooting, which is harm to health (a). The incident is documented with video evidence and official confirmation, showing the AI system's involvement in causing the obstruction. The harm is realized, not just potential, as emergency response was impeded. Hence, this is an AI Incident rather than a hazard or complementary information.

Supervisors question Waymo over problems during widespread San Francisco outage

2026-03-03
NBC Bay Area
Why's our monitor labelling this an incident or hazard?
Waymo's autonomous vehicles are AI systems that make real-time navigation decisions. Their freezing during the power outage caused traffic blockages, which is a disruption of community infrastructure and public order, constituting harm. The event describes a realized harm caused by the AI system's malfunction, making it an AI Incident.

What would it take for Arlington to get Waymo robotaxis?

2026-03-02
ARLnow.com - Arlington, Va. Local News
Why's our monitor labelling this an incident or hazard?
The article describes a scenario where AI systems (Waymo's autonomous vehicles) could be deployed in a new region, but this deployment has not yet happened. There is no mention of any harm or incident caused by the AI system. The main content is about the potential for future AI system use and the legal framework needed to enable it. Therefore, this qualifies as an AI Hazard because the development and potential use of AI systems could plausibly lead to incidents or harms in the future, but no harm has yet occurred.

Waymo fails another emergency test during Austin mass shooting

2026-03-02
MyrtleBeachOnline
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Waymo's autonomous vehicle technology—and details its malfunction during emergency scenarios. These malfunctions directly caused harm by delaying emergency vehicles and potentially endangering lives, fulfilling the criteria for injury or harm to people (harm category a). The AI system's failure to act appropriately in these critical situations is a direct cause of the harm described. Therefore, this qualifies as an AI Incident.

Waymo Safety Concerns During Emergencies Being Discussed In SF Hearing

2026-03-02
SFist - San Francisco News, Restaurants, Events, & Sports
Why's our monitor labelling this an incident or hazard?
The autonomous vehicles are AI systems whose malfunction during the blackout directly caused traffic congestion and blocked emergency vehicles, posing a risk to public safety and emergency response. This fits the definition of an AI Incident because the AI system's malfunction led to harm to communities and disruption of critical infrastructure management (emergency vehicle access). The hearing and public concerns further confirm the significance of the harm caused. Therefore, this event is classified as an AI Incident.

In Charlotte, S.C., a Bumpy Debut for Waymo Robotaxis

2026-03-02
Government Technology
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous vehicle technology) in active use, and a collision occurred involving the AI system's vehicle. Although the AI system was under manual control and was not at fault, the incident is a real-world event involving AI system use and a collision, which is a form of harm (potential injury or property damage). The AI system's involvement in the event and the public safety implications meet the criteria for an AI Incident. The incident is not merely a potential risk (hazard) or a complementary information update, but a concrete event involving AI system use and harm (even if minor and caused by a third party).

Waymo Vehicle Blocks Traffic As First Responders Rush to Texas Bar: 'Ram It Out of the Way'

2026-03-02
The Nerd Stash
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system (self-driving car) that malfunctioned or failed to respond properly, causing it to block emergency vehicles. This directly disrupted the management and operation of critical infrastructure (emergency response), which fits the definition of harm (b). The incident is clearly described as having occurred, with the AI system's role pivotal in causing the disruption. Hence, it qualifies as an AI Incident rather than a hazard or complementary information.

Waymo Robotaxi Shot at in Austin: How a Late-Night Shooting Is Testing the Limits of Autonomous Vehicle Safety and Public Trust

2026-03-02
WebProNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous vehicle) in active use, carrying a passenger when it was attacked by gunfire. The shooting caused damage to the vehicle and endangered the passenger, fulfilling the criteria of harm to a person and property. The AI system's role is pivotal as the autonomous nature of the vehicle is central to the incident and the challenges it presents (no human driver as deterrent). The incident is not merely a potential risk but a realized harm, thus classifying it as an AI Incident rather than a hazard or complementary information.

Robotaxi Blocks EMS

2026-03-02
710 KURV - The Valley's News/Talk Station
Why's our monitor labelling this an incident or hazard?
The self-driving taxi is an AI system whose malfunction or failure to appropriately respond to emergency vehicle presence caused a delay in EMS operations. Even though the delay was brief and did not significantly impact patient care, the AI system's involvement directly led to disruption of critical infrastructure management and operation, fitting the definition of an AI Incident.

A Waymo Robotaxi Blocked An Ambulance During An Active Shooter Incident In Austin

2026-03-02
The Autopian
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. Its malfunction—confusion leading to blocking an ambulance during an emergency—demonstrates a failure in AI behavior that could plausibly lead to harm, such as delaying emergency response and causing injury or death. Although no harm occurred here, the incident highlights a credible risk of future harm if such AI behavior is not corrected. Therefore, it fits the definition of an AI Hazard, as the AI system's malfunction could plausibly lead to an AI Incident in the future. It is not an AI Incident because no actual harm or injury resulted from this event. It is not Complementary Information or Unrelated because the event directly involves an AI system and its malfunction with potential safety implications.

Footage Reveals Self-Driving Waymo Car Impeding Emergency Response During Fatal Austin Shooting

2026-03-02
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the self-driving Waymo taxi) whose malfunction (being stuck and blocking emergency vehicles) indirectly caused harm by impeding emergency response during a fatal shooting incident. Although emergency services arrived quickly, the obstruction caused by the AI system's failure to clear the path represents a disruption of critical infrastructure management and operation. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Waymo Self-Driving Car Blocks Ambulance At Fatal Austin Shooting - Officer Jumps In To Move It

2026-03-02
Dallas Express
Why's our monitor labelling this an incident or hazard?
The Waymo self-driving car is an AI system operating autonomously in a complex urban environment. Its attempt to execute a U-turn and its delay in yielding to an ambulance caused a temporary blockage of emergency response vehicles. The AI system's use thus indirectly disrupted critical infrastructure (emergency response). However, since no injury or harm resulted from this blockage and authorities confirmed no impact on patient outcomes, the event does not meet the threshold of an AI Incident causing realized harm. Instead, it represents a situation where the AI system's use created a plausible risk of harm or disruption, which was mitigated promptly. Therefore, this event is best classified as an AI Hazard, reflecting a credible risk of harm due to AI system behavior in a critical context, but without actual harm occurring.

Waymo autonomous car blocks ambulance crew responding to deadly Austin mass shooting

2026-03-04
Fox News
Why's our monitor labelling this an incident or hazard?
The Waymo self-driving car is an AI system operating autonomously. Its stopping behavior caused a temporary blockage of ambulances responding to a mass shooting, which is a disruption of critical infrastructure (emergency medical services). The incident involved the AI system's use and malfunction (slow reaction and stopping sideways). Although the harm was limited and quickly mitigated, the AI system's role was pivotal in causing the disruption. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Waymo vehicles under investigation for passing stopped school buses

2026-03-03
Investing.com
Why's our monitor labelling this an incident or hazard?
Waymo's self-driving vehicles are AI systems operating autonomously. Their illegal passing of stopped school buses with activated lights directly endangers children's safety, fulfilling the criterion of injury or harm to persons. The recall and investigation indicate that these incidents have occurred and are significant. Hence, this qualifies as an AI Incident due to realized harm linked to AI system malfunction or misuse.

NTSB Says Waymo Robotaxis Illegally Passed Stopped School Buses in New Incidents

2026-03-03
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Waymo's self-driving vehicles) whose malfunction and use have directly led to illegal and unsafe behavior (passing stopped school buses with active signals), which poses a safety risk to children and the community. The incidents have resulted in investigations by safety authorities and a recall of vehicles, indicating recognized harm. The collision with a child further confirms realized harm. Thus, the event meets the criteria for an AI Incident as the AI system's malfunction and use have directly caused harm and legal violations.

Waymo could expand to Virginia under new driverless car bill

2026-03-03
Axios
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (fully autonomous vehicles) and their potential commercial use, but no harm or incident has occurred yet. The article outlines a legislative proposal that would enable such use, implying a plausible future scenario where AI systems could operate driverless taxis. However, since no harm or malfunction is reported, and the event centers on regulatory and market expansion context, it fits best as Complementary Information, providing context on AI ecosystem developments and governance responses rather than describing an incident or hazard.

SF emergency director says cops forced to be 'roadside assistance' for Waymos

2026-03-03
San Francisco Gate
Why's our monitor labelling this an incident or hazard?
Waymo's autonomous vehicles are AI systems that navigate and operate without human drivers. During the PG&E blackout, these vehicles stalled and blocked intersections, causing traffic snarls and requiring first responders to intervene physically. The emergency director explicitly described this as a major public safety issue and an unacceptable reliance on first responders to act as roadside assistance. The AI system's malfunction under blackout conditions directly disrupted emergency management and public safety operations, fulfilling the criteria for an AI Incident under harm category (b) - disruption of critical infrastructure management and operation. The event involves the use and malfunction of AI systems leading to realized harm, not just potential harm or complementary information.

Google's Waymo blamed for blocking ambulance at Austin's mass shooting site

2026-03-03
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous vehicle) whose use directly led to a disruption of critical infrastructure management—emergency medical response—by blocking an ambulance. This constitutes harm under category (b) as it disrupted emergency services. Although no severe injury or death was reported due to the obstruction, the interference with ambulance access during a mass shooting is a significant harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm has materialized and is directly linked to the AI system's use.

NTSB says Waymo robotaxis illegally passed stopped school buses in new incidents

2026-03-03
CNA
Why's our monitor labelling this an incident or hazard?
Waymo's self-driving vehicles are AI systems operating autonomously. The reported illegal passing of stopped school buses with activated lights is a direct violation of traffic laws designed to protect children, representing a safety hazard and potential harm to persons. The incidents have already occurred multiple times, with investigations ongoing, indicating realized harm or risk. The AI system's malfunction or failure to comply with legal requirements is central to these events, fulfilling the criteria for an AI Incident.

Waymo Says It Has Nothing to Say After Its Self-Driving Taxi Blocked an Ambulance Responding to a Mass Shooting

2026-03-03
Futurism
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously on public roads. Its failure to promptly clear the way for an ambulance constitutes a malfunction or improper use of the AI system, directly disrupting emergency response operations, which qualifies as disruption of critical infrastructure management and operation. The incident caused a delay in ambulance access to a mass shooting scene, which is a direct harm to public health and safety. Therefore, this event meets the criteria for an AI Incident.

Why the Denver launch of Waymo robotaxis isn't delayed by the lack of snow

2026-03-03
The Denver Post
Why's our monitor labelling this an incident or hazard?
The article primarily provides information about the deployment and operational readiness of Waymo's AI robotaxis, including their adaptation to winter conditions and regulatory oversight. It does not describe any realized harm or malfunction caused by the AI system, nor does it highlight a credible imminent risk of harm. The mention of a past minor incident is contextual and not the main subject. Therefore, this article fits best as Complementary Information, as it offers supporting context about AI system deployment, regulatory environment, and safety considerations without reporting a new AI Incident or AI Hazard.

Waymo in MN: Lawmakers, drivers to speak out against proposal on autonomous vehicles

2026-03-03
FOX 9 Minneapolis-St. Paul
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous vehicles) and discusses potential risks related to their operation, but it does not describe any realized harm or incident caused by these AI systems. The concerns are about plausible future harms (to jobs and public safety) if the vehicles operate without regulation. Therefore, this qualifies as an AI Hazard because it highlights credible risks that could plausibly lead to harm, but no harm has yet occurred. It is not Complementary Information because the main focus is on the potential risks and legislative debate, not on updates or responses to a past incident.

Remote agent to blame for Waymo robotaxi illegally passing an Austin school bus

2026-03-03
Austin American-Statesman
Why's our monitor labelling this an incident or hazard?
The article explicitly describes Waymo's autonomous vehicles, which are AI systems, illegally passing stopped school buses with flashing stop arms, a clear violation of traffic laws intended to protect children. The AI system's decision-making, influenced by remote assistance agents, directly led to these illegal maneuvers, posing safety risks to children and the community. The reported incident of a Waymo vehicle striking a child, causing injury, further confirms direct harm caused by the AI system. These facts meet the criteria for an AI Incident, as the AI system's use and malfunction have directly led to harm to persons and communities and violations of legal obligations. The involvement of human error in remote assistance does not negate the AI system's role in the harm. Hence, the event is classified as an AI Incident.

NTSB publishes preliminary report on Waymo safety investigations

2026-03-03
Spectrum News Bay News 9
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's automated driving system) whose malfunction or operational failure has directly led to safety violations involving school buses unloading children, which poses a risk of injury or harm to persons (students). The NTSB investigation and the voluntary software recall indicate that the AI system's use has caused or contributed to these incidents. Therefore, this qualifies as an AI Incident due to direct harm or risk to health and safety resulting from the AI system's use.

Waymo is tweaking its self-driving car tech to navigate in heavy snowfall

2026-03-03
PhillyVoice
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's self-driving cars) and its development and use, but there is no reported harm or incident resulting from the AI system's operation. The article discusses improvements and preparations for future fully autonomous deployment, which is a normal part of AI system development and deployment. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI system development and deployment without describing any incident or hazard.

Waymo robotaxi fails to stop for school bus in Austin Texas

2026-03-04
The Robot Report
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous driving system) whose operation and safety depend on both AI and a remote human operator. The AI system stopped as required but relied on the human operator's incorrect decision to proceed, which led to an illegal and unsafe action. This directly violates traffic laws designed to protect children and creates a safety hazard, fulfilling the criteria for harm to people. The incident has already happened and is under investigation, confirming realized harm or at least a significant safety violation. Therefore, it qualifies as an AI Incident rather than a hazard or complementary information.

Waymo Self-Driving Vehicles Under Scrutiny for School Bus Incidents

2026-03-03
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (Waymo's self-driving vehicles) whose malfunction or improper behavior has directly led to harm or risk of harm to people, including a collision with a child and multiple illegal passes of stopped school buses. This constitutes injury or harm to persons, fulfilling the criteria for an AI Incident. The investigations and recalls further confirm the AI system's role in causing these harms.

As Waymo seeks to start service in Minnesota, some state lawmakers are pushing back

2026-03-03
https://www.keyc.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Waymo's autonomous vehicles) and discusses the potential for harm related to their deployment, such as safety risks and labor market impacts. However, since the vehicles are still in testing and the service has not yet launched, no direct or indirect harm has materialized. The lawmakers' push for regulation and the debate over oversight indicate plausible future risks but do not describe an actual incident. Therefore, this event qualifies as an AI Hazard because it concerns plausible future harm from the use of AI systems in autonomous vehicles, not an AI Incident or Complementary Information.

Waymo autonomous vehicle obstructs ambulance amid Austin mass shooting incident - STL.News

2026-03-04
STL.News
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Waymo's autonomous vehicle) whose operational behavior directly caused a delay in emergency medical services during a mass casualty event. This delay is a direct harm to human health and safety, fulfilling the criteria for an AI Incident. The AI system's failure to appropriately respond to emergency vehicle signals or sirens led to a tangible negative outcome. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.

Why Waymo cars stalled during San Francisco's December blackout

2026-03-03
sfstandard.com
Why's our monitor labelling this an incident or hazard?
Waymo cars are AI systems (autonomous vehicles) that rely on AI for navigation and operation. The blackout caused these AI systems to malfunction or stall, leading to traffic disruption, which is a form of harm to community infrastructure and public order. The company's acknowledgment of the issue and efforts to improve alerts and staffing confirm the AI system's role in the incident. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction during the blackout.

Waymos reportedly continuing to pass stopped school buses after earlier recall over same issue

2026-03-03
Sherwood News
Why's our monitor labelling this an incident or hazard?
The Waymo vehicles are autonomous AI systems responsible for driving decisions. The incident shows the AI system incorrectly interpreting or acting upon the situation, leading to illegal passing of stopped school buses with stop arms extended, which is a safety violation that could cause injury. The involvement of a remote assistance agent indicates reliance on human input, but the AI system ultimately resumed travel improperly. Because the AI system's malfunction directly led to a traffic safety violation with potential for harm, this qualifies as an AI Incident even though no injury has yet occurred: the risk is immediate and the behavior has already taken place.

Waymo car blocked ambulance trying to get to scene of Austin mass shooting

2026-03-03
The Independent
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous vehicle) whose use directly led to a disruption in emergency response operations, which qualifies as harm under the disruption of critical infrastructure or emergency services. Although the delay did not result in reported injury or death, the interference with emergency services is a significant harm. Therefore, this qualifies as an AI Incident due to the direct involvement of the AI system causing harm through malfunction or operational failure in a critical context.

Waymo Self-Driving Car Impedes Emergency Response in Austin Mass Shooting: An Urgent Analysis - Internewscast Journal

2026-03-04
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The Waymo self-driving car is an AI system whose autonomous navigation and decision-making led to it stopping in a manner that impeded ambulance access during a mass shooting emergency. This is a direct malfunction or failure in the AI system's operation affecting emergency response, which is critical infrastructure. The incident caused a temporary obstruction, which is a disruption of critical infrastructure management, fulfilling the criteria for an AI Incident. The presence of the AI system is explicit, the harm is disruption to emergency services, and the event is not merely a potential hazard or complementary information but a realized incident.

Future of Waymo, self-driving cars fuels debate at Minnesota Capitol

2026-03-05
CBS News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) and their potential impacts, but no direct or indirect harm has occurred yet. The article centers on legislative and societal responses to the anticipated deployment of AI systems, aiming to establish regulations and safety guidelines. Since no harm or plausible imminent harm is reported, and the main focus is on governance and public discourse, this qualifies as Complementary Information rather than an AI Incident or AI Hazard.

Waymo pushes for Maryland leaders to pass bills regulating autonomous vehicles

2026-03-04
CBS News
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (autonomous vehicle technology) and discusses its use and potential malfunction (blocking emergency vehicles). Although a past incident is mentioned, no new harm is reported in Maryland. The article's main focus is on advocating for regulation to prevent possible future harms and improve safety. This fits the definition of an AI Hazard, as the autonomous vehicle AI system's use could plausibly lead to harm (e.g., blocking emergency access), but no direct or indirect harm is currently reported in the described event.

NTSB says Waymo robotaxis illegally passed stopped school buses in new incidents

2026-03-04
Reuters
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (Waymo's self-driving vehicles) whose malfunction or software issues have led to illegal traffic behavior (passing stopped school buses) and a collision with a child, which is harm to persons and a violation of traffic laws. The AI system's use and malfunction are directly linked to these harms, fulfilling the criteria for an AI Incident. The investigations and recalls further confirm the seriousness and realized harm of these incidents.

Waymo Faces Mounting Scrutiny as NTSB Examines School Bus Incident

2026-03-04
CNET
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of Waymo's automated driving system (ADS), an AI system, in multiple safety-related events. The illegal passing of stopped school buses is a direct violation of traffic laws designed to protect children, representing a risk of harm to groups of people. The ambulance blockage incident directly impeded emergency medical services during a critical event, which could have caused harm to injured individuals. The stalling during a power outage shows a malfunction of the AI system affecting vehicle operation. These events collectively demonstrate that the AI system's use and malfunction have directly or indirectly led to potential or actual harm, meeting the criteria for an AI Incident.

Shooting chaos puts spotlight on Waymo's remote human operators

2026-03-04
Axios
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving system) whose use directly led to a disruption of emergency response (critical infrastructure operation) by blocking an ambulance during a mass casualty event. The AI system's malfunction or limitation in handling the emergency situation caused a delay in ambulance access, which constitutes indirect harm to public health and safety. Therefore, this qualifies as an AI Incident due to the realized harm and operational failure linked to the AI system's use.

Waymo Blocks Ambulance From Reaching Mass Shooting - Jalopnik

2026-03-04
Jalopnik
Why's our monitor labelling this an incident or hazard?
The Waymo autonomous vehicle is an AI system controlling the car's navigation. Its failure to move or yield to an emergency vehicle blocked the ambulance's path to a mass shooting scene where multiple people were injured and killed. This directly impacted emergency response, posing a risk of injury or harm to people relying on timely medical aid. The AI system's malfunction or inappropriate behavior in this critical situation meets the criteria for an AI Incident due to direct harm or risk to health and safety.

NTSB: Human error prompted Waymo to pass stopped school bus in Austin

2026-03-04
KXAN.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Waymo's autonomous vehicle) whose operation depends on both AI and human remote assistance. The incorrect human response to the AI system's query led to the vehicle illegally passing a stopped school bus, a violation of traffic laws designed to protect children. This constitutes direct harm or risk of harm to persons (children) and communities, fulfilling the criteria for an AI Incident. The AI system's use, combined with the human error in remote assistance, directly led to the harm. Therefore, this event is classified as an AI Incident.

Waymo preps self-driving vehicles to handle winter weather in Philadelphia

2026-03-04
NBC10 Philadelphia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Waymo's self-driving cars) in their development and testing phase to handle challenging weather conditions. However, there is no indication of any harm, malfunction, or violation caused by these AI systems. The article primarily provides an update on the company's efforts and progress, which fits the definition of Complementary Information as it enhances understanding of AI system development and deployment without reporting an incident or hazard.

Minnesota lawmakers look at regulating autonomous vehicles like Waymo

2026-03-04
INFORUM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) and their use, but no direct or indirect harm has occurred yet. The article primarily covers legislative and societal responses to the potential risks and benefits of AVs, which aligns with the definition of Complementary Information. It does not describe an AI Incident (harm realized) or an AI Hazard (plausible future harm alone) as the main focus is on regulation and debate rather than a specific harmful event or credible imminent risk. Therefore, the classification is Complementary Information.

Waymo Still Has A Problem Stopping For School Buses

2026-03-05
CleanTechnica
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Waymo's autonomous driving software—and describes its malfunction in failing to stop for school buses, which is a critical safety requirement. This malfunction has been documented multiple times and is under investigation by a national safety authority, indicating recognized harm or risk to public safety. The failure to stop for school buses can directly lead to injury or harm to children and other road users, fulfilling the criteria for an AI Incident. The article also references similar issues with Tesla's Full Self Driving, reinforcing the systemic nature of the problem. Therefore, this event is best classified as an AI Incident due to the direct or indirect harm caused by the AI system's malfunction.

Waymo's driverless cars spark safety worries across the country

2026-03-04
WSPA 7News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the involvement of Waymo's AI-driven autonomous vehicles in events that caused or could have caused harm: blocking emergency vehicles, illegal traffic behavior, and a collision with a child. These are direct consequences of the AI system's operation and thus meet the criteria for AI Incidents. The presence of the AI system is clear, the harms are realized or near-realized, and the incidents stem from the AI system's use and performance. Although some harms were mitigated or minor, the incidents still represent safety-related harms linked to AI system use.

Waymo wants its self-driving cars in Baltimore, Annapolis lawmakers need to give approval first

2026-03-04
WMAR
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's self-driving cars) whose use is currently in preparation but not yet fully operational in Baltimore. There is no report of any harm or malfunction caused by the AI system. The article highlights potential safety concerns and job impacts, which are plausible future risks if the system is deployed. Therefore, this constitutes an AI Hazard, as the development and intended use of the AI system could plausibly lead to harm, but no harm has yet materialized.

Lots of questions, but little pushback at Senate hearing on bill to allow driverless cars

2026-03-05
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) and their potential use, but no harm has yet occurred. The discussion is about legislation to allow their operation under certain standards, with arguments for and against. Since no incident or harm has materialized, but the deployment of AI systems could plausibly lead to future harms or benefits, this qualifies as an AI Hazard. However, the article mainly reports on the legislative hearing and perspectives rather than a direct risk or warning of imminent harm. Given the focus on potential future impacts and regulatory framework, the classification is AI Hazard rather than AI Incident or Complementary Information.

Waymo seeks state approval to operate driverless cars in Baltimore

2026-03-05
WBAL
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving technology) whose use is currently proposed but not yet fully authorized in Maryland. The article does not describe any realized harm or incidents caused by the AI system but discusses the potential risks and societal concerns related to its deployment. Therefore, it fits the definition of an AI Hazard, as the development and intended use of the AI system could plausibly lead to harm in the future, such as traffic accidents or job displacement. There is no indication of an actual incident or complementary information about past events, so AI Hazard is the appropriate classification.

Waymo Blocks Responders In Austin Mass Shooting | Silicon UK

2026-03-04
Silicon UK
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system (autonomous vehicle) whose malfunction or unpredictable behavior directly caused harm by blocking emergency responders during a mass shooting, which could delay critical medical aid and law enforcement response, constituting harm to health and safety. The repeated incidents of passing stopped school buses also represent safety violations that could harm children, triggering federal investigations and recalls. These facts meet the criteria for an AI Incident because the AI system's use and malfunction have directly led to harm or risk of harm in real situations.

Federal Scrutiny and Local Resistance: Waymo Navigates a Turbulent Expansion

2026-03-05
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (Waymo Driver software) in multiple incidents causing collisions and erratic driving behavior. These incidents have led to federal investigations and software recalls, indicating malfunction or failure of the AI system. The harms include property damage and potential risk to public safety, fulfilling the criteria for harm under the AI Incident definition. The presence of federal investigations and regulatory responses further supports the classification as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but details actual events where the AI system's malfunction or use has led to harm or near-harm situations.

Waymo Autonomous Car Blocks Ambulance Crew Responding to Deadly Austin Mass Shooting

2026-03-04
WOWO 1190 AM | 107.5 FM
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system (autonomous driving). Its action of stopping and blocking the ambulance was a malfunction or misjudgment in its operation during a critical emergency. Although no injury or harmful delay was reported, the AI system's behavior could plausibly impede emergency response in similar future situations. The event does not describe realized harm caused by the AI system but does show a credible risk of harm, fitting the definition of an AI Hazard rather than an AI Incident. The company's and EMS's responses further support this classification as a learning opportunity and risk mitigation rather than a realized incident.

NTSB says Waymo robotaxis illegally passed stopped school buses in new incidents

2026-03-04
District Administration
Why's our monitor labelling this an incident or hazard?
Waymo's robotaxi is an AI system operating autonomously. The incident describes the AI system's malfunction or misjudgment in passing stopped school buses with active signals, which is a direct safety hazard and could cause injury or harm to persons (children boarding the bus). The AI system's decision to pass the bus after receiving incorrect remote operator input shows a failure in the AI-human interaction and system operation. This constitutes an AI Incident because the AI system's use has directly led to a safety hazard with potential or actual harm to people, and the NTSB is investigating these events as incidents of concern.

Labor advocates try to put the brakes on unregulated, self-driving Waymo cars

2026-03-04
Route Fifty
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, namely Waymo's autonomous vehicles, which use AI for self-driving capabilities. However, no actual harm or incident has occurred; the cars are currently operated by human drivers for mapping and data collection. The concerns raised by labor advocates and lawmakers relate to potential future harms such as job displacement and safety risks if self-driving cars are allowed to operate without regulation. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to AI incidents if unregulated deployment occurs, but no direct or indirect harm has yet materialized.

Bill to smooth arrival of Waymo, other self-driving vehicles hits early Capitol roadblock

2026-03-05
Star Tribune
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous vehicles) and their use, but it does not describe any direct or indirect harm resulting from their deployment or malfunction. The legal uncertainty and legislative proposals represent a potential future risk environment but do not constitute a plausible immediate hazard or incident. Therefore, this is best classified as Complementary Information, providing context on governance and societal responses to AI deployment rather than reporting an AI Incident or Hazard.

Rogue Waymo causing traffic jams in Culver City

2026-03-05
FOX 11 Los Angeles
Why's our monitor labelling this an incident or hazard?
The autonomous vehicles are AI systems, as they perform complex real-time navigation and decision-making. Their repeated stalling, which caused gridlock, is a malfunction of the AI system that disrupted traffic flow, a form of harm to critical infrastructure management. The event describes realized harm (traffic jams and safety hazards) caused by the AI system's malfunction, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Former Waymo test driver says self-driving vehicles are dangerous, should not be tested in Nashville

2026-03-05
WSMV Nashville
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Waymo's autonomous driving technology—whose malfunction and unsafe behavior have directly led to near collisions and dangerous situations on public roads. The former test driver reports multiple incidents where the AI system's decisions caused or nearly caused harm, including driving the wrong way in a lane and sudden acceleration. These are direct safety hazards linked to the AI system's operation and malfunction. The presence of realized near-harm and the direct role of the AI system in these events meet the criteria for an AI Incident under the OECD framework.

Driverless car company shows off Jaguar in Annapolis amid legislation to operate in Baltimore

2026-03-05
WBAL
Why's our monitor labelling this an incident or hazard?
The Waymo driverless Jaguar is an AI system (autonomous vehicle AI) whose development and use are central to the event. While the company emphasizes safety and no harm has been reported, the union's opposition and concerns about safety and job displacement indicate potential future harms. Since no actual harm or incident has occurred yet, but plausible risks exist, this event qualifies as an AI Hazard rather than an AI Incident. It is more than just general AI news or complementary information because it focuses on the potential for harm from the AI system's deployment and legislative approval process.

What will robotaxis mean for those who drive for a living?

2026-03-05
Digital Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous vehicles (robotaxis) and discusses their use and malfunction (accidents, navigation errors). While it mentions harms such as accidents and privacy concerns, these are described as ongoing or past issues rather than a new, specific incident causing harm. It also discusses potential future legal and regulatory developments and economic impacts on drivers. Since the article's main focus is on summarizing existing data, concerns, and broader implications rather than reporting a new AI Incident or AI Hazard, it fits the definition of Complementary Information.

Lots of questions, but little pushback on Maryland bill to allow driverless cars

2026-03-05
Route Fifty
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) and their potential use, but no actual harm or incident has occurred yet. The discussion centers on the plausible future impacts of deploying driverless cars, including safety improvements and job market transitions. Since the article is about legislative consideration and potential future use without any realized harm or malfunction, it fits the definition of an AI Hazard, as the deployment of autonomous vehicles could plausibly lead to incidents or harms in the future, but none are reported as having occurred at this time.

Waymo Gets Shy As Scaling Creates More Incidents; Plus Key New Details

2026-03-06
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly describes several incidents where Waymo's AI system in autonomous vehicles caused or contributed to harm or safety risks: hitting a child pedestrian (minor injuries), vehicles stalling and blocking traffic during a power outage, and multiple illegal passes of stopped school buses. These are direct harms or violations of traffic laws linked to the AI system's operation or remote operator errors. The involvement of regulatory investigations (NTSB, NHTSA) and the discussion of transparency and safety improvements further confirm the AI system's role in these harms. Thus, the event meets the criteria for an AI Incident as the AI system's use and malfunction have directly or indirectly led to harm to persons and disruption of public safety.

Former Waymo test driver says self-driving vehicles are dangerous, should not be tested in Nashville

2026-03-06
https://www.wvlt.tv
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—Waymo's autonomous driving AI controlling vehicles on public roads. The reported near collisions and dangerous maneuvers are direct consequences of the AI system's malfunction or unsafe behavior during use. The harm is to the health and safety of people (potential injury or death), fulfilling the criteria for an AI Incident. The driver's testimony and documented incidents demonstrate realized harm or imminent risk, not just potential future harm. Waymo's response does not negate the occurrence of these incidents. Hence, this is an AI Incident rather than a hazard or complementary information.

Viral Austin video renews concerns over self‑driving cars in emergencies

2026-03-07
CBS News
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Waymo's autonomous vehicle) whose use led to a temporary blockage of emergency responders, which is a direct interference with critical infrastructure operation (emergency medical response). While no actual harm occurred this time, the event demonstrates a plausible risk of harm in future emergencies if the AI system fails to yield properly. The National Transportation Safety Board's formal investigation further supports the classification as an AI Hazard. Since no actual injury or harm has been reported, and the focus is on potential safety risks and concerns, the event does not meet the threshold for an AI Incident but clearly represents an AI Hazard.

Viral video shows Waymo robotaxi blocking ambulance during Austin shooting

2026-03-07
CBS News
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. Its failure to promptly clear the path for an ambulance during a mass shooting incident directly impeded emergency services, potentially causing harm to victims needing urgent care. This constitutes an AI Incident because the AI system's malfunction or inadequate response directly led to disruption in critical infrastructure operation (emergency medical response) and potential harm to people.

Waymo expands robotaxi push in Pittsburgh

2026-03-06
Axios
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system—Waymo's autonomous driving technology. However, it only describes the expansion of testing and regulatory certification, with no mention of any harm or malfunction caused by the AI system. Since no harm has occurred and the fully driverless service is not yet operational, this does not qualify as an AI Incident. It also does not present a clear and credible risk of harm at this stage, as the company is still in the testing and approval phase and plans to notify the public before launching. Therefore, it is not an AI Hazard either. The article is best classified as Complementary Information because it provides context and updates on the deployment and regulatory progress of an AI system without reporting any harm or imminent risk.

Waymo blocks ambulance after mass shooting, raising safety concerns

2026-03-06
The Tennessean
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Waymo autonomous vehicles, which use AI systems for navigation and decision-making. The incidents described involve the AI system's malfunction or failure to act appropriately, directly leading to obstruction of emergency vehicles and unsafe driving behavior. These outcomes constitute harm to health and safety (a), and disruption of critical infrastructure management (b). Since harm has occurred or is ongoing, and the AI system's malfunction is a contributing factor, this is classified as an AI Incident rather than a hazard or complementary information.

As Waymo expands to Minnesota, legislators mull regulations for self-driving cars

2026-03-06
Bring Me The News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) and their potential impact, but it describes ongoing legislative and regulatory discussions rather than any realized harm or malfunction. Since no AI Incident has occurred, but there is a plausible risk of harm if autonomous vehicles operate without sufficient regulation, this qualifies as an AI Hazard. The article centers on the potential for future harm and the need for safeguards, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

What would robotaxis mean for the Chicago region?

2026-03-06
Daily Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (autonomous vehicles by Waymo) and discusses their development, testing, and potential deployment. However, it does not report any new harm or direct incident caused by these AI systems. The past incidents mentioned serve as background context rather than current events. The focus is on regulatory pathways, safety debates, and public opinion, which aligns with the definition of Complementary Information. There is no indication of an AI Incident (harm realized) or AI Hazard (plausible future harm) as the deployment is still in planning and testing phases with no imminent or specific risk described.

The impact of self-driving cars on the rideshare industry

2026-03-07
KGUN
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is clear, as autonomous vehicles operated by companies like Waymo and Zoox are described. However, the article does not report any realized harm such as injury, rights violations, or property damage caused by these AI systems. The economic impact on drivers is noted but is a market effect rather than a direct or indirect harm caused by AI malfunction or misuse. There is no indication of plausible future harm beyond economic competition. The article mainly provides background and context on AI's growing role in ridesharing and its social implications, fitting the definition of Complementary Information rather than an Incident or Hazard.

Minnesota lawmakers weigh self-driving car regulations

2026-03-06
St. Cloud Times
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous vehicles operated by Waymo, which are currently being tested and mapped in Minnesota. However, no actual harm or incident has occurred yet; the concerns are about plausible future harms such as safety risks and job displacement. The legislative efforts and advocacy represent responses to these potential risks. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to harm if unregulated deployment proceeds, but no direct or indirect harm has yet materialized.

New details released in Waymo vehicle crash with 9-year-old near Santa Monica school

2026-03-07
KTLA 5
Why's our monitor labelling this an incident or hazard?
The event describes a collision caused by a Level 4 automated driving system (Waymo's AI system) that directly resulted in injury to a child, which is a harm to health. The AI system was in control of the vehicle at the time, and the incident is under federal investigation. The harm is realized, not just potential, and the AI system's use is central to the event. Hence, it meets the definition of an AI Incident.

Emergency Responders Say They're Now Unpaid "Roadside Assistance" for Confused Waymos

2026-03-07
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Waymo's autonomous robotaxis) whose malfunction during a power outage and other incidents caused significant traffic disruption and impeded emergency responders, including blocking an ambulance. This disruption to emergency operations constitutes harm to critical infrastructure management and public safety, fitting the definition of an AI Incident. The AI system's malfunction and the resulting need for emergency responders to intervene directly caused these harms.

Waymo Advances Autonomous Vehicle Safety with Emergency Protocols

2026-03-08
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (autonomous vehicles with AI-based perception, decision-making, and communication). However, it does not describe any actual incident or harm caused by these systems. Instead, it focuses on safety protocols, emergency response integration, and operational procedures designed to prevent harm. There is no indication of a near-miss or credible risk of harm that is not already addressed. The content aligns with providing supporting information about AI system deployment and safety governance, which fits the definition of Complementary Information rather than an Incident or Hazard.

Emergency Responders Criticize Unpaid Role Assisting Confused Waymo Vehicles

2026-03-08
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Waymo's autonomous vehicles) whose malfunction or operational issues have directly led to harm or risk to public safety, including traffic congestion, obstruction of emergency vehicles, and dangerous driving behavior. These impacts fall under harm to health and safety of people and disruption of critical infrastructure management (traffic and emergency response). Therefore, this qualifies as an AI Incident due to realized harms caused by the AI system's malfunction or operational failures.