Amazon Zoox Recalls Robotaxis After AI System Crash in Las Vegas


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Amazon's Zoox recalled 270 self-driving vehicles following a Las Vegas incident where an AI-powered robotaxi collided with a passenger vehicle. The crash, caused by a software prediction error in the automated driving system, led to a safety review and the deployment of a critical software update.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes a self-driving vehicle fleet operated by Zoox, which uses AI systems for autonomous driving. A software glitch in the AI caused the vehicle to inaccurately predict the behavior of another vehicle, resulting in a collision. This directly led to harm (a car crash) and prompted a recall of 270 vehicles. The AI system's malfunction is a direct contributing factor to the incident, fulfilling the criteria for an AI Incident involving harm to persons or property.[AI generated]
AI principles
Safety · Robustness & digital security · Accountability · Transparency & explainability · Human wellbeing

Industries
Mobility and autonomous vehicles · Robots, sensors, and IT hardware · Consumer services

Affected stakeholders
General public · Business

Harm types
Physical (injury) · Economic/Property · Reputational · Public interest

Severity
AI incident

Business function
Monitoring and quality control · Research and development · Maintenance

AI system task
Recognition/object detection · Forecasting/prediction · Reasoning with knowledge structures/planning · Goal-driven organisation


Articles about this incident or hazard


Amazon's Zoox Recalls Self-Driving Fleet After Las Vegas Crash

2025-05-08
GamblingNews
Why's our monitor labelling this an incident or hazard?
The article describes a self-driving vehicle fleet operated by Zoox, which uses AI systems for autonomous driving. A software glitch in the AI caused the vehicle to inaccurately predict the behavior of another vehicle, resulting in a collision. This directly led to harm (a car crash) and prompted a recall of 270 vehicles. The AI system's malfunction is a direct contributing factor to the incident, fulfilling the criteria for an AI Incident involving harm to persons or property.

Amazon Self-Driving Taxi Recall Affects Hundreds of Zoox Vehicles

2025-05-07
AboutLawsuits.com
Why's our monitor labelling this an incident or hazard?
The Zoox robotaxis are fully autonomous vehicles relying on AI systems for navigation and decision-making. The software defect causing misjudgment of other vehicles' movements is a malfunction of the AI system. The recall was prompted by a real collision incident, demonstrating direct harm linked to the AI system's malfunction. Therefore, this qualifies as an AI Incident because the AI system's malfunction has directly led to harm and safety risks.

Amazon's robotaxi unit Zoox recalls vehicles after self-driving Las Vegas crash By Reuters

2025-05-06
Investing.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Zoox's autonomous driving technology) whose malfunction led to a crash, which is a direct harm or risk to physical safety. Even though no injuries occurred, the crash itself is a realized incident involving AI malfunction. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system causing harm or risk to health and safety, prompting a recall and safety review.

Zoox issues software recall following Las Vegas incident | ADAS & Autonomous Vehicle International

2025-05-08
Autonomous Vehicle International
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous vehicle software) whose malfunction directly caused a collision, resulting in harm to property. Although no injuries occurred, the incident meets the criteria for an AI Incident because the AI system's malfunction led to tangible harm (vehicle damage). The recall and software update are responses to this incident but do not change the classification. Therefore, this is an AI Incident.

Amazon's robotaxi unit Zoox recalls vehicles after self-driving Las Vegas crash

2025-05-06
Reuters
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Zoox's autonomous driving technology) whose malfunction or failure led to a crash, which is a direct harm to property and potential risk to public safety. Even though no injuries occurred, the crash itself constitutes harm to property and disruption, qualifying it as an AI Incident. The recall and software update are responses to this incident but do not change the classification of the event itself.

Amazon's Zoox Recalls 270 Robotaxis After Accident In Las Vegas: Retail's Unmoved By Stocktwits

2025-05-06
Investing.com India
Why's our monitor labelling this an incident or hazard?
The event describes a self-driving car (an AI system) that collided with another vehicle due to inaccurate predictions by its autonomous driving software. This malfunction directly led to a traffic accident, which is a form of harm to property and potential harm to persons. The recall of 270 vehicles and software updates further confirm the AI system's role in the incident. Although no injuries occurred, the collision and operational disruption meet the criteria for an AI Incident.

Amazon's Zoox robotaxi unit issues software recall after recent Las Vegas crash

2025-05-06
CNBC
Why's our monitor labelling this an incident or hazard?
The automated driving system is an AI system as it makes real-time predictions about other vehicles' movements to navigate safely. The incident involved a malfunction of this AI system, which directly led to a collision, constituting harm to property. The event is an AI Incident because the AI system's malfunction caused realized harm (collision) and risk to safety, even though no injuries occurred. The recall is a response to this incident, but the primary event is the AI system malfunction causing harm.

Amazon's robotaxi unit Zoox recalls vehicles after self-driving Las Vegas crash

2025-05-06
Aol
Why's our monitor labelling this an incident or hazard?
The event describes a crash caused by an autonomous vehicle, which is an AI system performing real-time decision-making and navigation. The crash, even without injuries, is a direct harm to property and a safety concern. The recall and software update indicate the AI system's malfunction or failure was a contributing factor. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and malfunction.

Amazon's Zoox issued a robotaxi software recall after a crash in Las Vegas

2025-05-06
The Verge
Why's our monitor labelling this an incident or hazard?
The robotaxi's AI system made an incorrect prediction about the behavior of a nearby vehicle, which directly caused the crash. This qualifies as an AI Incident because the AI system's malfunction led to harm in the form of property damage. Although no injuries occurred, the event fits the definition of an AI Incident due to the direct link between the AI system's malfunction and the crash. The recall and software update are responses to this incident but do not change the classification of the event itself.

Zoox issues software recall for all robotaxis following Las Vegas collision

2025-05-06
engadget
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Zoox's automated driving system) whose malfunction or failure contributed to a collision, a safety incident involving physical harm risk. Even though no injuries occurred, the collision itself is a harm event related to the AI system's use. The company's response with a software recall and update confirms the AI system's role in the incident. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's malfunction and the collision.

Amazon-owned Zoox issues recall following robotaxi crash | TechCrunch

2025-05-06
TechCrunch
Why's our monitor labelling this an incident or hazard?
The autonomous driving system is an AI system as it makes real-time predictions and decisions to control the vehicle. The crash was caused by the AI system's inaccurate prediction of another vehicle's behavior, which directly led to a collision and property damage. The recall and pause in testing indicate recognition of the AI system's role in the incident. Although no injuries occurred, the harm to property and the risk to safety qualify this as an AI Incident rather than a hazard or complementary information.

Amazon's Zoox recalls robotaxi for software update after Las Vegas crash

2025-05-06
Fox Business
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Zoox's autonomous driving software) whose malfunction (inaccurate prediction of another vehicle's behavior) directly led to a collision, fulfilling the criteria for an AI Incident. The harm includes physical damage to property (vehicles) and operational disruption (temporary halt of driverless operations). Although no injuries occurred, harm to property and disruption are sufficient for classification as an AI Incident. The recall and software update are responses to this incident but do not change the classification.

Amazon's Zoox Recalls 270 Robotaxis After Accident In Las Vegas: Retail's Unmoved

2025-05-06
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The autonomous vehicle's AI system malfunctioned by inaccurately predicting the behavior of another vehicle, resulting in a collision. This is a direct harm caused by the AI system's malfunction, fulfilling the criteria for an AI Incident. The recall and software update are responses to this incident but do not change the classification. No injuries occurred, but property damage and risk were present, which meets the harm criteria.

Amazon's robotaxi company Zoox recalls driverless software after crash

2025-05-06
Quartz
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a driverless robotaxi software that made real-time decisions leading to a collision. The harm includes minor damage to vehicles and operational disruption, which fits the definition of an AI Incident. The company's response to pause operations and update software further confirms the incident's materialization and the AI system's role in causing harm, even if injuries were avoided.

Amazon Zoox Recalls All 270 Robotaxis After Las Vegas Crash; Self-Driving Software Now Under Fix

2025-05-07
Tech Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—Zoox's self-driving software—that malfunctioned by inaccurately predicting the behavior of another vehicle, leading to a collision. The harm (a crash) has already occurred, and the company is responding with a software recall. This meets the criteria for an AI Incident because the AI system's malfunction directly caused harm to persons or property (the crash). The recall and fix are responses to the incident, but the primary event is the realized harm caused by the AI system's failure.

Amazon's Zoox robotaxi unit issues software recall after recent Las Vegas crash

2025-05-06
NBC New York
Why's our monitor labelling this an incident or hazard?
The Zoox robotaxi's automated driving system is an AI system responsible for real-time decision-making in vehicle navigation. The crash was caused by a defect in the AI's prediction of another vehicle's movement, directly leading to a collision. This fits the definition of an AI Incident as the AI system's malfunction directly caused harm to property. The recall and update are responses to this incident but do not change the classification.

Amazon's Zoox recalls hundreds of robotaxis to fix critical flaw

2025-05-07
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—Zoox's autonomous driving software controlling robotaxis. The recall was necessary due to a critical flaw in the software, indicating a malfunction that could have caused harm. Since autonomous vehicles operate in real-world environments and any critical software flaw can lead to injury or harm to people, this qualifies as an AI Incident under the definition of harm to health of persons resulting from AI system malfunction. The company's issuance of a software update to fix the flaw confirms the AI system's role in the incident.

Amazon's Zoox robotaxi unit issues software recall after recent Las Vegas crash

2025-05-06
NBC 6 South Florida
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the automated driving system of Zoox's robotaxis. The system malfunctioned by inaccurately predicting another vehicle's movement, leading to a collision. Although no injuries occurred, the crash caused property damage, which fits the harm criteria (d) for AI Incidents. The recall and software update are responses to the incident, but the primary event is the AI system's malfunction causing harm. Hence, this is classified as an AI Incident.

Amazon's robotaxi unit Zoox agrees to software recall

2025-05-06
iTnews
Why's our monitor labelling this an incident or hazard?
The Zoox robotaxi's AI system made an inaccurate prediction about another vehicle's behavior, resulting in a collision. This is a direct malfunction of the AI system causing harm to property. The recall and safety review confirm the AI system's role in the incident. Although no injuries occurred, the crash and the need for a recall meet the criteria for an AI Incident due to direct harm caused by the AI system's malfunction.

Amazon's robotaxi unit Zoox agrees to software recall after self-driving Las Vegas crash

2025-05-06
Colorado Springs Gazette
Why's our monitor labelling this an incident or hazard?
The Zoox robotaxi is an AI system operating autonomously. The crash was caused by the AI system's inaccurate prediction and inability to avoid collision in a specific driving scenario, constituting a malfunction. Although no injuries occurred, the collision is a harm to property and the recall acknowledges the AI system's role in the incident. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's malfunction and the crash.

Amazon Zoox Robotaxi Software Recall After Crash | Silicon UK

2025-05-06
Silicon UK
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Zoox's autonomous driving software) that was in use and directly led to a collision with another vehicle, causing minor damage. This fits the definition of an AI Incident because the AI system's malfunction or decision-making contributed to harm (property damage) and required regulatory action (recall and reporting). The incident is not merely a potential hazard or complementary information but a realized event with direct harm linked to the AI system's operation.

Zoox recalls robotaxis after Las Vegas crash, citing software fix

2025-05-08
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving software) whose malfunction directly caused a collision, a form of harm to property and potential harm to persons. The recall and software update are responses to this AI Incident. The presence of the AI system is explicit, the harm (collision) occurred, and the malfunction was identified as the cause. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Amazon's robotaxi company Zoox recalls driverless software after Las Vegas crash

2025-05-07
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (Zoox's driverless software) whose malfunction directly led to a crash, which is harm to property and potentially to people. This fits the definition of an AI Incident because the AI system's malfunction caused realized harm. The recall and software update are responses but do not change the classification of the event as an incident.

Robotaxi crash in Las Vegas led to Zoox recall, pause in autonomous vehicle testing

2025-05-07
Las Vegas Review-Journal
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the autonomous driving software in Zoox's robotaxis. The crash was directly caused by the AI system's malfunction in predicting the behavior of another vehicle, leading to a collision and property damage. Although no injuries occurred, the harm to property and the safety risk are clear. The company's response with a software recall and temporary halt in testing further confirms the AI system's role in the incident. Hence, this is an AI Incident as per the definitions provided.

Zoox recalls robotaxis after self-driving crash in Las Vegas

2025-05-07
WION
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Zoox's Automated Driving Systems software) whose malfunction (prediction failure and braking issues) directly led to a vehicle crash. Even though no injuries were reported in this particular crash, the AI system's failure caused a collision, which is a harm to property and poses a risk to health and safety. Additionally, previous incidents involving the same AI system caused injuries, reinforcing the classification as an AI Incident. The recall and ongoing investigations further confirm the AI system's role in causing harm or risk thereof.

Las Vegas crash involving Zoox autonomous vehicle sparks software update

2025-05-07
FOX5 Las Vegas
Why's our monitor labelling this an incident or hazard?
The Zoox robotaxi is an AI system performing autonomous driving. The crash, even though it did not cause injuries, is a realized harm involving property damage and safety risk, directly linked to the AI system's software malfunction in predicting vehicle behavior. The event meets the criteria for an AI Incident because the AI system's malfunction directly led to the crash. The subsequent recall and update are responses to this incident but do not change the classification of the original event.

Zoox resumes Las Vegas robotaxi testing after safety pause

2025-05-09
Las Vegas Sun
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automated driving system in a robotaxi. The software defect caused inaccurate predictions of other vehicles' movements, leading to a collision. Although no injuries occurred, there was property damage, which qualifies as harm under the AI Incident definition. The incident directly resulted from the AI system's malfunction, and the recall and update are responses to this harm. Therefore, this is classified as an AI Incident.

Zoox Self-Driving Taxis Recalled Following Las Vegas Crash

2025-05-09
IoT World Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the autonomous driving software of Zoox's self-driving taxis. The collision was caused by the AI's inaccurate prediction of another vehicle's behavior, leading to a crash and property damage. Although no injuries occurred, the property damage qualifies as harm under the AI Incident definition. The recall and suspension of testing indicate a malfunction and response to the incident. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's malfunction during use.

Amazon's Zoox to scale up robotaxi production in 2026 for US growth- FT By Investing.com

2025-05-07
Investing.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (autonomous vehicles) and their development and use. However, it only discusses scaling production and deployment plans without any mention of accidents, malfunctions, or harms caused by the AI systems. Therefore, it does not meet the criteria for an AI Incident. It also does not explicitly highlight credible risks or warnings about plausible future harm, so it does not qualify as an AI Hazard. The article is primarily informative about the company's plans and industry context, which fits the category of Complementary Information.

Amazon's Zoox to scale up robotaxi production for US expansion, FT reports

2025-05-07
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous vehicles (robotaxis) which are being scaled up for commercial use. While no harm or incident is reported, the expansion of autonomous robotaxis carries plausible risks of future harm such as accidents or disruptions. Therefore, this event is best classified as an AI Hazard due to the plausible future harm from the deployment of AI-powered robotaxis at scale.

Amazon's Zoox to scale up robotaxi production for US expansion, FT reports

2025-05-07
Reuters
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous vehicles (robotaxis) that make real-time decisions for navigation and passenger transport. While no harm or incident is reported, the scaling up of production and commercial deployment of AI-powered robotaxis could plausibly lead to future harms such as accidents or disruptions if the AI systems malfunction or are misused. Therefore, this event represents an AI Hazard due to the credible risk associated with increased deployment of autonomous AI systems in public transportation.

Amazon's robotaxi startup Zoox to scale up production for US expansion: FT report

2025-05-07
Economic Times
Why's our monitor labelling this an incident or hazard?
Zoox's robotaxis are AI systems (autonomous vehicles) whose development and use could plausibly lead to harm such as injury or disruption if malfunctions or accidents occur. The article focuses on scaling production and upcoming commercial deployment, with no mention of actual accidents or harm caused by the AI. The regulatory investigations and easing of safety requirements highlight potential risks but do not confirm any incident. Hence, this is best classified as an AI Hazard, reflecting the credible risk of future harm from the deployment of these AI systems.

Zoox recalls 270 driverless cars following crash in Las Vegas By Investing.com

2025-05-06
Investing.com
Why's our monitor labelling this an incident or hazard?
The event describes a collision involving an autonomous vehicle, which is an AI system. Although no injuries occurred, the crash itself is a harm event (property damage and potential risk to safety). The recall and software update indicate a malfunction or failure in the AI system's operation. Therefore, the AI system's malfunction directly led to harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Amazon's Zoox to scale up robotaxi production for US expansion, FT...

2025-05-07
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles with AI for navigation and decision-making). While the scaling up of robotaxi production and deployment could plausibly lead to future harms (e.g., accidents, safety issues), the article does not report any actual harm or incident. The regulatory investigations and eased rules provide context but do not constitute an incident or hazard by themselves. Therefore, this is best classified as Complementary Information, providing context on AI system deployment and regulatory responses without describing a specific AI Incident or AI Hazard.

Amazon's Zoox to scale up robotaxi production for US expansion

2025-05-07
Financial Times News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Zoox's autonomous robotaxi) involved in a collision with a passenger vehicle, which is a direct harm event related to the AI system's use. Even though no injuries occurred, the collision is a harm to property and a safety incident. The voluntary recall and software update indicate a malfunction or failure in the AI system's operation. This meets the criteria for an AI Incident as the AI system's use directly led to harm. Other parts of the article provide context but do not negate the incident classification.

Amazon's Zoox to scale up robotaxi production for US expansion

2025-05-07
The Business Times
Why's our monitor labelling this an incident or hazard?
Zoox's robotaxis are AI systems as they involve autonomous driving capabilities. While no harm or incident is reported, the scaling up and commercial rollout of autonomous vehicles inherently carries plausible risks of harm (e.g., accidents, injuries) due to AI system malfunction or misuse. The article also references regulatory actions and investigations, indicating recognized potential hazards. Since no actual harm has occurred yet, this event is best classified as an AI Hazard rather than an AI Incident.

Amazon's robotaxi startup Zoox to scale up production for US expansion: FT report

2025-05-07
The Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically autonomous vehicles (robotaxis) that use AI for navigation and operation. The article describes expansion and scaling of production without mentioning any realized harm or incidents caused by the AI system. Deploying autonomous vehicles at scale nonetheless carries plausible risks of harm, such as accidents or disruptions; since no harm or near-harm event is reported, this remains a potential future risk rather than an incident. Therefore, this qualifies as an AI Hazard due to the plausible future harm from increased autonomous vehicle operations.

Zoox shifts from prototypes to production in robotaxi rollout

2025-05-07
Proactiveinvestors NA
Why's our monitor labelling this an incident or hazard?
Zoox's robotaxis are AI systems due to their autonomous driving capabilities. The low-speed collision and subsequent recall indicate a malfunction or failure in the AI system's operation. However, since no actual harm (injury, property damage, or other significant harm) is reported, this does not meet the threshold for an AI Incident. The event shows a risk and a response to it but no realized harm, so it is best classified as Complementary Information, providing an update on the AI system's deployment and safety management.

Amazon's Zoox Increases RoboTaxi Production for Expansion in the USA

2025-05-07
るなてち
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving AI in RoboTaxis) and their development and use. However, the article focuses on production scaling and future deployment plans without any reported harm or malfunction. The future risks it raises around safety, regulation, and competition are speculative rather than clearly stated and imminent, so the threshold for an AI Hazard is not met. Since no specific plausible harm or incident is described, and the article mainly provides an update on AI system deployment and market competition, it is best classified as Complementary Information, providing context and updates on AI ecosystem developments without reporting an incident or hazard.

Amazon's Zoox to Ramp Up Robotaxi Production Ahead of U.S. Launch - EconoTimes

2025-05-07
EconoTimes
Why's our monitor labelling this an incident or hazard?
Zoox's autonomous vehicles are AI systems, as they perform complex autonomous driving tasks. The article focuses on production scale-up and planned service launches, with no mention of accidents, malfunctions, rights violations, or other harms, so no AI Incident is reported. However, the expansion and deployment of autonomous vehicles could plausibly lead to future harms such as accidents or disruptions. The article is not merely complementary information, because its main focus is the potential future impact of scaling autonomous vehicle deployment rather than responses or updates to past incidents. Hence, the classification is AI Hazard.

Zoox Pauses Testing for a Week

2025-05-07
中关村在线
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—Zoox's autonomous driving software—that malfunctioned by mispredicting another vehicle's trajectory, leading to a collision. Although no injuries occurred, the collision caused property damage and prompted a recall and testing suspension, indicating direct harm linked to the AI system's malfunction. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm (property damage) and safety risks. The recall and investigation are responses to this incident but do not change the classification.

For the Second Time This Year, Amazon's Zoox Recalls 270 Driverless Taxis

2025-05-09
chinaz.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: Zoox's autonomous driving software. The collision was caused by the AI system's incorrect prediction of another vehicle's behavior, leading to a crash. Although no injuries occurred, the incident caused property damage and posed safety risks. The recall and suspension indicate recognition of the AI system's malfunction contributing to harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction during its use.

Amazon Zoox Autonomous Vehicle Collision Prompts Recall, Testing Paused for Over a Week

2025-05-07
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—Zoox's autonomous driving software—that malfunctioned by incorrectly predicting another vehicle's behavior, leading to a collision. Although no injuries occurred, the collision caused property damage and posed safety risks. The recall and testing pause indicate the AI system's malfunction was a direct contributing factor. Therefore, this qualifies as an AI Incident due to realized harm (property damage and safety risk) caused by the AI system's malfunction during its use.

According to the Zhitong Finance APP, Amazon's (AMZN.US) self-driving startup Zoox reportedly plans to scale up production next year to accelerate the commercial rollout of its robotaxi fleet in the United States. Zoox co-founder Jesse Levinson said the company will build a new production site in the California Bay Area...

2025-05-07
证券之星
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (autonomous driving technology) in development and use. However, it only discusses plans for expansion and commercial operation, along with regulatory context, without any reported harm or malfunction. Since no actual harm has occurred, but the expansion of autonomous vehicles could plausibly lead to incidents in the future, this fits the definition of an AI Hazard rather than an Incident. It is not Complementary Information because it is not primarily about responses or updates to past incidents, nor is it unrelated as it directly concerns AI system deployment and potential risks.

Amazon's robotaxi unit Zoox recalls vehicles after self-driving Las Vegas crash

2025-05-06
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Zoox's self-driving technology) whose malfunction or failure caused a crash, a direct harm to property and potentially to persons. The recall is a response to this incident. Since harm has occurred due to the AI system's use, this qualifies as an AI Incident under the framework, as the AI system's malfunction directly led to harm.