Zoox Recalls 258 Vehicles Over AI Braking Malfunction


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Amazon's autonomous vehicle unit, Zoox, recalled 258 vehicles after an AI malfunction in its automated driving system caused unexpected hard braking. The issue, which affected software versions released before November 5, led to rear-end collisions with motorcyclists and posed further hazards, prompting an immediate software update.[AI generated]

Why's our monitor labelling this an incident or hazard?

The automated driving system is an AI system controlling vehicle behavior. The unexpected hard braking caused by the AI system led to two rear-end collisions injuring motorcyclists, which is direct harm to persons. The recall and software update are responses to this harm. Therefore, this event qualifies as an AI Incident due to the AI system's malfunction causing injury.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability

Industries
Mobility and autonomous vehicles; Robots, sensors, and IT hardware

Affected stakeholders
General public

Harm types
Physical (injury); Economic/Property; Reputational

Severity
AI incident

Business function
Monitoring and quality control; Maintenance

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


Amazon's robotaxi unit Zoox agrees recall over braking issue

2025-03-19
CNA
Why's our monitor labelling this an incident or hazard?
The automated driving system is an AI system controlling vehicle behavior. The unexpected hard braking caused by the AI system led to two rear-end collisions injuring motorcyclists, which is direct harm to persons. The recall and software update are responses to this harm. Therefore, this event qualifies as an AI Incident due to the AI system's malfunction causing injury.

Zoox Forced to Recall 258 Self-Driving Vehicles After Safety Probe

2025-03-19
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The automated driving system is an AI system as it makes real-time decisions to control the vehicle. The unexpected hard braking incidents caused collisions and injuries, which constitute direct harm to persons. The recall and software update are responses to this malfunction. Therefore, this event qualifies as an AI Incident because the AI system's malfunction directly led to injury, fulfilling the criteria for harm to health of persons.

Amazon's robotaxi unit Zoox recalls 258 vehicles over unexpected braking issue

2025-03-19
Reuters
Why's our monitor labelling this an incident or hazard?
The automated driving system (ADS) is an AI system controlling vehicle behavior. The unexpected hard braking issue represents a malfunction in the AI system that could plausibly lead to harm, such as accidents or injuries, if not addressed. Since the recall and software update are proactive measures to prevent harm, and no actual harm is reported, this event fits the definition of an AI Hazard rather than an AI Incident. The involvement of AI in the malfunction and the potential for harm justify classifying this as an AI Hazard.

Zoox recalls 258 self-driving cars over unexpected braking

2025-03-19
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The recall is due to the autonomous driving system's malfunction causing unexpected hard braking, which directly led to collisions with motorcyclists. The AI system's failure in real-world operation caused or contributed to harm, meeting the criteria for an AI Incident under injury or harm to persons. The recall and software update are responses to this harm, but the primary event is the malfunction causing harm.

Amazon's Zoox recalls 258 vehicles due to automated driving system issues By Investing.com

2025-03-19
Investing.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the automated driving system in autonomous vehicles) whose malfunction directly led to a safety hazard that could cause harm to people (sudden hard braking could lead to accidents or injuries). Since the harm is directly linked to the AI system's malfunction and has materialized as a recall, this qualifies as an AI Incident.

Zoox recalls 258 self-driving cars over braking concerns By Investing.com

2025-03-19
Investing.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the automated driving system in self-driving cars. The malfunction (unexpected hard braking) could directly lead to injury or harm to persons, which fits the definition of an AI Incident. The recall and software update are responses to this realized or potential harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Amazon's Robotaxi Unit Zoox Recalls 258 Vehicles Over Unexpected Braking Issue

2025-03-19
US News & World Report
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (automated driving system) whose malfunction (unexpected hard braking) could directly lead to harm (injury to passengers, pedestrians, or other road users). The recall and software update are responses to this malfunction. Since the article does not report actual harm occurring but addresses a malfunction that could cause harm, this qualifies as an AI Hazard rather than an AI Incident. The AI system's malfunction is the central issue, and the recall aims to mitigate plausible future harm.

Amazon's robotaxi unit Zoox recalls 258 vehicles over unexpected braking issue

2025-03-19
ThePrint
Why's our monitor labelling this an incident or hazard?
The autonomous driving system is an AI system as it performs complex real-time decision-making for vehicle control. The unexpected braking issue is a malfunction of this AI system. Although no harm or injury is reported, the malfunction could plausibly lead to an AI Incident if not addressed, as unexpected braking could cause accidents or injuries. The recall and software update are mitigation measures. Since no harm has occurred yet, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the recall and the potential risk rather than reporting an actual incident of harm.

Zoox recalls software on 258 self-driving cars over unexpected braking | TechCrunch

2025-03-19
TechCrunch
Why's our monitor labelling this an incident or hazard?
The autonomous driving system is an AI system controlling vehicle behavior. The recall is due to a malfunction (unexpected hard braking) that directly led to collisions with motorcyclists, which is harm to persons. The involvement of the AI system in these incidents is explicit and direct. The recall and software update are responses to these incidents. Hence, this event meets the criteria for an AI Incident as the AI system's malfunction has directly led to harm.

Amazon's robotaxi unit Zoox agrees recall over braking issue

2025-03-19
The Economic Times
Why's our monitor labelling this an incident or hazard?
The self-driving software is an AI system controlling vehicle behavior. Its malfunction caused unexpected hard braking, which directly led to rear-end collisions injuring motorcyclists, fulfilling the criterion of harm to persons. The recall and investigation by NHTSA confirm the AI system's role in causing harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Zoox Recalls ADS Software for Braking Issue

2025-03-19
The BRAKE Report
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Automated Driving System software) whose malfunction (unexpected hard braking) directly led to a safety risk that could cause harm (rear-end collisions). Although no actual crashes are reported, the risk of harm was real and significant enough to prompt a recall and software update. This qualifies as an AI Incident because the AI system's malfunction directly led to a safety hazard with potential for injury or harm to people, and the recall addresses a realized safety issue rather than a mere potential hazard.

Amazon's robotaxi unit Zoox agrees recall over braking issue

2025-03-19
ThePrint
Why's our monitor labelling this an incident or hazard?
The autonomous driving AI system malfunctioned by overcautiously braking in certain scenarios, causing two rear-end collisions that injured motorcyclists. The involvement of the AI system is explicit, and the harm (injury) has materialized. The recall and investigation by NHTSA further confirm the direct link between the AI system's malfunction and the harm. Hence, this event meets the criteria for an AI Incident.

Amazon's Self-Driving Nightmare: Zoox Recall Sparks Fresh Fears for the Future of Autonomy

2025-03-19
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the automated driving system in Zoox's autonomous vehicles) whose malfunction (software issue causing sudden braking) has led to a recall, indicating a direct safety risk. This qualifies as an AI Incident because the AI system's malfunction has directly led to a harm risk (sudden braking could cause accidents or injuries). The article also references past incidents involving AI-driven vehicles causing accidents, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

Zoox Addresses Safety Concerns with Software Recall of 258 Self-Driving Vehicles

2025-03-19
bbntimes.com
Why's our monitor labelling this an incident or hazard?
The autonomous driving system (ADS) is an AI system as it performs complex real-time decision-making for vehicle control. The reported incidents involved the AI system's malfunction causing sudden hard braking, which directly led to rear-end collisions and minor injuries, constituting harm to persons. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly caused harm. The recall and software update are mitigation measures but do not change the classification of the original harm event.

Over 250 Zoox robotaxis recalled due to automated driving system issues, company says

2025-03-19
ABC7 San Francisco
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (automated driving system) malfunction that has directly led to a safety concern (unexpected hard braking) in autonomous vehicles. The recall is a response to this malfunction to prevent potential harm. Since the malfunction has been identified and vehicles recalled, this qualifies as an AI Incident due to the direct link between the AI system's malfunction and potential harm to people or property.

Zoox recalls robotaxis over unexpected braking issue following US investigation

2025-03-20
Wion
Why's our monitor labelling this an incident or hazard?
The autonomous vehicle software is an AI system as it performs real-time decision-making for driving. The software flaws caused excessive braking, which directly led to motorcyclist collisions, constituting injury or harm to persons. The recall and software update are responses to this harm. Therefore, this event qualifies as an AI Incident because the AI system's malfunction directly caused harm to people.

Zoox CTO: Why Self-Driving Cars Are So Hard to Get Right | PYMNTS.com

2025-03-21
PYMNTS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (autonomous vehicle control system) that malfunctioned, causing unexpected braking leading to two accidents and minor injuries. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm to persons and property. The article also discusses the technical and operational challenges of achieving safe autonomy, reinforcing the AI system's role in the incident. Therefore, this event qualifies as an AI Incident.

Amazon's autonomous driving unit Zoox recalls 258 vehicles

2025-03-19
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The autonomous driving system is an AI system as it makes real-time decisions to control the vehicle. The malfunction (unexpected braking) directly relates to the AI system's operation and could have caused harm to vehicle occupants or other road users. The recall and software update indicate the issue was identified and mitigated. Since the malfunction could have led to injury or harm but was addressed before reported incidents, this qualifies as an AI Hazard rather than an AI Incident.

Robotaxi company Zoox recalls 258 vehicles over braking hazard

2025-03-20
auto.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The autonomous driving system (an AI system) malfunctioned, causing a safety hazard that could lead to accidents or injuries. The recall indicates the issue was serious enough to warrant corrective action. This fits the definition of an AI Incident because the AI system's malfunction has directly led to a risk of harm to people, and the recall is a response to that harm or risk. The event is not merely a potential hazard since the defect was identified and led to a recall, indicating realized or imminent harm.

仙途智能 Radar24 technical analysis: how single-vehicle-to-cloud collaboration achieves autonomous driving safety upgrades

2025-03-21
m.tech.china.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Radar24 and WiAction APP) for autonomous vehicle operation and safety. However, the article does not report any realized harm or incidents caused by these AI systems. Instead, it highlights the safety enhancements and operational benefits provided by these AI technologies. There is no indication of an AI incident or hazard occurring or imminent. The content primarily provides complementary information about AI system capabilities, deployment, and their role in improving autonomous driving safety and efficiency, which fits the definition of Complementary Information.

NPC deputy Zhu Huarong recommends accelerating and improving legislation on autonomous driving systems

2025-03-20
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on legislative and regulatory recommendations for autonomous driving systems, which are AI systems. However, it does not describe any actual harm or incident caused by AI systems, nor does it report a specific event where harm occurred or was narrowly avoided. Instead, it highlights potential risks and the need for governance to manage those risks. Therefore, it fits the category of Complementary Information as it provides context and governance-related developments regarding AI systems without reporting a new AI Incident or AI Hazard.

Amazon's autonomous driving company recalls 258 vehicles to address unexpected system braking issue

2025-03-20
环球网
Why's our monitor labelling this an incident or hazard?
The autonomous driving system is an AI system as it makes real-time decisions for vehicle control. The malfunction caused actual harm (injury to a motorcyclist) through unintended emergency braking, which directly links the AI system's use and malfunction to physical harm. Therefore, this qualifies as an AI Incident under the definition of harm to persons resulting from AI system malfunction.

US ride-hailing giant Lyft to offer driverless rides this summer; human drivers won't lose their jobs either - cnBeta.COM

2025-03-21
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving AI) and their planned use, but no harm or incident has occurred yet. The article focuses on Lyft's announcement and plans, competitive context, and potential future job impacts, without reporting any realized harm or credible immediate risk of harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides context and updates on AI deployment in ride-hailing services without describing an incident or hazard.

智通财经APP reports that Lyft (LYFT.US) says it plans to launch driverless vehicles on its platform "as early as this summer," and expects that as autonomous driving services become widespread, human drivers will gradually shift to other roles such as fleet management. Having fallen behind competitors in offering autonomous driving services, ......

2025-03-21
finance.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of AI systems (autonomous driving technology) in a commercial ride-hailing platform. However, the article does not describe any realized harm or incident caused by these AI systems. Instead, it focuses on the planned introduction, competitive context, and potential future impacts on employment. There is no indication of injury, rights violations, property harm, or other direct or indirect harms caused by the AI system at this stage. The article also does not highlight any credible imminent risk or hazard from the AI deployment. Therefore, this is a report on AI system deployment plans and industry developments without harm or plausible harm occurring or imminent, fitting the category of Complementary Information.

Amazon recalls 258 Zoox autonomous vehicles over potential unexpected hard-braking issue

2025-03-19
tech.ifeng.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the autonomous driving system (ADS) in Zoox vehicles. The malfunction (unexpected emergency braking) directly relates to the AI system's operation and could cause harm to persons or property. Since the recall is due to a safety-critical AI malfunction that could lead to injury or accidents, this qualifies as an AI Incident under the definition of harm to health or injury caused by AI system malfunction.

Amazon (AMZN.US) unit Zoox recalls 258 vehicles over automated driving system issue

2025-03-19
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the automated driving system) malfunction that caused unintended braking behavior, which could have led to injury or harm to people (pedestrians, cyclists, motorcyclists). The recall and software update indicate the issue was identified and addressed. Since no actual harm is reported, but the malfunction posed a credible risk of harm, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete malfunction and recall due to safety concerns, not just a response or update to a past incident.

Amazon's autonomous driving company recalls 258 vehicles over system defect

2025-03-19
news.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The autonomous driving system (ADS) is an AI system as it performs real-time decision-making for vehicle control. The defect causing unexpected emergency braking is a malfunction of this AI system. The recall is due to safety risks and actual incidents (e.g., motorcycle rear-end collisions) linked to this defect, indicating realized harm or at least direct risk of harm to persons. Therefore, this qualifies as an AI Incident because the AI system's malfunction has directly or indirectly led to harm or safety incidents.

Robotaxi company Zoox recalls 258 vehicles over braking hazard

2025-03-20
auto.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system—the automated driving system (ADS)—in Zoox's autonomous vehicles. The malfunction (software issue causing unexpected hard braking) directly relates to a safety hazard that could lead to injury or accidents, which is a harm to health and safety. The recall of vehicles and software update indicate the harm is realized or imminent, not just potential. Therefore, this qualifies as an AI Incident because the AI system's malfunction has directly led to a safety risk requiring corrective action.

Video | Assisted driving is no shield for distraction; don't take chances with safe driving

2025-03-21
yangtse.com
Why's our monitor labelling this an incident or hazard?
The adaptive cruise control system is an AI system that assists driving by maintaining speed and distance. The driver relied on this system and became distracted, failing to react in time to avoid a collision. This misuse of the AI system directly contributed to the accident and property damage, fulfilling the criteria for an AI Incident due to indirect harm caused by AI system use. The article explicitly links the accident to the AI system's involvement and driver behavior, confirming the incident classification. The other case involving child safety is a traffic violation but not AI-related, so it does not affect the classification.