Tesla Robotaxis Involved in Multiple Crashes in Austin Despite Safety Monitors

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Tesla's autonomous Robotaxi service in Austin, Texas, has experienced at least four reported crashes since its launch, despite having safety monitors present. The incidents, reported to the NHTSA, highlight ongoing safety concerns as Tesla expands its driverless service and plans to remove safety monitors in the future.[AI generated]

Why's our monitor labelling this an incident or hazard?

Tesla's Robotaxi service uses an AI-based autonomous driving system, explicitly mentioned as ADS. The crashes reported involve the AI system's operation leading directly to property damage, fulfilling the harm criteria. The presence of safety monitors indicates the system's malfunction or failure to prevent harm. The report compares Tesla's incident rate unfavorably to Waymo's, emphasizing the safety concerns. Hence, this is an AI Incident as the AI system's use has directly led to harm.[AI generated]
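The triage rule the monitor applies above, and throughout the article list below, can be sketched in a few lines. This is an illustrative sketch, not the monitor's actual code: an event involving an AI system is an AI Incident only when harm has already materialized, an AI Hazard when harm is plausible but not yet realized, and Complementary Information otherwise.

```python
# Illustrative sketch (assumed, not the AIM monitor's implementation) of the
# classification rule described in the rationales on this page.

def classify_event(involves_ai: bool, harm_realized: bool, harm_plausible: bool) -> str:
    """Return the monitor's label for a reported event."""
    if not involves_ai:
        return "Unrelated"
    if harm_realized:
        return "AI Incident"           # e.g. a robotaxi crash causing property damage
    if harm_plausible:
        return "AI Hazard"             # e.g. planned removal of safety monitors
    return "Complementary Information"  # e.g. service-expansion news

# The Austin crashes: AI system in use, property damage already realized.
print(classify_event(True, True, False))  # → AI Incident
```

Note that realized harm takes precedence: an article that reports both actual crashes and future plans is labelled an AI Incident, as several rationales below state explicitly.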
AI principles
Safety, Robustness & digital security, Accountability, Transparency & explainability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers, General public

Harm types
Physical (injury), Economic/Property

Severity
AI incident

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

Tesla 'Robotaxis' keep crashing despite 'safety monitors'

2025-10-29
Electrek
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi service uses an AI-based autonomous driving system, explicitly mentioned as ADS. The crashes reported involve the AI system's operation leading directly to property damage, fulfilling the harm criteria. The presence of safety monitors indicates the system's malfunction or failure to prevent harm. The report compares Tesla's incident rate unfavorably to Waymo's, emphasizing the safety concerns. Hence, this is an AI Incident as the AI system's use has directly led to harm.
Tesla comes through on huge promise for Bay Area ride-hailing service

2025-10-27
TESLARATI
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving platform) in use for ride-hailing services. However, the article does not report any realized harm or incident caused by the AI system, nor does it highlight any credible or imminent risk of harm. It mainly reports on the expansion and regulatory approval of the service, which is a development update. Therefore, this is best classified as Complementary Information, as it provides context and updates on the AI system's deployment without describing an AI Incident or AI Hazard.
Elon Musk: Tesla autonomous driving might spread faster than any tech

2025-10-29
TESLARATI
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Tesla's autonomous driving system) and discusses its deployment and expansion. However, there is no mention of any harm, malfunction, or violation caused by the AI system. The discussion is about potential rapid adoption and expansion, which could plausibly lead to future incidents or hazards, but no such harm has occurred yet. Thus, the event fits the definition of an AI Hazard, as it could plausibly lead to incidents in the future once full autonomy is enabled and widely deployed.
Tesla expands Austin robotaxi service area for fourth time

2025-10-29
Austin American-Statesman
Why's our monitor labelling this an incident or hazard?
Tesla's robotaxi service involves AI systems for autonomous driving, which fits the definition of an AI system. The expansion of the service and the removal of safety monitors relate to the use of the AI system. However, since no accidents or harms have been reported, and the article focuses on the expansion and future plans rather than any realized harm, this event represents a plausible risk of harm rather than an actual incident. Therefore, it qualifies as an AI Hazard due to the credible potential for future harm from increased autonomous vehicle deployment without safety drivers.
Waymo makes major move in battle for dominance with Tesla: 'Hundreds of thousands of rides'

2025-10-30
The Cool Down
Why's our monitor labelling this an incident or hazard?
While the article involves AI systems (autonomous vehicles) and references safety concerns, it does not describe any specific event where the AI system's development, use, or malfunction has directly or indirectly led to harm or violation of rights. The safety concerns are mentioned generally without concrete incidents or evidence of harm. The article primarily provides information about the current state and future plans of autonomous taxi services, which fits the description of Complementary Information as it enhances understanding of the AI ecosystem without reporting a new incident or hazard.
Tesla stock pressured amid tech weakness and robotaxi rollout concerns By Investing.com

2025-10-30
Investing.com
Why's our monitor labelling this an incident or hazard?
Tesla's robotaxi service relies on AI for autonomous driving, so the AI system's development and use are involved. The article highlights delays and regulatory hurdles, indicating potential future risks but no realized harm or incident. Therefore, this situation represents a plausible future risk (AI Hazard) rather than an incident or complementary information about responses or ecosystem developments.
How frequently Tesla Robotaxis and Waymo vehicles crash

2025-10-30
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: Tesla Robotaxis and Waymo vehicles operate with autonomous driving systems that use AI to navigate and control the vehicles. The crashes reported are incidents where the AI system's use has directly or indirectly led to harm (crashes), which can cause injury or property damage. Therefore, these crashes qualify as AI Incidents because the AI system's use has led to realized harm (vehicle crashes). The article provides data on actual crashes, not just potential risks, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the crash incidents themselves, not on responses or broader ecosystem context. Hence, the classification is AI Incident.
Tesla Robotaxis are crashing more than Waymo, even with human safety monitors

2025-10-30
Mashable SEA
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxis are AI systems operating autonomously to drive vehicles. The reported crashes, including one into a fixed object, are direct harms caused by the AI system's malfunction or failure to safely navigate. The presence of human safety monitors does not negate the AI system's role in causing these incidents. The crashes represent harm to property and potential risk to human safety, fulfilling the criteria for an AI Incident. The comparison with Waymo's system provides context but does not change the classification of Tesla's crashes as incidents.
Tesla's scaled-back robotaxi timeline is lagging in regulatory approval

2025-10-30
Electrek
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving AI) and their use, but no direct or indirect harm has occurred yet. The article mainly reports on regulatory delays and the company's failure to meet its own timelines, which could plausibly lead to future harm if the technology is deployed prematurely. However, since no harm or incident has materialized, and the focus is on the regulatory and developmental status, this fits best as Complementary Information. It provides context and updates on the AI system's deployment challenges without describing an AI Incident or AI Hazard.
Tesla expands Robotaxi geofence, but not the garage

2025-10-30
TESLARATI
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi service involves AI systems for autonomous driving, so AI system involvement is clear. However, the article does not describe any harm or risk of harm caused or plausibly caused by the AI system. The user frustration and lack of transparency about fleet size do not constitute harm under the definitions provided. The article mainly provides operational updates and user sentiment, which enhances understanding of the AI ecosystem but does not report an incident or hazard. Hence, the classification as Complementary Information is appropriate.
New data reveals how often Tesla's Robotaxis hit trouble

2025-10-30
ArenaEV.com
Why's our monitor labelling this an incident or hazard?
The Tesla Robotaxi system qualifies as an AI system due to its autonomous driving capabilities at a high automation level. The reported crashes are incidents where the AI system's use has directly led to harm to property (damage from collisions). Although no injuries are reported, property damage is a recognized harm under the AI Incident definition. The presence of a human safety monitor who can intervene does not negate the AI system's role in these incidents, as the crashes occurred while the AI was in control. Therefore, these events constitute AI Incidents due to realized harm caused by the AI system's operation. The article does not describe potential future harm alone, nor is it merely complementary information or unrelated news.
Tesla's Robotaxis are already crashing in Austin, data points to gaps in self-driving system

2025-10-31
TechSpot
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxis are AI systems operating at Level 4 autonomy, performing all driving functions within a defined area. The reported crashes, including collisions with fixed objects causing property damage, demonstrate that the AI system's use has directly led to harm. The presence of human safety monitors does not negate the AI's role in these incidents. The article details realized harm (property damage) caused by the AI system's malfunction or failure, meeting the criteria for an AI Incident under the OECD framework.
Tesla Accused of Covering Up Details of New Robotaxi Crash

2025-11-02
Futurism
Why's our monitor labelling this an incident or hazard?
The Tesla Robotaxi service is an AI system performing autonomous driving tasks. The crashes reported are direct consequences of the AI system's operation, causing property damage (harm to property). The presence of a human safety monitor and teleoperators indicates the AI system's role in navigation and control. The repeated crashes and redaction of information suggest ongoing issues with the AI system's safety and transparency, fulfilling the criteria for an AI Incident. The harm is realized (property damage), and the AI system's malfunction or limitations are a contributing factor. Hence, the event is classified as an AI Incident.
Tesla Robotaxi test units spotted in new region ahead of launch

2025-10-31
TESLARATI
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi program involves AI systems for autonomous driving, which are being tested and expanded. While the article mentions the use of AI systems and testing activities, it does not describe any injury, rights violations, property damage, or other harms caused or plausibly caused by these AI systems. The article is primarily informative about ongoing development and testing, without indicating any realized or imminent harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on AI system deployment and testing.
Tesla to ramp to 500 Robotaxis in Austin, 1,000 in Bay Area, by end of 2025: Musk

2025-11-01
TESLARATI
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi program clearly involves AI systems for autonomous vehicle operation. The planned expansion and removal of safety drivers indicate increased reliance on AI. However, there is no mention of any accidents, injuries, rights violations, or other harms caused by the AI system so far. The article focuses on future scaling and cautious deployment, implying potential risks but no current harm. Therefore, this event represents a plausible future risk scenario where AI use could lead to harm if issues arise, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.
Tesla job listings hint at 24/7 Robotaxi operations in several states

2025-11-03
TESLARATI
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi service involves AI systems for autonomous driving, which is clearly an AI system. The job listings and CEO statements indicate preparation for a large-scale rollout, which could plausibly lead to AI hazards in the future due to the nature of autonomous vehicles operating in public spaces. However, since no harm, malfunction, or credible near-miss event is reported, and the article focuses on company plans and scaling efforts, this is best classified as Complementary Information. It provides context and updates on AI deployment without describing an AI Incident or AI Hazard at this time.
Tesla Robotaxi Crashes in Austin: Fourth Incident, Redacted NHTSA Reports

2025-11-02
WebProNews
Why's our monitor labelling this an incident or hazard?
The Tesla Robotaxi operates using AI-based autonomous driving technology (FSD software), which is explicitly mentioned. The crashes have resulted in property damage and at least one injury, fulfilling the harm criteria. The AI system's failure to safely navigate or detect obstacles is a direct cause of these incidents. The redaction of information by Tesla, while concerning, is secondary to the fact that harm has occurred due to the AI system's malfunction or limitations. Hence, this is an AI Incident rather than a hazard or complementary information.
Tesla's Robotaxi Fleet Shows Higher Crash Rate Than Waymo Despite Human Oversight

2025-11-03
Technology Org
Why's our monitor labelling this an incident or hazard?
Tesla's robotaxis are AI systems used for autonomous driving. The crashes reported are incidents where the AI system's use has directly led to harm (collisions), fulfilling the criteria for an AI Incident. The presence of human safety monitors does not negate the AI system's role in causing these crashes. The comparison with Waymo highlights the relative safety performance but does not change the classification. Therefore, this event qualifies as an AI Incident due to realized harm from the AI system's malfunction or use.
Tesla to add more than 1,000 cars to its Robotaxi fleet

2025-11-03
The Driven
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi service clearly involves AI systems (full self-driving capable vehicles). The article discusses the expansion and operational progress of these AI systems but does not describe any harm or incidents caused by them, nor does it indicate a credible risk of harm. The focus is on deployment scale and future plans, which fits the definition of Complementary Information as it enhances understanding of AI developments and ecosystem evolution without reporting new incidents or hazards.
Can Tesla have 1,500 robotaxis by year's end? By Investing.com

2025-11-08
Investing.com Español
Why's our monitor labelling this an incident or hazard?
The article primarily focuses on Tesla's future plans and projections for deploying autonomous robotaxis, which are AI systems. While it acknowledges risks and regulatory scrutiny, it does not describe any actual harm or incident caused by the AI systems. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual and strategic information about AI deployment and market expectations, which fits the definition of Complementary Information as it enhances understanding of the AI ecosystem and ongoing developments without reporting a specific harm or plausible immediate harm event.
Tesla has a secret base where it trains its own army of robots, driving people to extreme exhaustion as they repeat the same gestures over and over: "It's like being a lab rat"

2025-11-06
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Optimus humanoid robots trained via data collected from human workers). The event stems from the use and development of the AI system. However, no actual harm (physical injury, rights violation, property damage, or community harm) caused by the AI system is reported. The harsh working conditions relate to human labor practices rather than AI system malfunction or misuse. There is no indication that the AI system's development or use has directly or indirectly led to an AI Incident or that it plausibly could lead to harm imminently (AI Hazard). The main focus is on the data collection process and the challenges in training the AI, which is informative and contextual. Hence, the classification is Complementary Information.
From a network of autonomous taxis to an 'army' of robots: Elon Musk's technological challenges to collect Tesla's 'superbonus'

2025-11-08
El Español
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems (Tesla's FSD autonomous driving software, robotaxis, and humanoid robots with advanced AI). It reports actual incidents where robotaxis have violated traffic laws and are under investigation, indicating realized harm or risk to public safety (harm to persons and communities). The challenges in AI system development and deployment, including regulatory and safety issues, are also detailed. These factors meet the criteria for an AI Incident because the AI system's use and malfunction have directly or indirectly led to harm or risk of harm. Although future hazards are discussed, the presence of actual incidents takes precedence in classification.
Elon Musk predicts the end of prisons: "In the future, criminals will be pursued by my robots"

2025-11-07
El Confidencial
Why's our monitor labelling this an incident or hazard?
The article describes speculative future uses of AI and robotics, including continuous monitoring of criminals by AI robots and fully autonomous vehicles, but does not report any actual incidents or harms caused by these systems. The statements are visionary and promotional, lacking evidence of realized harm or direct involvement of AI systems in harmful events. Therefore, the event is best classified as an AI Hazard due to the plausible future risks associated with such AI applications, but not an AI Incident or Complementary Information since no harm or governance response is reported.
Neither prison nor freedom: Elon Musk proposes releasing criminals and giving them an Optimus robot to watch them 24/7

2025-11-07
La Razón
Why's our monitor labelling this an incident or hazard?
The event involves the potential use of AI systems (Optimus robots) for continuous surveillance and control of individuals, which could plausibly lead to violations of human rights and harm to personal freedom. Since the robots are not yet capable of performing these tasks and no harm has materialized, this constitutes a plausible future risk rather than an actual incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Will Tesla be able to have 1,500 robotaxis by year's end? By Investing.com

2025-11-08
Investing.com México
Why's our monitor labelling this an incident or hazard?
Tesla's robotaxis are AI systems (autonomous vehicles) whose deployment is planned and expanding. The article highlights potential regulatory and operational challenges but does not describe any actual harm or incidents resulting from their use. The mention of risks related to demand, execution, and regulatory scrutiny implies potential future issues but no current incident. Hence, this qualifies as an AI Hazard, reflecting plausible future harm from the deployment of AI systems rather than an AI Incident or Complementary Information.
Elon Musk warns that in the future "no one will go to prison"; his Optimus robots will accompany criminals

2025-11-09
infobae
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Optimus humanoid robots) and its proposed future use in criminal justice. However, the use is hypothetical and not yet implemented, so no direct or indirect harm has occurred. The article outlines a potential future application that could plausibly lead to harm or benefits but does not describe any realized harm or incident. Therefore, it fits the definition of an AI Hazard, as the development and intended use of AI-powered robots for constant surveillance of criminals could plausibly lead to significant societal and rights-related harms in the future if implemented improperly. It is not Complementary Information because the main focus is not on responses or updates to an existing incident, nor is it unrelated since it clearly involves AI systems and their potential impact.
Tesla responds to Waymo CEO's call, shares self-driving safety data

2025-11-15
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article describes Tesla releasing safety performance data about its AI-based driver-assistance system. There is no mention of any actual harm, accident, or malfunction caused by the AI system, nor any indication of plausible future harm beyond the general risks inherent in driving. The event is primarily about sharing information and transparency, which fits the definition of Complementary Information rather than an Incident or Hazard.
Tesla releases safety report after Waymo co-CEO's comments - Cryptopolitan

2025-11-15
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving software is an AI system involved in advanced driver-assistance. The article centers on Tesla releasing safety data in response to calls for transparency, comparing collision rates with national averages and Waymo's data. There is no mention of any actual harm caused by the AI system, nor any imminent risk or hazard described. The event is primarily about providing information and responding to criticism, which fits the definition of Complementary Information as it enhances understanding of AI system impacts and governance without reporting new harm or risk.
Tesla's Safety Showdown: New Data Drop Fuels Autonomous Driving Debate

2025-11-15
WebProNews
Why's our monitor labelling this an incident or hazard?
Tesla's autonomous driving system is an AI system explicitly mentioned as being in use. The article details actual crashes and traffic violations involving Tesla vehicles equipped with FSD software, which constitute harm to persons and public safety. The ongoing NHTSA investigation and multiple reports of incidents confirm that harm has occurred and is linked to the AI system's use and possible malfunction. The article does not merely discuss potential risks or future hazards but reports on realized harms and regulatory responses. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
Tesla releases detailed safety report after Waymo co-CEO called for more data - RocketNews

2025-11-14
RocketNews
Why's our monitor labelling this an incident or hazard?
The article centers on Tesla's publication of safety performance data for its AI-driven driver-assistance system and the call for transparency from a competitor. There is no mention of any accident, injury, or violation caused by the AI system, nor any near-miss or credible risk event. The content is primarily about reporting and transparency, which fits the definition of Complementary Information as it provides context and updates on AI system performance and governance without describing a new incident or hazard.
New report reveals shocking comparison between Tesla and Waymo -- here are the details

2025-11-17
The Cool Down
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi service employs AI systems for autonomous driving, which have been involved in multiple crashes since launch. These crashes represent direct safety harms linked to the AI system's operation. The presence of a human safety monitor does not negate the AI system's role in these incidents. The article explicitly reports realized harm (crashes) caused by the AI system's use, meeting the criteria for an AI Incident. The environmental and regulatory concerns further contextualize the impact but do not change the classification.
Tesla Robotaxi had 3 more crashes, now 7 total

2025-11-17
Electrek
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi service is an AI system operating autonomous vehicles. The reported crashes are incidents where the AI system's use has directly or indirectly led to harm (vehicle collisions). Although supervisors are present, the AI system's performance and crash rate indicate malfunction or failure to prevent harm. The crashes have been reported to a regulatory body (NHTSA), and the harm includes potential injury and property damage. The lack of detailed public reporting does not negate the incident classification. Hence, this event meets the criteria for an AI Incident.
Tesla Robotaxi Delivers Safe, Affordable Rides in Silicon Valley

2025-11-18
Chosun.com
Why's our monitor labelling this an incident or hazard?
The Tesla Robotaxi is an AI system employing autonomous driving technology. The article details its use and some past safety issues but does not report any actual injury, property damage, or rights violations resulting from its operation. The presence of an employee onboard and regulatory oversight further indicate that harm has not materialized. The described wrong turns and center line crossings are potential safety risks that could plausibly lead to harm in the future. Thus, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident, but no direct or indirect harm has yet occurred according to the article.
Tesla safety driver falls asleep during passenger's robotaxi ride

2025-11-18
Ars Technica
Why's our monitor labelling this an incident or hazard?
The Tesla robotaxi service is an AI system providing autonomous driving capabilities, and the safety driver is the human overseer meant to intervene if the AI system fails. The safety driver falling asleep during the ride is a failure of the safety protocol governing the AI system's use, directly increasing the risk of injury to the passenger and others. The event describes a concrete incident in which the safety driver was asleep multiple times, a direct failure in the AI system's safe operation. This meets the criteria for an AI Incident because the failure of oversight during the AI system's use directly created a significant safety risk and potential harm.
Tesla Robotaxi 'safety driver' caught sleeping on video

2025-11-18
Electrek
Why's our monitor labelling this an incident or hazard?
The Tesla Robotaxi uses an AI system for autonomous driving, with a safety driver as a critical fallback. The safety driver falling asleep multiple times during a ride, despite the AI's driver monitoring system, indicates a failure or malfunction in the AI safety system. This failure directly endangers passenger safety, fulfilling the criteria for harm to persons (potential injury). The event is not merely a potential hazard but an actual incident demonstrating a safety risk caused by the AI system's malfunction and human oversight failure. Hence, it is classified as an AI Incident.
Passenger Alarmed When Tesla Robotaxi "Safety" Driver Falls Completely Asleep at the Wheel

2025-11-18
Futurism
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Tesla's autonomous driving Robotaxi) in active use. The safety driver, who is supposed to supervise and intervene if the AI system malfunctions or encounters a situation it cannot handle, fell asleep multiple times, which is a malfunction in the human oversight of the AI system. This directly endangers passenger safety, fulfilling the criterion of harm to persons. The incident has already occurred and is documented with video evidence, showing realized risk and harm potential. Therefore, it meets the definition of an AI Incident rather than a hazard or complementary information.
Tesla opens Robotaxi access to everyone -- but there's one catch

2025-11-18
TESLARATI
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous driving AI) in a real-world transportation service. The deployment of Robotaxi services with driverless capabilities directly involves AI system use. While no specific harm is reported in the article, the operation of autonomous vehicles in public spaces inherently carries risks of injury or harm to people or property if the AI malfunctions or makes errors. The article mentions ongoing regulatory constraints and safety measures, indicating awareness of potential risks. Since the AI system's use could plausibly lead to harm (e.g., accidents, injuries) if failures occur, this qualifies as an AI Hazard. There is no indication that harm has yet occurred, so it is not an AI Incident. The article is not merely complementary information because it reports the opening of the service to the public, which increases exposure to potential AI-related risks.
Tesla Robotaxi Safety Monitor seems to doze off during Bay Area ride

2025-11-18
TESLARATI
Why's our monitor labelling this an incident or hazard?
The Tesla Robotaxi is an AI system providing autonomous ride-hailing services. The safety monitor's repeated dozing off during the ride, despite AI alerts, directly compromises passenger safety and the safe operation of the vehicle. Although no accident or injury occurred, the event demonstrates a failure in the AI system's use and oversight, creating a direct safety hazard. The AI system's role in alerting the driver, and the driver's failure to respond, link the system's use to a significant safety risk. Therefore, this event meets the criteria for an AI Incident due to the indirect harm to passenger safety and the breakdown of human-AI interaction in the system's operation.
Reckless Tesla Robotaxi Safety Driver Keeps Falling Asleep in San Francisco Ride Despite Alarms: 'I Do Not Trust Those'

2025-11-18
The Nerd Stash
Why's our monitor labelling this an incident or hazard?
The Tesla Robotaxi is an AI system for autonomous driving that still depends on a human safety driver to intervene if necessary. The safety driver's repeated falling asleep during rides, despite alarms, constitutes a malfunction in the human supervision component critical to the AI system's safe operation. This has directly endangered passenger safety, fulfilling the harm criterion (a) injury or harm to health. The incident is not merely a potential hazard but an actual safety failure with direct risk to people, thus classifying it as an AI Incident rather than a hazard or complementary information.
Tesla Robotaxi Accidents: Austin Details Hidden? - News Directory 3

2025-11-18
News Directory 3
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi program involves autonomous driving AI systems. The reported accidents, including collisions with a motorcycle and another vehicle, imply direct harm or risk of harm to people, fulfilling the criteria for injury or harm to persons. The AI system's use is central to these incidents. The concealment of accident details does not negate the occurrence of harm but adds to the severity of the incident. Therefore, this event is classified as an AI Incident.