Safety Concerns Over Tesla's Self-Driving Software in Ride-Hailing Services

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A self-driving Tesla used for Uber collided with an SUV in Las Vegas, raising safety concerns about autonomous ride-hailing services. Additionally, Tesla's 'Full Self-Driving' software in a Cybertruck attempted to drive onto a median, highlighting potential malfunctions in AI systems that could lead to harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

Tesla’s FSD is an AI-driven driver-assist system whose malfunction (failure to register an SUV in a blind spot) directly contributed to a collision with minor injuries and a totaled car. The incident demonstrates realized harm from an AI system in use, making this an AI Incident.[AI generated]

AI principles
Accountability, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy, Human wellbeing

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers

Harm types
Physical (injury), Economic/Property, Reputational

Severity
AI incident

Business function
Monitoring and quality control, Other

AI system task
Recognition/object detection, Forecasting/prediction, Reasoning with knowledge structures/planning, Goal-driven organisation


Articles about this incident or hazard

Uber, Lyft drivers use Teslas as makeshift robotaxis, raising safety concerns

2024-10-03
ThePrint
Why's our monitor labelling this an incident or hazard?
Tesla’s FSD is an AI-driven driver-assist system whose malfunction (failure to register an SUV in a blind spot) directly contributed to a collision with minor injuries and a totaled car. The incident demonstrates realized harm from an AI system in use, making this an AI Incident.

Why Uber and Lyft drivers are using risky DIY Tesla robotaxis

2024-10-03
Fast Company
Why's our monitor labelling this an incident or hazard?
The article describes a realized accident involving Tesla’s FSD autonomous-driving AI and a related federal inquiry, indicating direct harm (property and potential personal injury) caused by the AI system’s malfunction or misuse. This meets the criteria for an AI Incident.

Cybertruck Gets FSD, Tries to Drive Onto Median in the Middle of Sunset Boulevard

2024-09-30
Futurism
Why's our monitor labelling this an incident or hazard?
This event involves an AI system (Tesla’s FSD) malfunctioning in real-world use and nearly causing harm, but with no actual damage or injury. It therefore constitutes a near-miss scenario—an AI Hazard—rather than a realized AI Incident.

Safety concerns rise as Uber, Lyft drivers use Teslas as robotaxis | Honolulu Star-Advertiser

2024-10-03
Honolulu Star Advertiser
Why's our monitor labelling this an incident or hazard?
The article describes an actual collision caused by Tesla’s FSD driver-assist AI misreading the road and not braking in time. This malfunction directly led to harm (minor injuries and vehicle damage). As this is an AI system error causing realized physical harm, it qualifies as an AI Incident.

Uber, Lyft drivers use Teslas as makeshift robotaxis, raising safety concerns

2024-10-03
The Economic Times
Why's our monitor labelling this an incident or hazard?
The event describes a real-world crash directly caused by the malfunction of Tesla’s FSD AI system while in use for ride-hailing, leading to harm (injuries and property damage). This fits the definition of an AI Incident, as the AI’s failure directly led to physical harm.

Uber, Lyft Drivers Use Teslas as Makeshift Robotaxis, Raising Safety Concerns

2024-10-03
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
Tesla’s Full Self-Driving software is an AI system performing real-time steering, acceleration, and braking. In this case, the FSD system failed to register an SUV emerging from a blind spot, directly leading to a crash and injuries. This realized harm caused by the AI’s malfunction or performance error classifies the event as an AI Incident.

Insight: Uber, Lyft drivers use Teslas as makeshift robotaxis, raising safety concerns

2024-10-03
Reuters
Why's our monitor labelling this an incident or hazard?
The incident involves active use of an AI system (Tesla FSD) whose failure to register and react to a crossing vehicle directly led to a collision and injuries. This is a realized harm caused by the AI system's malfunction during ride-hailing operations, fitting the definition of an AI Incident.

Uber, Lyft drivers use Teslas as makeshift robotaxis, raising safety concerns - ET Auto

2024-10-04
ETAuto.com
Why's our monitor labelling this an incident or hazard?
Tesla’s FSD is an AI-based driver-assist system. The collision represents a realized harm (property damage and potential injury) directly tied to the AI system’s use. This constitutes an AI Incident.

Uber, Lyft drivers use Teslas as makeshift robotaxis, raising...

2024-10-03
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
Tesla’s Full Self-Driving software is an AI system whose malfunction (failure to slow or register an SUV) directly caused a collision and injury. The harm is realized, stemming from the system’s use in a ride-hail context without appropriate regulation or oversight.

Uber, Lyft drivers use Teslas as makeshift robotaxis, raising safety concerns

2024-10-03
Aol
Why's our monitor labelling this an incident or hazard?
Tesla’s FSD is an AI-driven driver-assist system. The April crash occurred because the software did not register another vehicle, forcing the driver to intervene at the last moment. The event caused physical harm and property damage, fulfilling the criteria for an AI Incident.

US Uber drivers are using Teslas as makeshift robotaxis, raising safety concerns

2024-10-04
https://auto.hindustantimes.com
Why's our monitor labelling this an incident or hazard?
An AI system (Tesla FSD) was actively in use, malfunctioned by not detecting another vehicle, and directly led to a traffic collision causing physical injury and property damage. This is a realized harm from an AI system’s use and qualifies as an AI Incident.

Tesla Semi with sensor rig spotted potentially ground truth calibrating for FSD

2024-10-01
TESLARATI
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) in development and testing phases, with sensor calibration activities observed. There is no indication that the AI system has caused or contributed to any harm, nor is there a credible risk of imminent harm described. The article is primarily about ongoing development and community anticipation, which fits the category of Complementary Information as it provides context and updates on AI system progress without reporting an incident or hazard.

Tesla Beats The Clock With Early Access To Cybertruck FSD 12.5.5, As Early Videos Show Promise

2024-10-01
Forbes
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system designed for autonomous driving. The article documents malfunctions such as failing to recognize a flashing speed-limit sign and veering toward a median strip, both requiring driver intervention. These malfunctions could directly or indirectly lead to injury or harm to persons. Although no accidents are reported, the system's failure to act correctly and the need for human override demonstrate realized risks from the AI system's use, so this event is classified as an AI Incident rather than a hazard or complementary information.

Tesla Clears FSD Suit, Shares Up Ahead of Key Updates: Buy TSLA Now?

2024-10-01
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Tesla's FSD) but does not describe any incident or hazard involving harm or plausible harm caused by the AI system. The lawsuit was dismissed, and no harm was found to have resulted from Tesla's statements or FSD technology. The article focuses on business, legal, and market updates related to Tesla's AI and autonomous driving efforts, which fits the definition of Complementary Information rather than an Incident or Hazard.

Tesla's Full Self-Driving update promises smoother lane changes and more decisive action, as it speeds towards a driverless future

2024-10-01
TechRadar
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Tesla's FSD) and discusses its development and use. However, it does not describe any actual harm or incident caused by the AI system, nor does it report a near-miss or credible risk event that would qualify as an AI Hazard. Instead, it provides updates on the AI system's capabilities, progress, and community-collected safety data, which fits the definition of Complementary Information. The article enhances understanding of the AI ecosystem and ongoing safety challenges but does not document a new incident or hazard.

Tesla's Cybertruck gets Supervised Full Self-Driving | TechCrunch

2024-09-30
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Tesla's FSD) in real-world driving conditions. The system's malfunction or limitation (needing driver intervention to avoid driving onto a median) directly created a safety-critical situation, though no harm occurred. Because the AI system's use has already produced a near-miss requiring human intervention to prevent harm, the event qualifies as an AI Incident: the AI system was directly involved in a safety-related event with potential for injury or harm.

Forget Waymo, some Uber and Lyft drivers exploit this Tesla tech

2024-10-03
TheStreet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Tesla's FSD AI system being used by rideshare drivers, which is an AI system by definition. The incident of the April 10 crash caused by the FSD system's failure to detect another vehicle directly led to harm (minor injuries and vehicle damage). The drivers' reliance on the AI system despite its limitations and the lack of regulatory oversight for such use further supports the classification as an AI Incident. The harm is realized and directly linked to the AI system's malfunction and use, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Tesla's Full Self-Driving tech is now on a small number of Cybertrucks

2024-09-30
Pocket-lint
Why's our monitor labelling this an incident or hazard?
The article describes the deployment of an AI system (Tesla's Full Self-Driving software) and discusses its current capabilities and limitations, including safety concerns. However, it does not report any actual harm, accident, or violation caused by the AI system. The information is about ongoing development and rollout, with potential future risks implied but no realized harm. Therefore, this is Complementary Information, as it provides context and updates on an AI system's status and safety considerations without describing a specific AI Incident or AI Hazard.

Tesla Cybertrucks Are Getting Full Self-Driving Now, So Good Luck Out There

2024-10-01
Jalopnik
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system involved in real-time autonomous vehicle navigation. The near-miss incident where the truck almost drove into a median due to the AI's decision-making constitutes a malfunction leading to plausible physical harm. Although no injury occurred, the event demonstrates a direct risk of harm caused by the AI system's malfunction during operation. Therefore, this qualifies as an AI Incident under the definition of harm to persons resulting from AI system malfunction.

Watch your mirrors: Tesla Cybertrucks gain Full Self Driving

2024-09-30
TheRegister.com
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system enabling autonomous driving. The article reports that during extensive real-world testing, drivers had to intervene frequently (once every 13 miles) due to sudden and dangerous errors by the AI system. This indicates that the AI system's use has directly led to safety risks and potential harm to people, fulfilling the criteria for an AI Incident. The harm is not just potential but ongoing, as the system is in use and errors have been observed that could cause accidents or fatalities. Therefore, this event is classified as an AI Incident.

Forget Waymo, some Uber and Lyft drivers exploit this Tesla tech

2024-10-03
Post and Courier
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Tesla's FSD AI system being used in rideshare vehicles, which directly led to a crash causing injuries and property damage. The AI system's failure to detect an obstacle was a contributing factor, fulfilling the criteria for an AI Incident due to harm to persons and property. The involvement is through the use and malfunction of the AI system. The incident is not hypothetical or potential but has already occurred, and the article details the consequences and risks associated with this AI use in a real-world context.

Some Tesla Cybertrucks Are Getting FSD 'Supervised'

2024-10-01
Autoweek
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Tesla's FSD) and its use, but it does not report any actual harm or incident resulting from its deployment. The system is explicitly described as requiring driver supervision, and no accidents or malfunctions causing harm are mentioned. The discussion about regulatory challenges and future autonomous capabilities is forward-looking but does not present a credible immediate risk or incident. Hence, it fits the definition of Complementary Information, as it provides context and updates on AI system deployment and governance without describing an AI Incident or AI Hazard.

Forget Waymo, some Uber and Lyft drivers exploit this Tesla tech

2024-10-03
MyrtleBeachOnline
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Tesla's FSD AI system being used by rideshare drivers, which is an AI system by definition. The incident of the April 10 crash caused by the FSD system's failure to detect a vehicle directly led to harm (minor injuries and vehicle damage). The drivers' reliance on the AI system despite its limitations and the lack of regulatory oversight further implicate the AI system's use in causing harm. Therefore, this qualifies as an AI Incident because the AI system's malfunction and use have directly led to injury and property damage, fulfilling the criteria for an AI Incident under the OECD framework.

Las Vegas Crash Spotlights Tesla's Autonomous Driving Concerns

2024-10-03
Finimize
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving software is an AI system designed to autonomously control vehicles. The crash in Las Vegas was caused by the AI's failure to detect another vehicle, which directly led to a traffic accident. This constitutes harm to persons or property and highlights safety risks associated with the AI system's malfunction. The article discusses the incident's implications for regulation and safety, confirming the AI system's role in causing harm. Hence, the event meets the criteria for an AI Incident.

Uber and Lyft drivers use Teslas as makeshift robotaxis, raising safety concerns

2024-10-03
de.marketscreener.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (Tesla FSD) that directly led to physical harm (injuries) and property damage. This satisfies the criteria for an AI Incident, as the autonomous software’s failure was a pivotal factor in the collision.

"Gelegentlich gefährlich unfähig": Experten müssen im Tesla-"Autopilot"-Test dutzende Male eingreifen

2024-10-03
Merkur.de
Why's our monitor labelling this an incident or hazard?
The article describes real-world malfunctions of an AI driving system (Tesla FSD) that directly created dangerous situations and required manual interventions to prevent harm. This constitutes an AI Incident because the system’s failures led to potential injury and safety risks.

"Scheinbare Unfehlbarkeit": Was Teslas "Autopilot" laut Experten so gefährlich macht

2024-10-01
extratipp.com
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous driving assistance. The article reports multiple instances where the AI system malfunctioned or behaved unpredictably, causing dangerous situations that required human intervention to avoid accidents. The system's misleading naming leads to driver complacency, increasing the risk of harm. These factors constitute direct involvement of an AI system in events that have caused or could cause injury or harm to people, fitting the definition of an AI Incident.

"Gelegentlich gefährlich unfähig": Experten müssen im Tesla-"Autopilot"-Test dutzende Male eingreifen

2024-10-03
extratipp.com
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous driving assistance. The test report documents multiple instances where the AI system failed, causing dangerous situations that required human drivers to intervene to prevent accidents. These failures directly relate to the AI system's malfunction and pose clear risks of injury or harm to persons, fulfilling the criteria for an AI Incident. The article describes actual use and malfunction leading to safety hazards, not just potential risks or general commentary, so it is not merely a hazard or complementary information.

"Scheinbare Unfehlbarkeit": Was Teslas "Autopilot" laut Experten so gefährlich macht

2024-10-01
az-online.de
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous driving assistance. The article reports multiple real-world malfunctions during testing that created hazardous situations requiring human intervention to avoid accidents. These malfunctions directly risk injury or harm to persons, fulfilling the criteria for an AI Incident. The misleading naming of the system also contributes to misuse and overreliance, which is an indirect factor in the harm. Since actual dangerous incidents occurred or were narrowly avoided due to AI malfunction, this is not merely a hazard or complementary information but an AI Incident.

Tesla finally rolls out its fully autonomous driving technology to some Cybertrucks

2024-09-30
Quartz auf Deutsch
Why's our monitor labelling this an incident or hazard?
The article describes the introduction of Tesla's FSD, an AI system controlling autonomous driving functions, into Cybertrucks. The system's involvement in over 200 accidents and 29 deaths has been documented previously, with ongoing investigations by safety authorities. However, this article does not report a new accident or harm directly caused by the Cybertruck FSD deployment. Instead, it updates on the rollout and testing status, including performance metrics and regulatory context. Since no new harm or plausible imminent harm is reported, and the focus is on deployment progress and related context, the event fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Tesla's "Full Self-Driving" actually can't drive that far

2024-09-30
Quartz auf Deutsch
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed to autonomously navigate traffic. The study documents multiple instances where the system failed to operate safely, requiring human intervention to prevent accidents. The described behaviors (running red lights, driving into oncoming lanes) constitute direct safety hazards and have been linked to accidents and fatalities in other reports. Therefore, the AI system's malfunction has directly led to harm or risk of harm, fitting the definition of an AI Incident involving injury or harm to persons.

Tesla prevails in investor lawsuit over Musk's self-driving marketing

2024-09-30
东方财富网
Why's our monitor labelling this an incident or hazard?
The article discusses a legal case about Tesla's marketing of its AI-based autonomous driving system, focusing on whether statements were misleading. There is no indication that the AI system caused harm or that harm was plausibly imminent. The event is about governance and legal proceedings responding to AI-related claims, which is complementary information enhancing understanding of AI ecosystem impacts and responses, rather than a new incident or hazard.

"人类爱好者"马斯克,锚定人工智能│AI 21人-证券之星

2024-10-04
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article discusses multiple AI systems and their development and deployment but does not describe any event where these AI systems have directly or indirectly caused harm (physical, legal, societal, or environmental). Nor does it identify any credible or plausible future harm stemming from these AI systems. Instead, it serves as a comprehensive update on Musk's AI ecosystem and ambitions, which fits the definition of Complementary Information. There is no indication of an AI Incident or AI Hazard in the content provided.

Looking beneath the surface, the true core of Elon Musk's overall industrial layout is artificial intelligence, which is also his biggest growth driver. In recent years, people have been holding their breath over the latest advances in AI technology, while Musk has been the fastest and boldest to put artificial intelligence to commercial use.

2024-10-04
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions multiple AI systems and their commercial use, such as Neuralink's brain-machine interfaces, Tesla's autonomous driving technologies, and xAI's language models, confirming AI system involvement. However, it does not describe any realized harm (injury, rights violations, disruption, or other significant harms) caused by these AI systems, nor does it indicate any credible imminent risk of such harm. The focus is on technological progress, market deployment, and strategic ambitions, which aligns with the definition of Complementary Information. There is no indication of an AI Incident or AI Hazard in the content provided.

Tesla prevails in investor lawsuit over Musk's self-driving marketing

2024-09-30
新浪财经
Why's our monitor labelling this an incident or hazard?
The article discusses a legal ruling on investor lawsuits related to Tesla's marketing of its AI-based autonomous driving system. While the AI system (FSD) is central to the dispute, the event does not describe any realized harm caused by the AI system itself, nor does it describe a plausible future harm scenario. Instead, it focuses on the legal and regulatory process addressing claims of misleading marketing. This fits the definition of Complementary Information, as it informs about governance and societal responses to AI-related issues without reporting a new AI Incident or AI Hazard.

Musk sued by Tesla shareholders: accused of exaggerating FSD capabilities

2024-10-02
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Tesla's Full Self-Driving technology) and concerns its development and use, specifically allegations of misleading statements about its capabilities. However, no direct or indirect harm caused by the AI system is reported, and the court found insufficient evidence to support claims of intentional misrepresentation. Therefore, this is not an AI Incident or AI Hazard. The main focus is on legal proceedings and corporate governance related to AI, which fits the category of Complementary Information as it provides context and updates on societal and governance responses to AI-related issues.

Overseas: Tesla shareholders' lawsuit accusing Musk of exaggerating FSD capabilities dismissed by judge

2024-10-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) and concerns its development and public communication about its capabilities. However, the lawsuit was dismissed, and no direct or indirect harm from the AI system's use or malfunction is reported. The article focuses on the legal dispute over alleged misleading statements and the broader phenomenon of overpromising in tech, which is a governance and societal issue rather than an incident or hazard involving realized or plausible harm from the AI system. Therefore, this is best classified as Complementary Information, providing context and discussion about AI-related governance and market dynamics without describing a new AI Incident or AI Hazard.

Uber drivers use Teslas for ride-hailing passengers; some drivers' use of the FSD feature raises safety concerns

2024-10-04
m.163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system: Tesla's Full Self-Driving (FSD) software for autonomous driving. The accident occurred while the AI system was active and failed to act appropriately (it did not reduce speed in time), contributing indirectly to the collision and the resulting harm (vehicle damage and minor injuries). The article also discusses regulatory and safety concerns around the use of this AI system in ride-hailing, reinforcing the incident's significance. It therefore meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

Tesla shareholders' lawsuit accusing Musk of exaggerating FSD capabilities and development progress dismissed by judge

2024-10-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article discusses a legal case concerning alleged exaggeration of an AI system's capabilities (Tesla's FSD) and its impact on investor decisions. While the AI system is central to the dispute, no actual harm (such as injury, rights violations, or operational disruption) caused by the AI system is described, and the court's dismissal indicates no proven incident occurred. The focus is on the legal and societal response to claims about AI, and there is no indication of plausible future harm from the AI system in this context. Hence, the classification is Complementary Information.