Tesla FSD Beta Malfunction Nearly Causes Crashes During Journalist's Test Drive

Fred Lambert, editor-in-chief of Electrek, reported that Tesla's Full Self-Driving (FSD) Beta software (v11.4.7) nearly caused two high-speed crashes by aggressively steering his Model 3 toward a highway median. The AI system's malfunction posed a direct risk to driver safety, highlighting ongoing concerns about autonomous vehicle reliability.[AI generated]

Why's our monitor labelling this an incident or hazard?

Tesla's Full Self-Driving Beta software is an AI system designed to autonomously control the vehicle. The journalist's report describes two near-crash events caused by a software fault that steered the car aggressively toward the highway median, nearly leading to injury or death. This is a clear example of an AI system malfunction directly creating a risk of harm to a person, fitting the definition of an AI Incident. The harm materialized as near crashes, which are significant safety events.[AI generated]
AI principles
Safety, Robustness & digital security, Accountability, Transparency & explainability, Human wellbeing, Democracy & human autonomy

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers

Harm types
Physical (injury), Physical (death), Psychological, Reputational

Severity
AI incident

Business function
Other

AI system task
Recognition/object detection, Goal-driven organisation, Reasoning with knowledge structures/planning


Articles about this incident or hazard

Journalist claims Tesla's new self-driving software nearly caused him to crash twice

2023-09-02
The Daily Courier

Journalist claims Tesla's new self-driving software nearly caused him to crash twice

2023-09-01
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving software is an AI system that autonomously controls the vehicle. The journalist's report that the software nearly caused crashes due to a bug indicates a malfunction during use. This malfunction directly endangered the driver's safety, creating a risk of injury or harm to a person. The event therefore qualifies as an AI Incident because of the direct link between the AI system's malfunction and the near harm experienced.

Tesla Says Its FSD-Equipped Vehicles Will Be Able To Drive Themselves, It's Too Confident

2023-09-03
autoevolution
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous driving. The article states that the system is still in beta, requires driver supervision, and can behave incorrectly at critical moments, indicating potential malfunction or misuse risks. There is no mention of actual harm occurring yet, but the system's current state and the legal challenges suggest a credible risk of future incidents involving injury or violation of safety standards. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has been reported so far.

Journalist claims Tesla's new self-driving software nearly caused him to crash twice

2023-09-01
Post and Courier
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving software is an AI system designed to autonomously control the vehicle. The journalist's report of the system pushing the car dangerously towards the median strip twice indicates a malfunction during use, directly leading to a risk of injury. Since the AI system's malfunction nearly caused harm, this qualifies as an AI Incident under the definition of injury or harm to a person resulting from AI system malfunction.

Tesla slashes FSD beta software price

2023-09-05
Vehicle Telematics, ADAS, Connected and Autonomous Vehicle
Why's our monitor labelling this an incident or hazard?
The Tesla FSD Beta software is an AI system used for autonomous driving. The article references a past recall due to safety risks, which constitutes an earlier AI Incident. The current event, however, concerns a price reduction intended to encourage wider use and testing, with no new harm or plausible imminent harm described. The article also discusses regulatory concerns and the need for oversight, which are governance-related responses. Since no new harm or credible risk is reported here, the event is classified as Complementary Information, providing an update on the AI system's status and ecosystem context.