Tesla FSD Beta V12.3 Praised but Faces Update Failures and Trust Risks


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Tesla's FSD Beta V12.3, an AI-driven autonomous driving system, is praised for its human-like performance but faces high update failure rates on some Hardware-4 vehicles. Experts warn that increased user trust in the system, despite potential malfunctions, could pose safety risks if users over-rely on the AI.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Tesla FSD Beta is an AI system controlling autonomous driving functions. The reported event is a malfunction in the software update process, which is part of the AI system's use and maintenance. Although the update failure could potentially lead to safety risks if the system is not properly updated or functioning, the article does not mention any actual incidents or harms resulting from this failure. The problem is currently being investigated and may be fixed by future updates or hardware replacement. Since no harm has occurred yet but there is a plausible risk if unresolved, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Safety, Robustness & digital security, Accountability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers, General public

Harm types
Physical (injury)

Severity
AI hazard

AI system task
Recognition/object detection, Reasoning with knowledge structures/planning, Goal-driven organisation


Articles about this incident or hazard


Tesla FSD Beta V12.3 Update Fails on Many Hardware-4 Vehicles, the Cause Is Not Yet Known

2024-03-19
autoevolution

Tesla Is Confident FSD Beta V12 Hit a Breakthrough, It's Too Good for Your Own Safety

2024-03-20
autoevolution
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD Beta V12.3) whose use could plausibly lead to harm (injury or death) because users might overtrust the system and fail to intervene in time during emergencies. Although no specific incident of harm is reported, the article warns of a credible risk of accidents due to human factors interacting with the AI system's capabilities. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving injury or harm to persons.

Tesla no longer AI training compute constrained: Elon Musk

2024-03-22
TESLARATI
Why's our monitor labelling this an incident or hazard?
The article focuses on a positive update regarding Tesla's AI training capabilities, specifically the removal of compute constraints that could accelerate AI system improvements. There is no mention of any harm, malfunction, or violation caused by the AI systems, nor any plausible risk of harm described. The content is informational about AI development progress and potential future benefits, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

BREAKTHROUGH? Tesla's FSD Beta V12.3 Promises "Human Like" Performance

2024-03-20
AutoSpies.com
Why's our monitor labelling this an incident or hazard?
The Tesla FSD Beta V12.3 is an AI system used for autonomous driving. The article does not report any accidents, injuries, or violations caused by the system so far, but it warns about the dangers of users blindly trusting the system, which could plausibly lead to accidents or harm in the future. Therefore, this event represents a plausible risk of harm stemming from the AI system's use, qualifying it as an AI Hazard rather than an Incident or Complementary Information.