Tesla's Full Self-Driving AI Faces Crashes and Legal Action in Europe

Tesla's Full Self-Driving (FSD) AI system, marketed as autonomous, has been involved in crashes in Europe, leading to user dissatisfaction and organized legal actions, especially in the Netherlands. Owners allege misleading claims about the technology's capabilities and seek compensation for unfulfilled promises. [AI generated]

Why's our monitor labelling this an incident or hazard?

The Tesla FSD system is an AI system designed for autonomous driving. The article reports that the system has been involved in crashes, indicating malfunction or failure to perform as promised, which has caused harm to users and led to legal claims. The harm includes physical safety risks and consumer rights violations due to misleading claims. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm and legal disputes. [AI generated]
AI principles
Safety; Transparency & explainability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers

Harm types
Economic/Property; Reputational

Severity
AI incident

Business function
Manufacturing

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard

Self-driving cars could be tested in Askersund - the municipality opens the door to trials

2026-04-28
SVT Nyheter
Why's our monitor labelling this an incident or hazard?
Tesla's self-driving cars rely on AI systems for autonomous driving. The article discusses the approval process for testing these AI systems on public roads, highlighting potential safety concerns and labor issues. Since the tests have not yet begun and no harm has been reported, the event does not qualify as an AI Incident. However, the deployment of AI-enabled autonomous vehicles on public roads could plausibly lead to harm (e.g., accidents, safety risks), making this an AI Hazard. The article focuses on the potential for future harm rather than realized harm or responses to past incidents.

Musk's self-driving failure - Tesla owners feel deceived

2026-04-30
Dagens PS
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system designed for autonomous driving. The article reports that the system has been involved in crashes, indicating malfunction or failure to perform as promised, which has caused harm to users and led to legal claims. The harm includes physical safety risks and consumer rights violations due to misleading claims. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm and legal disputes.

Tesla gets green light to test self-driving in Sweden

2026-04-28
Teknikveckan
Why's our monitor labelling this an incident or hazard?
The article describes the authorized testing of an AI system (Tesla's Full Self-Driving) on public roads, which involves the use of AI in a real environment. However, there is no indication that any harm, injury, or violation has occurred or that the AI system malfunctioned. The testing is supervised with a safety driver, and the article focuses on the start of testing and data collection rather than any incident or hazard. While there is a plausible risk inherent in testing autonomous driving AI, the article does not report any near misses or credible warnings of imminent harm. Therefore, this is best classified as Complementary Information, providing context on AI system deployment and development rather than an incident or hazard.