Tesla Faces Consumer Backlash Over Unfulfilled Full Self-Driving AI Promises

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

Tesla customers report being denied refunds for the Full Self-Driving (FSD) AI feature, even when their vehicles lack the necessary hardware, leading to calls for a class-action lawsuit. The AI system's failure to deliver the promised autonomous capabilities has resulted in consumer harm and eroded trust in AI-driven transportation. [AI generated]

Why's our monitor labelling this an incident or hazard?

The Tesla Full Self-Driving feature is an AI system designed to provide autonomous driving capabilities. The event involves the use and failure of this AI system to deliver promised functionality, leading to consumer harm through false advertising and denial of refunds. This constitutes a violation of consumer rights and harms community trust in AI-enabled transportation technology. The class-action lawsuit and arbitration outcomes confirm that harm has materialized due to the AI system's shortcomings. Hence, the event meets the criteria for an AI Incident. [AI generated]
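The three-way triage the monitor's explanations apply throughout this page (AI Incident vs AI Hazard vs Complementary Information) can be sketched as a simple decision rule. This is a hypothetical illustration only: the field names and the `classify` function are assumptions, not the actual OECD AIM pipeline, which is not public.

```python
def classify(event: dict) -> str:
    """Sketch of the monitor's triage logic, inferred from the
    explanations on this page (hypothetical; not the real AIM code).

    - No AI system involved -> Complementary Information
    - AI system + realized harm (injury, rights violation,
      economic loss) -> AI Incident
    - AI system + credible, demonstrated risk but no realized
      harm (e.g. a near-miss) -> AI Hazard
    - AI system but neither harm nor credible risk (product news,
      regulatory context) -> Complementary Information
    """
    if not event.get("involves_ai_system"):
        return "Complementary Information"
    if event.get("harm_materialized"):
        return "AI Incident"
    if event.get("credible_risk_of_harm"):
        return "AI Hazard"
    return "Complementary Information"


# Example: a near-crash avoided by human intervention, as in the
# Australian FSD rollout coverage below, would be triaged as a hazard.
print(classify({"involves_ai_system": True, "credible_risk_of_harm": True}))
```

Note the ordering: realized harm is checked before risk, which mirrors how the explanations below treat an event as an Incident once harm has materialized, even if future risk also exists.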
AI principles
Accountability; Transparency & explainability

Industries
Mobility and autonomous vehicles; Consumer products

Affected stakeholders
Consumers

Harm types
Economic/Property; Psychological

Severity
AI incident

Business function
Other

AI system task
Recognition/object detection; Forecasting/prediction; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard

'This is mission critical': Inside Tesla's battle to get Full Self-Driving approved in Europe

2025-09-03
Business Insider
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Tesla's Full Self-Driving software) and discusses its development and use, specifically the regulatory approval process in Europe. However, there is no description of any direct or indirect harm caused by the AI system, nor any credible indication that harm is imminent or plausible in the near future based on the information provided. The focus is on regulatory interactions, testing permissions, and company strategy, which fits the definition of Complementary Information. The mention of an unspecified "incident" is too vague to classify as an AI Incident or Hazard without further details. Hence, the article does not meet the criteria for AI Incident or AI Hazard but provides valuable context and updates relevant to AI governance and deployment.
Tesla starts rolling out FSD in Australia, but there's huge risk

2025-09-02
The Financial Express
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system for autonomous driving. The near-crash video shows the system almost causing an accident, which was only avoided by human intervention. This indicates a malfunction or failure in the AI system's operation, posing a credible risk of harm to drivers or others. The warnings from authorities about driver complacency and accountability further highlight the potential for harm. Since no actual harm has yet occurred but the risk is credible and demonstrated, this event is best classified as an AI Hazard rather than an AI Incident.
Tesla Revamps FSD with $99 Monthly Plans and Free Trials

2025-09-02
WebProNews
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system involved in autonomous driving assistance. However, the article does not describe any actual harm, injury, or violation caused by the AI system's use or malfunction. It discusses regulatory investigations and consumer concerns about safety, which indicate potential risks but no confirmed incidents. Therefore, the event does not meet the criteria for an AI Incident or AI Hazard. It is primarily an update on product strategy and regulatory context, which fits the definition of Complementary Information as it provides context and ongoing assessment of AI developments and responses without reporting new harm or credible imminent risk.
Tesla customer alleges Elon Musk is avoiding responsibility concerning full self-driving feature: 'Time for a class-action lawsuit'

2025-09-03
The Cool Down
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving feature is an AI system designed to provide autonomous driving capabilities. The event involves the use and failure of this AI system to deliver promised functionality, leading to consumer harm through false advertising and denial of refunds. This constitutes a violation of consumer rights and harms community trust in AI-enabled transportation technology. The class-action lawsuit and arbitration outcomes confirm that harm has materialized due to the AI system's shortcomings. Hence, the event meets the criteria for an AI Incident.
Tesla owner vents frustrations over vehicle's unsettling behavior in self-driving mode: 'I assume there's nothing I can do'

2025-09-04
The Cool Down
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving mode is an AI system that autonomously controls vehicle navigation and speed. The described erratic behavior and inability to maintain smooth driving indicate malfunction or suboptimal performance during use. This has directly led to driver frustration and potential safety risks, which together qualify as harm to persons. The mention of a class-action lawsuit over misleading advertising further supports the presence of rights violations related to the AI system's deployment. Therefore, this event meets the criteria for an AI Incident due to the AI system's malfunction and its direct or indirect contribution to harm.
Tesla Model S/X Owners Surpass 50% FSD Adoption Rate

2025-09-04
WebProNews
Why's our monitor labelling this an incident or hazard?
The article discusses Tesla's FSD software, which qualifies as an AI system due to its autonomous driving capabilities. However, the content focuses on adoption statistics, market strategies, regulatory environment, and customer sentiment without reporting any realized harm or incidents linked to the AI system's malfunction or misuse. There is no mention of accidents, injuries, rights violations, or other harms caused by the AI system. Therefore, the article serves as complementary information that enhances understanding of the AI ecosystem and its evolution rather than reporting an AI Incident or AI Hazard.
Paul Mitchell Professional Haircare: Pioneering Salon Innovations and Sustainable Beauty

2025-09-04
Bangla news
Why's our monitor labelling this an incident or hazard?
Tesla's FSD software is an AI system enabling advanced driver-assistance functions. The regulatory approval follows a federal safety review addressing safety concerns, indicating prior risks of harm. The update includes features to prevent misuse and ensure driver attention, directly linked to preventing injury or harm. The system's deployment affects vehicle operation in real-world environments, where failures or misuse could cause accidents. Since the event involves the use and update of an AI system with direct links to safety and harm, it qualifies as an AI Incident rather than a hazard or complementary information.
Warning as Tesla's new $10,000 self-driving mode hits Aussie roads: 'There is no excuse'

2025-09-05
Yahoo!7 News
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving feature is an AI system that controls vehicle navigation and driving tasks. The article details specific instances where the AI system malfunctioned or made errors that nearly caused accidents, such as directing the car into oncoming traffic and cutting off a motorcyclist. These are direct consequences of the AI system's use and represent safety hazards with potential for injury or harm. Although no actual injury or crash is reported, the close calls and system failures constitute an AI Incident because the AI system's malfunction has directly led to significant safety risks. The article also discusses the legal responsibility of drivers, emphasizing that the AI system is not yet fully autonomous and requires human supervision. This context supports the assessment that the AI system's current deployment has caused incidents of harm or near-harm, qualifying it as an AI Incident rather than merely a hazard or complementary information.
Tesla changes meaning of 'Full Self-Driving', gives up on promise of autonomy

2025-09-05
Electrek
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) whose development and use have directly led to harm in the form of misleading customers and false advertising. The system was sold with promises of unsupervised autonomy that have not materialized, and Tesla has now legally redefined the system to avoid fulfilling those promises. This constitutes a violation of consumer rights and possibly contractual obligations, which fits the definition of harm under violations of rights. The AI system's role is pivotal as the harm arises from the system's capabilities and the promises made about them. Hence, this is classified as an AI Incident rather than a hazard or complementary information.
Tesla switches on full self-driving for Australia - but there's a catch | Region Canberra

2025-09-06
Region Canberra
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Tesla's Full Self-Driving (Supervised)) actively operating on public roads, performing complex driving tasks autonomously but under human supervision. The system has already been tested with some unsafe behaviors observed, such as not yielding to a motorbike and attempting a wrong turn, which are direct safety risks. Although the system requires driver supervision, these incidents demonstrate that the AI's malfunction or erroneous decision-making has directly led to potentially harmful situations. This fits the definition of an AI Incident because the AI system's use has directly led to circumstances that could cause injury or harm to people. The article also discusses regulatory frameworks and the shift of responsibility to companies, underscoring the significance of the AI system's role in safety-critical contexts. Hence, the event is best classified as an AI Incident rather than a hazard or complementary information.
Musk's self-driving promises take a hit as Tesla reveals "full autonomy" is still out of reach for current cars

2025-09-10
Hindustan Times
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system involved in autonomous driving assistance. The article focuses on Tesla's updated definition and regulatory scrutiny, reflecting the current limitations of the system and clarifying expectations. There is no report of an incident causing harm or a hazard with plausible future harm. The regulatory case and criticism are governance responses to the AI system's marketing and capabilities. Hence, the event is Complementary Information, not an Incident or Hazard.
Tesla Is Dropping the Dream of Human-Free Self-Driving Cars

2025-09-09
VICE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) and its use, but there is no indication that the AI system has directly or indirectly caused harm, nor that it poses a plausible risk of harm in the future as described. The article focuses on Tesla's redefinition and admission about the system's capabilities, which is more about corporate communication and product positioning than an incident or hazard. Therefore, this is best classified as Complementary Information, providing context and updates about the AI system's status and public understanding, rather than reporting an AI Incident or AI Hazard.
Tesla Admits That Its Cars May Never Fully Drive Themselves

2025-09-09
Futurism
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system involved in vehicle operation. The article references deadly crashes linked to the system, indicating harm to persons. The company's redefinition of FSD to a vague standard and legal scrutiny for misleading advertising show the AI system's use has led to violations of regulatory frameworks and safety harms. These factors meet the criteria for an AI Incident, as the AI system's use has directly or indirectly caused harm and legal issues. The article does not merely discuss potential future harm or general AI developments but focuses on realized harms and consequences related to the AI system's deployment and marketing.
Tesla Advances FSD with V13 Updates and Global Expansion

2025-09-09
WebProNews
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system involved in autonomous driving. The article discusses its development, use, regulatory investigations, and global expansion. However, it does not describe any actual harm or incidents caused by the AI system, nor does it present a credible imminent risk of harm. The regulatory scrutiny and investigations indicate ongoing governance responses to potential safety issues, which fits the definition of Complementary Information. The article mainly provides updates on the AI system's evolution and the surrounding regulatory environment rather than reporting a new AI Incident or AI Hazard.
Tesla changes meaning of 'Full Self-Driving', gives up on promise of ...

2025-09-09
Quinta’s weblog
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed to provide autonomous driving capabilities. The company's change in the definition and abandonment of the original promise directly affects consumers who purchased the feature expecting unsupervised autonomy, thus causing harm through deception and financial loss. This harm relates to violation of consumer rights and contractual obligations, which falls under violations of applicable law protecting fundamental rights. Although no physical harm or accident is reported, the direct impact on consumers' rights and financial interests qualifies this as an AI Incident rather than a hazard or complementary information.
Elon Musk Unveils Tesla FSD V14: 10x Parameters for Sentient Autonomy by 2025

2025-09-10
WebProNews
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and upcoming deployment of an advanced AI system for autonomous driving, highlighting its potential to improve safety and autonomy. However, it does not describe any actual harm, malfunction, or incident caused by the AI system. The discussion of ethical concerns and regulatory skepticism pertains to plausible future risks rather than realized harm. Therefore, this event qualifies as an AI Hazard because the AI system's deployment could plausibly lead to incidents or harms in the future, but no such incidents have yet occurred or been reported.
Tesla's self-driving mode is coming to Australia amid controversy - but it won't create true driverless cars

2025-09-11
The Conversation
Why's our monitor labelling this an incident or hazard?
The article primarily provides an overview of Tesla's FSD technology, its legal status, safety concerns, and regulatory responses. It references past and ongoing issues (e.g., phantom braking lawsuits, investigations) but does not report a new AI Incident or a new AI Hazard event. The focus is on the broader context, controversies, and governance challenges, making it Complementary Information rather than a direct report of an AI Incident or Hazard.
Tesla owner admits to driving drunk on Full Self-Driving, proving Tesla needs to do more

2025-09-11
Electrek
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI-based driver assistance system requiring active driver supervision. The owner's admission to driving drunk while using FSD demonstrates misuse of the AI system, a pattern of misuse that has previously resulted in fatal accidents. The AI system's role is pivotal because it enables the driver to attempt to delegate driving tasks while intoxicated, leading to direct or indirect harm (injury or death). The event also highlights Tesla's inadequate communication about the system's limitations, which contributes to the harm. Thus, this is an AI Incident involving harm to persons due to misuse of an AI system.
Tesla hacker finds lifesaving FSD suggestions in 2025.32.3

2025-09-11
TESLARATI
Why's our monitor labelling this an incident or hazard?
The article describes features of an AI system (Tesla's FSD) designed to assist drivers and reduce risks associated with distracted or tired driving. However, it does not report any actual harm or incident caused by the AI system, nor does it describe a plausible imminent harm event. Instead, it provides information about ongoing development and potential future capabilities, which could improve safety. Therefore, this is complementary information about AI system development and safety features rather than an incident or hazard.
Tesla's 'full-self driving' mode is coming to Australia, but is it even legal yet? - Switzer Daily

2025-09-11
Switzer Daily
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Tesla's FSD) and discusses its use and regulatory status. It mentions potential safety risks and ongoing legal actions related to the system's behavior (phantom braking), but does not describe a specific event where the AI system directly or indirectly caused harm. The concerns and legal debates indicate potential risks but no confirmed harm has occurred as per the article. Therefore, this is not an AI Incident or AI Hazard. Instead, the article provides contextual and regulatory information about the AI system's deployment and safety considerations, fitting the definition of Complementary Information.