California Sues Tesla Over Misleading AI Autopilot Claims, Suspends Operations

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

California's DMV is seeking to suspend Tesla's manufacturing and sales licenses for at least 30 days, alleging the company misled consumers about its AI-based Autopilot and Full Self-Driving features. The lawsuit claims Tesla's marketing falsely implied full vehicle autonomy, prompting regulatory action and potential restitution for affected customers.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Tesla Autopilot and Full Self-Driving features are AI systems involved in autonomous vehicle operation. The legal case alleges that Tesla misled consumers about the capabilities of these AI systems, which can cause harm by creating false expectations about vehicle autonomy and potentially endangering users who may over-rely on the system. The suspension of Tesla's operations in California is a direct consequence of this issue, reflecting realized harm in terms of consumer deception and regulatory enforcement. Hence, this event meets the criteria for an AI Incident due to the AI system's use leading to violations of consumer protection and safety obligations.[AI generated]
AI principles
Transparency & explainability
Accountability
Safety

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

AI system task
Recognition/object detection
Reasoning with knowledge structures/planning
Goal-driven organisation

Articles about this incident or hazard

Tesla sales could be suspended in California

2025-07-22
Newsweek
Why's our monitor labelling this an incident or hazard?
The article centers on a lawsuit challenging Tesla's marketing of AI-driven semi-autonomous features as fully autonomous, which could mislead consumers. While the AI systems are involved, the event is about regulatory and legal scrutiny rather than an incident of harm caused by the AI system's malfunction or misuse. No injury, property damage, rights violation, or other harm has been reported as resulting from the AI system's operation. The event is best classified as Complementary Information because it provides important context on governance and legal responses to AI system claims and potential risks, without describing a realized AI Incident or a plausible AI Hazard.
Newsom suspends Tesla's operations in Cali as Musk fights back

2025-07-22
Conservative News Today
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot and Full Self-Driving features are AI systems involved in autonomous vehicle operation. The legal case alleges that Tesla misled consumers about the capabilities of these AI systems, which can cause harm by creating false expectations about vehicle autonomy and potentially endangering users who may over-rely on the system. The suspension of Tesla's operations in California is a direct consequence of this issue, reflecting realized harm in terms of consumer deception and regulatory enforcement. Hence, this event meets the criteria for an AI Incident due to the AI system's use leading to violations of consumer protection and safety obligations.
Tesla 'False Advertising' Lawsuit Turns Heads

2025-07-22
Men's Journal
Why's our monitor labelling this an incident or hazard?
The event centers on a legal dispute over Tesla's advertising claims about its AI-based driver assistance systems. While these systems are AI systems, the article does not report any actual harm caused by their malfunction or misuse, nor does it describe a credible risk of harm that could plausibly lead to an incident. The focus is on regulatory and legal responses to alleged false advertising, which is a governance and societal response to AI-related claims. Therefore, this event fits best as Complementary Information, providing context on societal and legal reactions to AI system marketing rather than describing an AI Incident or AI Hazard.
California Could Suspend Tesla Manufacturing Over False Advertising Claim

2025-07-23
CleanTechnica
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot and Full Self Driving are AI systems involved in autonomous vehicle operation. The fatal accident caused by the Tesla in Autopilot mode, which failed to stop at a stop sign and caused death and injury, is a direct harm linked to the AI system's malfunction or limitations. Additionally, the lawsuit about false advertising concerns the use and marketing of these AI systems, which misled consumers about their capabilities, potentially contributing to misuse or overreliance. These factors meet the criteria for an AI Incident as the AI system's use and malfunction have directly led to harm to persons and violations of consumer protection laws. The ongoing legal proceedings and regulatory actions further confirm the seriousness of the incident.
California DMV files lawsuit against Tesla over self-driving capabilities

2025-07-23
KRON4
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot and Full Self-Driving features are AI systems designed to assist driving. The DMV's lawsuit centers on false advertising that may cause drivers to overtrust these AI systems, which have been linked to safety issues and recalls. This situation reflects harm or risk of harm to people due to the AI system's use and malfunction, fulfilling the criteria for an AI Incident. The lawsuit and recall indicate realized or ongoing harm rather than just potential future harm, so this is classified as an AI Incident.
Tesla's own data confirms Autopilot safety regressed in 2025

2025-07-23
Electrek
Why's our monitor labelling this an incident or hazard?
Tesla Autopilot is an AI system for autonomous driving. The report shows that the safety metric (miles between crashes) has worsened in 2025, indicating increased risk of accidents when Autopilot is engaged. This is a direct harm to health and safety of persons using or around the system. The event stems from the use of the AI system and its performance regression, which directly leads to increased risk of injury or harm. Hence, this is an AI Incident.
Elon Musk's Tesla Reveals Jaw-Dropping Stat on Autopilot Technology

2025-07-23
Men's Journal
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot is an AI system involved in driving vehicles autonomously. The article references accidents and lawsuits related to this technology, which implies harm has occurred in some cases. However, the article does not describe a specific new incident or event where the AI system's malfunction or use directly caused harm. Instead, it provides statistical data on safety and ongoing legal challenges, which is informative but does not constitute a new AI Incident or AI Hazard. Thus, it fits best as Complementary Information, updating on the broader context of AI system use and its societal/legal implications.
Tesla Q2 2025 vehicle safety report proves FSD is a killer safety feature

2025-07-23
TESLARATI
Why's our monitor labelling this an incident or hazard?
The article discusses Tesla's AI-based Autopilot and FSD systems and their safety performance, which involves AI systems making real-time driving decisions. The data shows fewer crashes when these systems are active, implying positive safety outcomes rather than harm. There is no mention of any injury, property damage, rights violation, or other harm caused or potentially caused by these AI systems. The article is essentially a report on safety statistics and the benefits of AI in driving, without any indication of incidents or hazards. Therefore, this is best classified as Complementary Information, as it provides supporting data and context about AI system performance and safety, rather than reporting an AI Incident or AI Hazard.
Facing legal challenges, Tesla still claims Autopilot is safer

2025-07-24
The Sacramento Bee
Why's our monitor labelling this an incident or hazard?
The article explicitly describes fatal crashes involving Tesla's Autopilot system, an AI-based driver-assist technology, resulting in death and serious injury, which constitutes harm to persons. The lawsuits and federal trial focus on these harms caused by the AI system's use or malfunction. The regulatory actions further underscore the risks and potential for harm due to misleading claims about the system's capabilities. The AI system's development, use, and malfunction are directly linked to these harms, fulfilling the criteria for an AI Incident.
Tesla Faces Sales Pause in California Amid Autonomous Probe

2025-07-23
Ward's Communications Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Tesla's Autopilot and Full Self-Driving features) whose use has been linked to multiple crashes and fatalities, indicating harm to health and safety. The regulatory charges focus on deceptive marketing that misleads consumers, causing financial harm and potentially unsafe reliance on the AI system. The ongoing legal and administrative proceedings are responses to these realized harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly and indirectly led to harm, including injury and violation of consumer rights through misleading claims.
Tesla faces pivotal trial after fatal crash while car was in Autopilot: 'Jury could find that Tesla acted in reckless disregard of human life'

2025-07-24
The Cool Down
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot is an AI system that assists driving by making real-time decisions. The crash directly involved the use of this AI system, leading to fatal injury and harm. The event involves the use and possible malfunction or misuse of the AI system, resulting in harm to persons, which fits the definition of an AI Incident. The legal trial and regulatory scrutiny further confirm the significance of the harm caused by the AI system's involvement.
How a fatal crash put Tesla's Autopilot on trial and raised new questions about driver responsibility

2025-07-24
MoneyControl
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot is an AI system designed to assist driving by detecting obstacles and controlling the vehicle. The crash resulted in a fatality and serious injury, and the plaintiffs argue that the AI system failed to perform as intended, contributing directly to the harm. The legal case centers on the system's malfunction and its role in the incident, fulfilling the criteria for an AI Incident as the AI system's malfunction directly led to harm to persons. The event is not merely a hazard or complementary information but a concrete incident with realized harm.
Commentary: Has Musk lied about self-driving Teslas? California says so

2025-07-24
Los Angeles Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves Tesla's AI-based Autopilot and Full Self-Driving systems, which are AI systems designed to automate driving. The California DMV lawsuit and shareholder lawsuits allege false advertising and misleading claims about the AI system's capabilities, which have led to consumer confusion and potentially dangerous reliance on the system. There have been fatal accidents linked to the use of Tesla's Autopilot, indicating direct harm to health. The legal actions and regulatory responses further confirm the significance of the harm and the AI system's pivotal role. Thus, this is an AI Incident due to realized harm (injuries and fatalities) and violations of legal obligations related to truthful advertising and safety.
List of places where Tesla faces legal action over self-driving cars

2025-07-24
Newsweek
Why's our monitor labelling this an incident or hazard?
Tesla's self-driving technology is an AI system that makes real-time driving decisions. The reported harms include a fatal crash and dangerous sudden braking events linked to the AI system's malfunction or overreliance due to misleading marketing. These harms fall under injury or harm to persons and violations of legal obligations. The legal scrutiny and lawsuits directly relate to the AI system's use and its consequences, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Michael Hiltzik: Has Musk lied about self-driving Teslas? California says so

2025-07-24
ArcaMax
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot and Full Self-Driving features are AI systems designed to provide autonomous driving capabilities. The California DMV lawsuit alleges false advertising that misled consumers about the cars' autonomous capabilities, which is a violation of consumer rights and potentially a breach of legal obligations. Additionally, there have been accidents involving Tesla vehicles operating with Autopilot engaged, resulting in fatalities and injuries, which are direct harms to people. The ongoing legal cases and settlements further confirm harm linked to the AI system's use. The article details realized harms and legal consequences stemming from the AI system's use and marketing, fitting the definition of an AI Incident rather than a hazard or complementary information.
Tesla Faces One-Month Sales Ban in California Over Autopilot Claims

2025-07-24
WebProNews
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot and FSD are AI systems designed to assist or automate driving. The DMV's lawsuit centers on misleading claims about these AI systems' capabilities, which have been linked to accidents causing injury and death. This constitutes harm to persons indirectly caused by the AI system's use and the company's misrepresentation. The event describes realized harm and regulatory consequences, fitting the definition of an AI Incident rather than a hazard or complementary information. The focus is on the AI system's use and its role in harm and legal violations, not just potential future harm or general AI news.
Michael Hiltzik: Has Musk lied about self-driving Teslas? California says so

2025-07-24
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
Tesla vehicles use AI systems for autonomous driving features. The lawsuit alleges that Tesla misrepresented the capabilities of these AI systems, which could lead to consumer harm through overreliance on incomplete or inaccurate autonomous driving features. Although no specific incident of physical harm is described, the false advertising about AI capabilities can indirectly lead to harm by misleading users about the safety and autonomy of the vehicles. Therefore, this event concerns the use of AI systems and their misleading claims, which is a violation of consumer rights and potentially a breach of legal obligations. This fits the definition of an AI Incident due to the direct link between AI system use and harm through false advertising and potential safety risks.
Tesla is about to be banned from selling cars in California

2025-07-24
The Independent
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot and Full Self-Driving are AI systems designed for semi-autonomous driving. The DMV's claim that Tesla's marketing misleads consumers about the capabilities of these AI systems, combined with documented incidents including a fatal crash linked to Autopilot failure, shows direct harm to people and violation of regulatory safety standards. The event describes actual harm and regulatory intervention due to AI system use and malfunction, fitting the definition of an AI Incident.
Michael Hiltzik: Has Musk lied about self-driving Teslas? California says so

2025-07-24
Denver Gazette
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot and Full Self-Driving systems are AI systems designed to perform autonomous driving functions. The article details how Tesla has misrepresented these capabilities, leading to consumer misunderstanding and reliance on systems that cannot perform as advertised. This misrepresentation has been linked to accidents causing injury and death, constituting harm to persons. The legal actions and regulatory responses underscore the violation of legal obligations and consumer rights. The AI system's malfunction or overstatement of capabilities has directly or indirectly led to harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Tesla Robotaxis Only Go 20 Miles/Day. Meanwhile Where's Mobileye?

2025-07-25
Forbes
Why's our monitor labelling this an incident or hazard?
Tesla's robotaxis are AI systems operating autonomously or semi-autonomously. The article reports actual safety incidents, including a collision and unsafe maneuvers, indicating direct harm or risk to people and property. These incidents stem from the use and deployment of Tesla's AI system. The article also critiques Tesla's safety data reporting, implying potential underreporting of harm. The Mobileye information is about future plans and readiness, not current harm. Thus, the primary classification is AI Incident due to realized safety-related harms linked to Tesla's AI system.
Tesla autopilot on trial: DMV seeks to suspend the company from doing business in California

2025-07-21
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article describes a legal dispute over Tesla's marketing of its AI-based autopilot and self-driving features. The AI system is explicitly involved as it is the core technology under scrutiny. Although no specific harm or incident is reported, the DMV's allegations highlight the risk that consumers might misunderstand the system's capabilities, leading to misuse and potential accidents. This fits the definition of an AI Hazard, where the development, use, or malfunction of an AI system could plausibly lead to harm. Since no actual harm has been reported yet, it is not an AI Incident. The article is not merely complementary information because it focuses on the potential for harm and regulatory action rather than just updates or responses. Therefore, the classification is AI Hazard.
After its Austin launch, Tesla targets Nevada to expand its robotaxi service

2025-07-23
Exame
Why's our monitor labelling this an incident or hazard?
Tesla's robotaxi service involves AI systems for autonomous driving, which qualifies as an AI system. However, the article does not report any realized harm or incident resulting from the use or malfunction of these AI systems. Instead, it discusses the company's plans and regulatory steps needed to expand the service. This constitutes a plausible future risk scenario where AI systems could lead to harm if not properly regulated, but no harm has yet occurred. Therefore, this event is best classified as an AI Hazard, reflecting the potential for future harm from the deployment of autonomous AI vehicles.
Tesla sued over Autopilot crash that killed a young person in the United States

2025-07-23
O Globo
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot is an AI system designed for autonomous driving. The accident resulted in a fatality, which is a direct harm to a person caused by the alleged malfunction or failure of the AI system. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to injury and death.
Tesla could be banned from selling in California: here's what to know

2025-07-22
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Tesla's AI-based autonomous driving systems and the regulatory accusation that Tesla misled consumers about their capabilities, which can lead to harm through misuse or overreliance. The AI system's use is central to the event, and the harm involves potential injury or harm to people due to misunderstanding the system's limits. The legal proceedings and potential penalties indicate that harm has occurred or is ongoing. Thus, this is an AI Incident involving the use and communication of AI system capabilities leading to consumer harm and regulatory action.
Tesla could be banned from selling cars in California over misleading advertising

2025-07-22
GARAGEM 360
Why's our monitor labelling this an incident or hazard?
Tesla's 'Autopilot' and 'Full Self-Driving' are AI systems involved in vehicle operation. The DMV's investigation and proposed suspension are due to misleading advertising about these AI systems' capabilities, which could cause users to misuse or over-rely on them, potentially leading to accidents or injuries. Although no specific incident of harm is described, the risk of harm due to misunderstanding the AI system's limitations is credible and recognized by regulatory authorities. Hence, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Tesla and the legal risk in California: global leadership under threat

2025-07-23
AutoPapo
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot and FSD are AI systems involved in autonomous driving functions. The legal action concerns alleged false advertising about the capabilities of these AI systems, which is a regulatory and legal issue. There is no indication that the AI systems have caused injury, rights violations, or other harms as defined for an AI Incident. Nor does the article describe a plausible future harm scenario from the AI systems themselves beyond the legal dispute. Therefore, this event is best classified as Complementary Information, as it provides context on governance and legal responses related to AI systems but does not describe an AI Incident or AI Hazard.
California could suspend Tesla car manufacturing and sales in the state; a legal battle is the cause

2025-07-24
El Universal
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot and Full Self-Driving features are AI systems involved in vehicle operation. The DMV alleges that Tesla's marketing misleads consumers into believing the vehicles are fully autonomous, which could lead to misuse and potential harm. Although no specific harm is reported as having occurred, the misleading promotion and the risk of consumer overreliance on these AI systems plausibly could lead to injury or harm to persons. Therefore, this situation constitutes an AI Hazard, as the development and use of these AI systems could plausibly lead to harm if consumers misunderstand their capabilities and misuse them. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on a legal action directly related to AI system use and its potential risks.
California could suspend Tesla car manufacturing and sales in the state

2025-07-23
infobae
Why's our monitor labelling this an incident or hazard?
Tesla's 'Autopilot' and 'Full Self-Driving' are AI systems involved in autonomous vehicle operation. The DMV's lawsuit alleges that Tesla's advertising misrepresents these AI systems' capabilities, potentially leading to misuse or overreliance by drivers, which could cause accidents or injuries. Although the article does not report actual incidents of harm, the concern about misleading claims and the potential for road safety risks constitutes a plausible future harm scenario. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and linked to the AI system's use and public perception.
Tesla sales could be suspended in California: DMV files lawsuit because the vehicles are not fully autonomous as advertised

2025-07-25
EL UNIVERSO
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot and Full Self-Driving systems are AI systems designed to perform autonomous driving tasks. The lawsuit claims that Tesla misrepresents these systems as fully autonomous when they are not, which can mislead consumers into unsafe reliance on the technology. This misrepresentation has led to legal action, indicating harm related to consumer protection and potential safety risks. The event describes an ongoing issue where the AI system's use and promotion have directly led to harm in terms of misleading consumers and possible safety hazards, fitting the definition of an AI Incident rather than a mere hazard or complementary information.
Another blow for Elon Musk: the surprising reason Tesla sales could be banned in California

2025-07-24
LaVanguardia
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot and Full Self-Driving are AI systems providing assisted driving features, not full autonomy. The regulatory concern is that misleading marketing may cause users to overtrust the AI, leading to unsafe situations. Although no incident of harm is reported, the plausible risk of harm due to misunderstanding the AI system's limitations justifies classification as an AI Hazard rather than an AI Incident. The event is not merely complementary information or unrelated, as it directly concerns potential safety risks from AI system misuse.
California could temporarily suspend Tesla car manufacturing and sales for this reason

2025-07-24
EL IMPARCIAL
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot and Full Self-Driving features are AI systems involved in autonomous vehicle operation. The regulatory complaint centers on misleading advertising that may cause users to overestimate the system's capabilities, potentially leading to unsafe use and road safety hazards. Although no specific accidents or injuries are reported in the article, the potential for harm due to misuse or misunderstanding of AI system capabilities is credible and significant. The legal action to suspend sales and manufacturing licenses is a preventive measure addressing this plausible risk. Since no realized harm is described, this does not qualify as an AI Incident but rather as an AI Hazard.
California could suspend Tesla car manufacturing and sales

2025-07-24
Horacero
Why's our monitor labelling this an incident or hazard?
The event centers on the use and promotion of AI systems (Autopilot and FSD) in Tesla vehicles. Although no direct harm is reported, the misleading marketing could lead to misuse or overreliance on these AI systems, plausibly causing harm such as accidents or injury. Therefore, this situation represents an AI Hazard, as the AI system's use could plausibly lead to harm due to consumer misunderstanding and misuse. It is not an AI Incident because no actual harm has been reported yet, and it is not merely complementary information or unrelated news.
Experts criticize Tesla's Autopilot, putting Elon Musk on the ropes

2025-07-26
Urban Tecno
Why's our monitor labelling this an incident or hazard?
Tesla Autopilot is an AI system providing partial autonomous driving capabilities. The article reports at least 13 fatal accidents where Autopilot contributed to the incidents, including specific cases with deaths and injuries. The involvement of the AI system in causing physical harm is direct and well documented, with regulatory investigations and legal actions underway. Therefore, this event qualifies as an AI Incident due to injury and harm to persons caused by the AI system's malfunction or misuse.