Leaked Data Reveals Tesla Concealed Thousands of AI-Driven Autopilot Accidents

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Leaked internal data shows Tesla knowingly concealed thousands of accidents, including fatalities, caused by its AI-based Autopilot system. Despite being aware of recurring malfunctions—such as sudden acceleration and braking—Tesla continued public road testing. Regulatory investigations and lawsuits have followed, highlighting significant harm from the AI system's failures.[AI generated]

Why's our monitor labelling this an incident or hazard?

The autonomous driving system is an AI system as it uses AI to perceive and make driving decisions. The event involves the use and malfunction of this AI system, which directly led to physical harm and fatalities, fulfilling the criteria for an AI Incident. The concealment of these incidents and the court ruling demonstrate that harm has materialized, not just potential harm. Therefore, this event is classified as an AI Incident.[AI generated]
AI principles
Safety
Transparency & explainability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers
General public

Harm types
Physical (death)
Physical (injury)

Severity
AI incident

Business function
Research and development

AI system task
Recognition/object detection
Reasoning with knowledge structures/planning


Articles about this incident or hazard

Leaked data shows: Tesla concealed hundreds of accidents involving self-driving cars

2026-04-20
Basler Zeitung
Why's our monitor labelling this an incident or hazard?
The autonomous driving system is an AI system as it uses AI to perceive and make driving decisions. The event involves the use and malfunction of this AI system, which directly led to physical harm and fatalities, fulfilling the criteria for an AI Incident. The concealment of these incidents and the court ruling demonstrate that harm has materialized, not just potential harm. Therefore, this event is classified as an AI Incident.
How Tesla concealed accidents in order to test its Autopilot

2026-04-20
SRF News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Tesla's Autopilot) whose malfunction directly caused physical harm and fatalities, fulfilling the criteria for an AI Incident. The concealment of incidents and the system's failure to act appropriately are direct causes of harm. The legal rulings and investigations further confirm the realized harm and the AI system's pivotal role. Therefore, this is classified as an AI Incident.
Tesla Autopilot: fatal accidents were known internally

2026-04-20
20 Minuten
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot is an AI system involved in autonomous driving decisions. The malfunction of this AI system directly led to a fatal accident, causing death and injury, which is a clear harm to persons. The company's prior knowledge of the malfunction and failure to warn users further implicates the AI system's use and development in the harm. Therefore, this event meets the definition of an AI Incident due to direct harm caused by the AI system's malfunction and use.
Data leak indicates Tesla concealed thousands of Autopilot accidents

2026-04-20
www.Bluewin.ch
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot is an AI system used for autonomous driving. The leaked data documents thousands of accidents and fatalities caused by the system's malfunctions, including AI 'hallucinations' leading to dangerous behavior. This constitutes direct harm to people (injury and death), meeting the definition of an AI Incident. The event is not merely a potential hazard or complementary information but a clear case of AI-related harm. Therefore, the classification is AI Incident.
Revealed! Tesla conceals thousands of Autopilot accidents

2026-04-20
Heute.at
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot is an AI system involved in vehicle control. The reported accidents and fatalities are direct harms caused by the AI system's malfunctioning. The article details realized harm to people (fatalities and injuries), legal rulings, and regulatory investigations, all stemming from the AI system's use and malfunction. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction and use.
Carmaker: Tesla faces mass lawsuit over autonomous driving

2026-04-20
Handelsblatt
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, Tesla's Full Self-Driving software, which is central to the dispute. The harm arises from the use and deployment of this AI system, where customers paid for autonomous driving capabilities that are not delivered due to hardware limitations and Tesla's failure to provide upgrades or clear information. This leads to a breach of consumer rights and financial harm, fulfilling the criteria for an AI Incident. The mass lawsuit and widespread customer impact further underscore the significance of the harm. The event is not merely a potential risk or complementary information but a realized harm caused by the AI system's deployment and associated corporate practices.
Leaked data shows: Tesla concealed hundreds of accidents involving self-driving cars

2026-04-20
Der Bund
Why's our monitor labelling this an incident or hazard?
The autonomous driving system is an AI system as it uses AI to interpret the environment and make driving decisions. The event involves the use and malfunction of this AI system, which directly led to multiple accidents causing injury and death, fulfilling the criteria for an AI Incident. The company's knowledge of these issues and continued deployment on public roads further supports the classification. The legal case and compensation awarded confirm the harm caused. Hence, this is not merely a hazard or complementary information but a confirmed AI Incident.
Tesla concealed Autopilot accidents: authorities are investigating

2026-04-21
Nau
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot is an AI system that autonomously controls vehicle driving functions. The article reports thousands of accidents caused by the Autopilot's malfunctions, including fatal crashes, which constitute direct harm to persons (harm category a). The concealment of these incidents and failure to address known risks indicate misuse and failure in the AI system's deployment. The involvement of whistleblowers and legal actions further confirm the direct link between the AI system's malfunction and realized harm. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's malfunction and use.