Tesla FSD Under Scrutiny: Safety Risks, Misuse, and Regulatory Investigations

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Tesla's Full Self-Driving (FSD) AI system faces global scrutiny following reports of misuse, regulatory warnings, and investigations into crashes, including fatal ones. Reported events include illegal FSD activation in Korea, promotion of the system to a vision-impaired driver, and NHTSA's probe into FSD's safety in adverse conditions. In some cases, however, FSD has also been credited with preventing harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system involved is Tesla's FSD, an AI-based driver-assist system. The event stems from the use and promotion of the AI system in a context where the user is not capable of fulfilling the required driver responsibilities due to deteriorating eyesight. Tesla's amplification of a testimonial endorsing FSD for a vision-impaired driver creates a dangerous misconception about the system's capabilities, increasing the risk of harm. This directly relates to harm to persons (a), as the system's misuse or misunderstanding can lead to accidents. The event also references ongoing investigations and lawsuits related to FSD safety, reinforcing the link to actual or potential harm. Therefore, this is an AI Incident due to the realized or imminent risk of injury caused by the AI system's use and promotion in unsafe conditions.[AI generated]
AI principles
Safety
Transparency & explainability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers
General public

Harm types
Physical (death)
Physical (injury)

Severity
AI incident

AI system task
Recognition/object detection
Forecasting/prediction


Articles about this incident or hazard

"What if the car gets hacked?": Netizens react after Elon Musk claims Tesla's "AI self-driving" will be ten times safer than human driving

2026-03-30
Sportskeeda
Why's our monitor labelling this an incident or hazard?
The article centers on Tesla's AI self-driving system and related safety claims. There is no report of actual harm or malfunction caused by the AI system, so it is not an AI Incident. While concerns about hacking imply a plausible future risk, no specific event or credible warning of imminent harm is described, so it does not meet the threshold for an AI Hazard. The legal dispute and regulatory actions represent governance responses to prior concerns about misleading advertising and safety claims, fitting the definition of Complementary Information. Hence, the article is best classified as Complementary Information.

Tesla carelessly promotes 'Full Self-Driving' for driver losing his eyesight

2026-03-29
Electrek
Why's our monitor labelling this an incident or hazard?
The AI system involved is Tesla's FSD, an AI-based driver-assist system. The event stems from the use and promotion of the AI system in a context where the user is not capable of fulfilling the required driver responsibilities due to deteriorating eyesight. Tesla's amplification of a testimonial endorsing FSD for a vision-impaired driver creates a dangerous misconception about the system's capabilities, increasing the risk of harm. This directly relates to harm to persons (a), as the system's misuse or misunderstanding can lead to accidents. The event also references ongoing investigations and lawsuits related to FSD safety, reinforcing the link to actual or potential harm. Therefore, this is an AI Incident due to the realized or imminent risk of injury caused by the AI system's use and promotion in unsafe conditions.

Tesla FSD mocks BMW human driver: Saves pedestrian from near miss

2026-03-30
TESLARATI
Why's our monitor labelling this an incident or hazard?
The Tesla FSD is an AI system that uses neural networks to interpret human behavioral cues and make driving decisions. The event involves the AI system's use in real-world driving where it prevented a near miss with a pedestrian, thus directly contributing to harm avoidance. Since the AI system's involvement led to a safety benefit and prevented injury, this qualifies as an AI Incident related to injury or harm to a person (a).

Tesla Faces Backlash Over Cybertruck FSD Testimonial From Vision-Impaired Drivers

2026-03-30
Tech Times
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system providing driver-assistance. The viral video and promotional content misrepresent its capabilities, leading to overreliance by drivers with impaired vision, which is a misuse of the AI system. This misuse has already resulted in reported crashes and traffic violations, with regulatory investigations underway. The harm to driver safety and public safety is direct and significant. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly or indirectly led to harm to persons.

Gov't warns Korean Tesla owners that illegally enabling FSD is a criminal offense

2026-03-31
JoongAng Ilbo (중앙일보)
Why's our monitor labelling this an incident or hazard?
The article describes a situation where an AI system (Tesla's FSD) is being illegally activated, which is a misuse of the AI system. The government warns that such unauthorized use violates safety laws and could lead to criminal penalties. While no actual harm or incident has been reported, the unauthorized activation of an AI system that controls vehicle operation could plausibly lead to harm or legal violations. Therefore, this event qualifies as an AI Hazard because it highlights a credible risk of harm stemming from misuse of an AI system, but no realized harm or incident is described.

Tesla's 3.2 Million Cars Under the Microscope: Inside NHTSA's Largest-Ever FSD Investigation

2026-03-30
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Tesla's Full Self-Driving software) whose use in real-world conditions has been linked to multiple crashes, including fatal ones. The investigation by NHTSA is a direct response to these harms, indicating that the AI system's operation may have contributed to injury or death. This meets the criteria for an AI Incident, as the AI system's use has directly or indirectly led to harm to persons. The event is not merely a potential risk or a complementary update but a formal probe into realized harms associated with the AI system's operation.

Elon Musk Highlights Starship as Planet-Colonizer and Tesla FSD Safety Milestone

2026-03-29
International Business Times AU
Why's our monitor labelling this an incident or hazard?
The content focuses on achievements and future plans related to AI systems (Tesla FSD and Starship's autonomous capabilities) without describing any actual harm or incidents. There is no mention of accidents, malfunctions, or violations caused by these AI systems. The article serves as an informational update and contextual background on these technologies and their potential impact, fitting the definition of Complementary Information rather than an Incident or Hazard.