
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Tesla's Full Self-Driving (FSD) AI system faces global scrutiny following reports of misuse, regulatory warnings, and investigations into crashes, including fatal ones. Incidents include illegal FSD activation in Korea, misleading promotion of the system to vision-impaired drivers, and an NHTSA probe into FSD's safety in adverse conditions. However, FSD has also prevented harm in some cases.[AI generated]
Why is our monitor labelling this an incident or hazard?
The AI system involved is Tesla's FSD, an AI-based driver-assistance system. The event stems from the use and promotion of the AI system in a context where the user cannot fulfil the required driver responsibilities because of deteriorating eyesight. Tesla's amplification of a testimonial endorsing FSD for a vision-impaired driver creates a dangerous misconception about the system's capabilities and increases the risk of harm. This relates directly to harm to persons (a), since misuse or misunderstanding of the system can lead to accidents. The event also references ongoing investigations and lawsuits concerning FSD safety, reinforcing the link to actual or potential harm. This is therefore an AI Incident: the system's use and promotion in unsafe conditions pose a realized or imminent risk of injury.[AI generated]