Intoxicated Driver Relies on Tesla Autopilot, Car Stops on Florida Highway

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A 37-year-old woman in Florida, heavily intoxicated, activated her Tesla's Autopilot to drive home. She fell asleep, and the AI system eventually stopped the car on the highway after detecting her unresponsiveness. The incident highlights both the limitations and the safety interventions of Tesla's AI, raising concerns about misuse and public safety. [AI generated]

Why's our monitor labelling this an incident or hazard?

The Tesla Autopilot system is an AI system that monitors driver attention and vehicle control, intervening autonomously to stop the car when the driver is incapacitated. The event describes the use of this AI system and its autonomous stopping of the vehicle to prevent harm arising from the driver's intoxication. Although no injury or accident occurred, the AI system's intervention was pivotal in averting potential harm, which qualifies this as an AI Incident: harm to a person or group was directly prevented by the AI system's action. The event is not merely a potential hazard or complementary information but a real-world incident involving AI system use and a safety intervention. [AI generated]
AI principles
Safety, Accountability

Industries
Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (injury)

Severity
AI incident

AI system task
Event/anomaly detection, Goal-driven organisation


Articles about this incident or hazard

Drunk, she falls asleep in her Tesla in Autopilot mode: the car stops on the highway

2026-05-03
Ouest France
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot is an AI system that monitors driver attention and controls the vehicle autonomously. In this case, the driver's intoxication and unconsciousness could have led to a serious accident (harm to persons), but the AI system intervened by stopping the car safely. Since no harm occurred and the AI system's intervention prevented an incident, this event represents a plausible risk scenario where AI safety features mitigated potential harm. Therefore, it qualifies as an AI Hazard, illustrating a near-miss situation where AI prevented injury or damage.
"She thought she could get home safely": Kimberley, drunk, falls asleep in her Tesla in Autopilot mode, and the car stops in the middle of the highway!

2026-05-03
Sudinfo.be
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot system is an AI system that monitors driver attention and vehicle control, intervening autonomously to stop the car when the driver is incapacitated. The event describes the use of this AI system and its autonomous stopping of the vehicle to prevent harm arising from the driver's intoxication. Although no injury or accident occurred, the AI system's intervention was pivotal in averting potential harm, which qualifies this as an AI Incident: harm to a person or group was directly prevented by the AI system's action. The event is not merely a potential hazard or complementary information but a real-world incident involving AI system use and a safety intervention.
While intoxicated, she activates her Tesla's Autopilot

2026-05-04
Capital.fr
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot is an AI system involved in the event. The woman's intoxicated state and reliance on Autopilot led to a dangerous situation on a highway, with the vehicle stopping in the middle lane after the driver failed to respond to alerts. This shows the AI system's malfunction or limitation in handling a non-vigilant driver, which directly contributed to a hazardous condition. Although no accident or injury occurred, the event involved a direct risk to safety and demonstrates harm or potential harm to persons, fitting the definition of an AI Incident. The article also discusses the regulatory context, but its primary focus is the incident itself, not merely complementary information or future hazards.
Florida: drunk, she falls asleep at the wheel of a Tesla and Autopilot leaves her on the highway

2026-05-04
Auto Plus
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot is an AI system designed to assist driving by monitoring driver attention and controlling the vehicle. Here, the system's use by an intoxicated driver who relied on it to drive home led to a hazardous situation when the AI disengaged and the car stopped unsafely on a highway lane. This directly endangered the driver and others, fulfilling the criteria for harm to persons and communities. The AI system's failure to safely manage the vehicle's stopping location is a malfunction contributing to the incident. Therefore, this qualifies as an AI Incident due to realized harm linked to AI system use and malfunction.
Drunk at the wheel of her Tesla on Autopilot, she falls asleep on the highway: the car stops on its own

2026-05-04
Tribunal Du Net
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot is an AI system involved in real-time monitoring and control of the vehicle. The event involves the use of this AI system to manage a dangerous situation caused by the driver's intoxication and unconsciousness. The AI system's emergency protocol directly prevented potential injury or death, which is a harm to health that was averted. The incident also raises concerns about misuse and misunderstanding of AI capabilities, which are relevant to AI harm tracking. Therefore, this is an AI Incident because the AI system's use directly relates to a significant safety event involving potential harm to a person.