Tesla Full Self-Driving Beta Causes Recorded Collision in San Jose

A Tesla using the Full Self-Driving (FSD) Beta AI system was recorded crashing into a bike lane pole in San Jose. The incident, captured by YouTuber AI Addict, highlights the AI's failure to detect obstacles, resulting in property damage and raising concerns about the safety and reliability of Tesla's autonomous driving technology.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Tesla FSD Beta is an AI system designed for autonomous driving. The vehicle running into a pole is a direct incident caused by the AI system's malfunction or erroneous driving decisions. Although the damage is minor, it constitutes physical harm to property and a potential safety risk. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers

Harm types
Economic/Property; Reputational; Public interest

Severity
AI incident

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard

Tesla vehicle with FSD Beta reportedly runs into pole

2022-02-06
Bangalore Mirror
Why's our monitor labelling this an incident or hazard?
The Tesla FSD Beta is an AI system designed for autonomous driving. The vehicle running into a pole is a direct incident caused by the AI system's malfunction or erroneous driving decisions. Although the damage is minor, it constitutes physical harm to property and a potential safety risk. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.
Tesla car in 'Full Self-Driving' mode hits a bollard on camera

2022-02-08
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system controlling vehicle navigation and decision-making. The video evidence shows the AI system directly causing a collision with a physical object, which is a harm to property and a safety risk to people. The system's failure to detect and avoid the bollard and other unsafe behaviors demonstrates malfunction or misuse of the AI system leading to harm. The recall due to stop sign violations further supports the presence of harm caused by the AI system's operation. Hence, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's malfunction during use.
Watch Tesla Model Y hit pole: first FSD accident caught on video?

2022-02-06
Motor1.com
Why's our monitor labelling this an incident or hazard?
The Tesla FSD Beta is an AI system involved in autonomous driving. The accident, in which the car hit a pole the system failed to recognize, constitutes a malfunction leading to harm (property damage). This fits the definition of an AI Incident because the AI system's malfunction directly led to harm to property. The absence of injuries does not negate the classification, as property damage is a recognized harm. Therefore, this event qualifies as an AI Incident.
Tesla on Full Self Driving Beta mode crashes into pole, here's how

2022-02-06
Zee News
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self Driving Beta is an AI system designed to autonomously navigate and control the vehicle. The reported crash into a pole, with video evidence showing the AI failing to recognize the obstacle, indicates a malfunction of the AI system during operation. This malfunction directly led to physical harm to property (vehicle damage and pole impact). Despite the minor nature of the accident and the driver's responsibility, the AI system's failure was a contributing factor to the harm. Hence, this event meets the criteria for an AI Incident as the AI system's malfunction directly caused harm.
YouTuber catches the first self-driving Tesla car accident evidence on video as it happened

2022-02-05
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving Beta software is an AI system performing autonomous driving tasks. The accident was caused by the AI system's failure to detect an obstacle and react appropriately, which directly led to a collision causing property damage. Although no injury occurred, the event involves harm to property and a safety risk to occupants, meeting the definition of an AI Incident. The driver's intervention was too late to prevent the harm, indicating a malfunction or limitation of the AI system in real-world conditions.
Tesla Full Self-Driving Beta runs into a pole in what could be the first FSD accident caught on video

2022-02-05
Electrek
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving Beta is an AI system enabling autonomous driving. The accident was caused during the use of this AI system, which directly led to harm to property. Although the damage was minor, it qualifies as an AI Incident because the AI system's malfunction or limitations contributed to the crash. The event is not merely a potential risk but a realized harm, thus it is an AI Incident rather than a hazard or complementary information.
Tesla FSD Beta Caught Hitting Something On Camera For The First Time

2022-02-07
Jalopnik
Why's our monitor labelling this an incident or hazard?
The Tesla FSD Beta is an AI system that autonomously controls driving functions. The video evidence shows the system failing to avoid a collision and engaging in dangerous driving behaviors, which directly caused damage to property (the bollard and car) and posed risks to public safety. The AI system's malfunction and use in this context meet the criteria for an AI Incident because harm or risk of harm to people and property has occurred due to the AI system's operation.
This Tesla Self-Driving Fail Video Is Exactly What's Wrong With Tesla Stans

2022-02-09
The Drive
Why's our monitor labelling this an incident or hazard?
The article describes a specific event where Tesla's AI-powered self-driving system malfunctioned by veering dangerously toward a cyclist, requiring human intervention to avoid a crash. This is a direct example of an AI system's malfunction leading to potential physical harm, fulfilling the criteria for an AI Incident. The involvement of the AI system is explicit, and the harm is direct and materialized (near collision). The discussion about the system's safety and regulatory scrutiny further supports the classification as an AI Incident rather than a hazard or complementary information.
Watch Tesla Full Self-Driving Beta's software operate without 'rolling stops'

2022-02-05
TESLARATI
Why's our monitor labelling this an incident or hazard?
An AI system (Tesla's Full Self-Driving Beta) is explicitly involved, and the event concerns its use and modification. However, no actual harm (injury, accident, or violation) has occurred; the NHTSA's action is precautionary to prevent potential harm. Therefore, this event represents a plausible risk of harm that has been mitigated before any incident occurred. It fits the definition of an AI Hazard rather than an AI Incident or Complementary Information, as the update and recall are responses to a potential safety hazard posed by the AI system's behavior.
Tesla vehicle with FSD Beta reportedly runs into pole

2022-02-05
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The Tesla FSD Beta is an AI system designed for autonomous driving. The vehicle's collision with a pole indicates a malfunction or failure of the AI system to safely navigate, directly causing harm to property. Although the damage was minor, the incident qualifies as an AI Incident because the AI system's malfunction directly led to harm. The driver's responsibility and vigilance do not negate the AI system's role in the accident.
Finally, video evidence of a Tesla crashing in 'Full Self-Driving' mode

2022-02-07
Input
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving Beta is an AI system designed for autonomous vehicle operation. The video evidence shows the AI system malfunctioning and causing a collision, which is direct harm to property. The article also references a fatal accident linked to Tesla's Autopilot, reinforcing the risk of harm from these AI systems. Since the AI system's malfunction directly led to harm (property damage) in this event, it meets the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's involvement is explicit and central to the event.
Tesla electric car got into an accident due to an advanced autopilot failure

2022-02-08
SciTech Europa
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system that autonomously navigates and makes driving decisions. The accident occurred due to the AI system's failure to correctly execute a turn, directly causing a collision and damage to the vehicle. The driver's inability to react in time further highlights the AI system's role in the incident. The recall of over 50,000 vehicles due to a critical FSD error that could cause failure to stop at stop signs further supports the presence of AI-related harm or risk. Since actual harm (vehicle damage) occurred and the AI system's malfunction was the direct cause, this event meets the criteria for an AI Incident.