Tesla Autopilot and FSD AI Systems Under Scrutiny After Phantom Braking and Recorded Crashes


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Tesla's AI-driven Autopilot and FSD systems have faced increased scrutiny after a surge in 'phantom braking' complaints and the first recorded FSD Beta crashes, where vehicles struck roadside barriers. U.S. senators and regulators are investigating, citing safety risks and harm linked to Tesla's vision-only AI approach and system malfunctions.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly: Tesla's Autopilot and FSD, which use AI for autonomous driving decisions. The sudden phantom braking is a malfunction or failure in the AI system's perception and decision-making, directly causing safety hazards and physical harm risks to vehicle occupants and others on the road. The increase in complaints and regulatory investigation confirm that harm is occurring. The AI system's design change (removal of radar and reliance on vision-only AI) is causally linked to the increased incidents. This meets the criteria for an AI Incident as the AI system's malfunction has directly led to harm or significant safety risks.[AI generated]
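The causal claim above — that removing radar and relying on vision alone increases false-positive braking — can be illustrated with a minimal, hypothetical sketch. This is not Tesla's actual logic; the function name, confidence values, and threshold are invented for illustration only. The point is that a second sensor modality provides a cross-check that can veto a spurious camera detection:

```python
# Hypothetical sketch (not Tesla's actual control logic): why dropping a
# second sensor modality can raise false-positive ("phantom") braking.

def should_brake(vision_conf, radar_conf=None, threshold=0.8):
    """Decide whether to trigger emergency braking for a detected obstacle.

    vision_conf: camera detector's confidence that an obstacle is ahead.
    radar_conf:  radar confirmation confidence, or None if no radar fitted.
    With radar, braking requires both modalities to agree (cross-check);
    a vision-only system must act on the camera signal alone.
    """
    if radar_conf is None:  # vision-only: no cross-check possible
        return vision_conf >= threshold
    return vision_conf >= threshold and radar_conf >= threshold

# A shadow or overpass may fool the camera (high vision confidence)
# while radar detects no solid object ahead.
vision_only = should_brake(0.9)            # brakes on the false positive
fused = should_brake(0.9, radar_conf=0.1)  # radar vetoes the phantom brake
```

Under these assumed numbers, the vision-only configuration brakes on the spurious detection while the fused configuration does not, which mirrors the pattern the complaints describe.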
AI principles
Accountability; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing; Democracy & human autonomy

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers

Harm types
Physical (injury); Economic/Property; Reputational; Public interest

Severity
AI incident

Business function:
Other

AI system task:
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


107 Complaints in 3 Months: Phantom Braking Surges After Tesla Removes Radar (36Kr)

2022-02-08
36Kr (internet startup media)

First Recorded Tesla FSD Beta Crash: Vehicle Hits Guardrail!

2022-02-07
MyDrivers (驱动之家)
Why's our monitor labelling this an incident or hazard?
The Tesla FSD Beta is an AI system designed for autonomous driving. The incident involved the AI system controlling the vehicle and causing it to deviate from the intended path, resulting in a collision with roadside barriers. This is a direct harm to property and a safety hazard, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm caused by the AI system's malfunction during use.

First Recorded Tesla FSD Beta Crash: Vehicle Hits Roadside Bollard

2022-02-06
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The Tesla FSD Beta is an AI system designed for autonomous driving. The incident involves the use of this AI system, which directly caused a collision, thus fulfilling the criteria for an AI Incident due to harm to property. Although the damage was minor, the AI system's failure to prevent the crash is a direct cause of harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

U.S. Senators Express Concerns Over Tesla's Autonomous Driving System

2022-02-08
China.com Tech
Why's our monitor labelling this an incident or hazard?
Tesla's autonomous driving system is an AI system involved in vehicle operation. The senators' concerns and references to past crashes and safety defects indicate that the AI system's use has led to or contributed to harm or risk of harm to drivers and road users. The event involves the use and potential malfunction of the AI system causing safety hazards, meeting the criteria for an AI Incident. The call for regulatory investigation and enforcement further supports the presence of realized or ongoing harm rather than just potential future harm.

Model 3/Y Hardest Hit: Tesla Owners Report Dozens of "Phantom Braking" Incidents

2022-02-07
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's autonomous driving system with AP and FSD) whose malfunction (false positive collision detection causing phantom braking) has directly led to safety risks and potential harm to vehicle occupants and other road users. The complaints and reported incidents indicate that harm or risk of harm has materialized, qualifying this as an AI Incident under the framework. The AI system's malfunction is the pivotal factor causing the harm, and the event is not merely a potential hazard or complementary information but a realized safety issue.

Tesla Just Recalled 810,000 Vehicles! Another Embarrassment for Musk...

2022-02-07
163.com
Why's our monitor labelling this an incident or hazard?
The Tesla FSD Beta system is an AI system involved in autonomous driving decisions. The reported collision and the 'phantom braking' complaints indicate malfunctions or errors in the AI system's operation, which have directly or indirectly caused harm or risk to vehicle occupants and other road users. The official recall due to software errors further confirms the AI system's role in safety-related issues. Since actual incidents and safety risks have materialized, this event meets the criteria for an AI Incident rather than a hazard or complementary information.