Tesla Full Self-Driving AI Fails to Yield to Pedestrian at Crosswalk


Multiple reports highlight a Tesla vehicle using Full Self-Driving (FSD) Beta software in San Francisco failing to yield to a pedestrian at a marked crosswalk, despite detecting her presence. The AI system's malfunction violated traffic laws and posed a direct safety risk, raising concerns about the reliability of Tesla's autonomous driving technology. [AI generated]

Why's our monitor labelling this an incident or hazard?

Tesla's FSD is an AI system designed for autonomous driving. The system's failure to respond to a pedestrian at a crosswalk is a malfunction during its use, which directly risks injury or harm to a person. Although the video does not specify if an accident occurred, the described behavior constitutes an AI Incident due to the direct potential for harm resulting from the AI system's malfunction. [AI generated]
AI principles
Safety, Robustness & digital security, Accountability, Transparency & explainability, Human wellbeing

Industries
Mobility and autonomous vehicles, Consumer products

Affected stakeholders
General public

Harm types
Physical (injury), Reputational

Severity
AI incident

AI system task
Recognition/object detection, Forecasting/prediction, Goal-driven organisation

In other databases

Articles about this incident or hazard


WATCH: As Tesla FSD Chooses To IGNORE Pedestrian At Road Crossing

2023-05-18
AutoSpies.com
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous driving. The system's failure to respond to a pedestrian at a crosswalk is a malfunction during its use, which directly risks injury or harm to a person. Although the video does not specify if an accident occurred, the described behavior constitutes an AI Incident due to the direct potential for harm resulting from the AI system's malfunction.

This is why Elon Musk is right about Tesla's 'ChatGPT moment'

2023-05-17
TechRadar
Why's our monitor labelling this an incident or hazard?
The article centers on Tesla's AI system for autonomous driving and its potential to reach full autonomy soon. It mentions a recall related to safety risks but clarifies that the issue is being mitigated through an over-the-air software update, with no reported accidents or injuries. Therefore, no realized harm has occurred, and the article primarily discusses the plausible future impact and challenges of AI in self-driving cars. This fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to harm (e.g., accidents due to AI misinterpretation), but no incident has yet occurred. It is not Complementary Information because the recall and update are part of the current situation rather than a response to a past incident with harm. It is not an AI Incident because no harm has materialized. It is not Unrelated because the AI system and its potential impacts are central to the article.

Self-driving Tesla doesn't yield for pedestrian. Tesla fan cheers.

2023-05-17
Mashable
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving Beta is an AI system controlling vehicle behavior. The video shows the AI system detecting a pedestrian but choosing not to yield, violating traffic laws designed to protect pedestrian safety. This behavior is a direct use of the AI system leading to a traffic violation and a safety hazard. The event involves realized misuse or malfunction of the AI system's decision-making, which could cause injury or harm to pedestrians if repeated. The presence of the AI system and its role in the incident is explicit and central. Although no injury occurred, the violation and risk are sufficient to classify this as an AI Incident rather than a mere hazard or complementary information. The social and legal concerns raised further support the significance of the incident.

Tesla's "Full Self-Driving" sees pedestrian, chooses not to slow down

2023-05-16
Ars Technica
Why's our monitor labelling this an incident or hazard?
The Tesla FSD Beta software is an AI system involved in autonomous driving decisions. The video evidence and regulatory concerns show that the AI system's behavior leads to unsafe driving around pedestrians, directly risking injury or harm to people, which fits the definition of an AI Incident. The system's failure to comply with traffic laws and the resulting investigations further support this classification. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard or Complementary Information.

No, It's Not Amazing That A Tesla Using FSD Blew Through A Crosswalk

2023-05-18
Jalopnik
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system involved in autonomous driving decisions. The event shows the AI system's use leading to a failure to comply with traffic laws and potentially endangering pedestrian safety, which constitutes indirect harm to health and safety. Since the incident has already occurred and involves realized risk to pedestrian safety, it qualifies as an AI Incident rather than a hazard or complementary information.

The Tesla in

2023-05-18
Carscoops
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system involved in autonomous driving decisions. Its failure to yield to a pedestrian directly relates to the use and malfunction of the AI system, leading to a safety hazard that could cause injury or harm to a person. Although no injury occurred, the incident demonstrates a direct risk to pedestrian safety and a violation of traffic laws, constituting an AI Incident due to the realized unsafe behavior and potential harm.