Automated Braking Systems Fail in Tesla, Lynk & Co 02 and Xiaomi SU7, Raising Safety Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Three EV models (a Tesla using single-pedal mode, the Lynk & Co 02, and the Xiaomi SU7) experienced malfunctions of their AI-driven automatic braking systems. Tesla's single-pedal braking failed on a highway; Lynk & Co's system applied the brakes erroneously, possibly due to sensor, software, or lighting errors; and the Xiaomi SU7's secondary braking system misjudged a primary brake failure. All three malfunctions pose direct safety risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The automatic braking system is an AI system that makes real-time decisions based on sensor input and software processing. The article describes incidents where the system triggers braking without a valid reason, which can cause safety hazards and potential injury. This fits the definition of an AI Incident because the AI system's malfunction or erroneous output has directly led to harm or risk of harm to persons. The article focuses on the causes and implications of this malfunction, indicating realized or imminent harm rather than just potential future harm or general information.[AI generated]
AI principles
Safety · Robustness & digital security · Accountability · Transparency & explainability

Industries
Mobility and autonomous vehicles · Consumer products

Affected stakeholders
Consumers · General public

Harm types
Physical (injury) · Physical (death) · Economic/Property · Reputational

Severity
AI incident

Business function:
Other

AI system task:
Recognition/object detection · Event/anomaly detection · Forecasting/prediction · Goal-driven organisation


Articles about this incident or hazard


What could be causing the Lynk & Co 02's automatic brakes to trigger for no reason? - Auto Channel - Hexun (和讯网)

2024-05-21
Hexun (和讯网)
Why's our monitor labelling this an incident or hazard?
The automatic braking system is an AI system that makes real-time decisions based on sensor input and software processing. The article describes incidents where the system triggers braking without a valid reason, which can cause safety hazards and potential injury. This fits the definition of an AI Incident because the AI system's malfunction or erroneous output has directly led to harm or risk of harm to persons. The article focuses on the causes and implications of this malfunction, indicating realized or imminent harm rather than just potential future harm or general information.

Too funny! A Xiaomi SU7 less than a month old has broken down, and the comments section is blowing up!

2024-05-19
163.com
Why's our monitor labelling this an incident or hazard?
The braking system malfunction in the Xiaomi SU7 involves automated decision-making likely powered by AI or advanced control algorithms, as indicated by the 'misjudgment' of brake failure and multiple system fault alerts. This malfunction directly endangered the driver's safety, constituting harm to health. The event is not merely a potential hazard but a realized incident with direct safety implications. Hence, it meets the criteria for an AI Incident under the framework, as the AI system's malfunction directly led to a significant safety risk.

A Tesla owner claims to have experienced brake failure: the car ran a red light and only stopped after the driver slammed on the brakes

2024-05-21
163.com
Why's our monitor labelling this an incident or hazard?
The incident involves a Tesla vehicle, which is known to use AI systems for driving assistance and brake control. The reported brake failure and inability to stop the car as expected indicate a malfunction of the AI system controlling or assisting braking. The driver ran a red light due to this failure, which is a direct safety hazard and potential harm to the driver and others. The AI system's malfunction directly led to this dangerous situation, meeting the criteria for an AI Incident involving harm to persons.

Owner claims to have experienced Tesla brake failure, warns fellow drivers to stop immediately if a brake warning appears - Tesla Electric Vehicles - cnBeta.COM

2024-05-21
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system, as Tesla's single-pedal mode relies on AI and automated control for braking. The malfunction of this AI system caused the brakes to fail, creating a direct safety hazard and risk of harm. The event is a clear case of an AI Incident because the AI system's malfunction directly led to a dangerous situation with potential harm to people and property.