AAA Tests Reveal AI Driving Assistance Systems Fail to Prevent Collisions

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AAA testing of AI-powered driver assistance systems in vehicles from Subaru, Hyundai, and Tesla found that these systems frequently failed to avoid collisions with cyclists and oncoming cars in controlled tests. This inconsistent performance highlights ongoing safety risks and the potential for harm from current AI driving technologies.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Subaru 'EyeSight' system is an AI-based driver assistance technology that is supposed to detect obstacles and initiate braking to prevent collisions. Its failure to detect the cyclist and prevent a collision in 33% of test runs indicates a malfunction or inadequacy in the AI system's performance that led directly to harm (collisions). Although the harm was simulated with a dummy, the tests demonstrate a real risk of injury to people in actual use. This therefore qualifies as an AI Incident due to the direct link between the AI system's malfunction and the risk of harm.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability; Human wellbeing

Industries
Mobility and autonomous vehicles; Robots, sensors, and IT hardware; Consumer products

Affected stakeholders
General public; Consumers

Harm types
Physical (injury); Physical (death); Economic/Property; Reputational

Severity
AI incident

Business function:
Other

AI system task:
Recognition/object detection; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard

Disastrous 'driverless' car test sees dummy cyclist hit five times in fifteen tries

2022-05-17
EXPRESS
AAA Crash Test Shows Driving Assistance Systems Still Underperform

2022-05-19
Motor1.com
Why's our monitor labelling this an incident or hazard?
The driver assistance systems tested (EyeSight, Highway Driving Assist, Tesla Autopilot) are AI systems that perform real-time environment perception and decision-making to assist driving. The tests showed these systems failed to avoid collisions in some cases, indicating malfunction or underperformance. Since collisions could cause injury or harm to people, this qualifies as an AI Incident due to direct harm risk from AI system malfunction or inadequacy. The article reports actual test results demonstrating these failures, not just potential risks, so it is an AI Incident rather than a hazard or complementary information.
AAA: Driver Assist Systems Don't Work Well - Kelley Blue Book

2022-05-18
Kbb.com
Why's our monitor labelling this an incident or hazard?
The driver-assist systems described are AI systems, as they perform real-time decision-making to detect objects and apply braking autonomously. The article documents tests in which these systems failed to avoid collisions with obstacles and bicyclists, demonstrating malfunction or insufficient performance. This relates directly to potential injury or harm to persons, fulfilling the criteria for an AI Incident. Although the tests were controlled, the findings imply real safety risks if these systems are used as intended or relied upon too heavily, constituting realized harm or at least direct evidence of harm potential in actual use. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Carmakers Sell 'False Bill of Goods' on Automated Driving Tech: AAA

2022-05-18
The Drive
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (advanced driver assistance systems) used in vehicles to automate driving tasks. The testing revealed that these systems failed to prevent collisions in realistic scenarios, demonstrating malfunction or insufficient performance. These failures could directly lead to injury or harm to people, fulfilling the criteria for an AI Incident. The harm is not hypothetical: crashes occurred during testing, and the article emphasizes the potential for fatal accidents in real-world use. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.
Self-driving cars hit a third of cyclists, all oncoming cars

2022-05-16
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (active driving assist, adaptive cruise control, Tesla Autopilot) used in vehicles that make real-time decisions to avoid collisions. The tests show these AI systems failed to prevent collisions with cyclists and oncoming cars, which constitutes direct harm to persons if such failures occur in real-world conditions. The harm is realized in the testing environment and indicates a significant safety risk. Therefore, this qualifies as an AI Incident due to the direct link between AI system malfunction/use and harm to people (cyclists and drivers).
AAA survey shows drivers want ADAS that works - full autonomy can wait

2022-05-16
Autonomous Vehicle International
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Level 2 ADAS) that use AI for perception and driving assistance. The testing showed these systems failed to consistently avoid collisions in edge cases, resulting in crashes with a car and a cyclist dummy. This is a direct harm linked to AI system malfunction during use; the harm materialized in testing rather than remaining merely potential. The survey and testing results highlight ongoing safety issues with AI driving assistance systems, which constitutes a direct AI Incident under the framework. The event is not merely a hazard or complementary information, because harm occurred, nor is it unrelated, as AI systems are central to the event.
Semi-Autonomous Cars Hit Cyclist In 5 Out Of 15 Test Runs, Finds AAA

2022-05-16
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ADAS) that use AI for detection and driving assistance. The failure of Subaru's EyeSight system to detect the cyclist and prevent collisions in 33% of test runs shows a malfunction of the AI system leading directly to harm (collisions). Although the tests were conducted with dummies on closed courses, the harm is real and relevant to human safety. This meets the definition of an AI Incident, as the AI system's malfunction led directly to harm (or potential harm) to persons. The article does not merely discuss potential future harm or general AI developments; it documents actual failures causing collisions in tests, which is a realized harm scenario. Therefore, the classification is AI Incident.