Waymo Robotaxis Tricked by Stop Sign T-Shirt in Arizona


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Arizona content creator Jason Carr demonstrated that Waymo's autonomous vehicles can be tricked into stopping by wearing a t-shirt with a stop sign image. The AI system misinterpreted the shirt as a real traffic signal, revealing a vulnerability that could lead to traffic disruptions or safety risks if exploited.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (autonomous vehicle) whose use (perception and decision-making) was directly affected by a simple trick, causing it to stop unnecessarily. While no actual harm (injury, property damage, or rights violation) is reported, the incident demonstrates a malfunction or failure to correctly interpret inputs, which could plausibly lead to safety risks or operational disruption if exploited or occurring in critical situations. Therefore, it qualifies as an AI Hazard because it plausibly could lead to harm, but no harm has yet occurred as per the article.[AI generated]
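The failure class described above, a vehicle acting on any confident stop-sign detection without checking whether the detection is physically plausible, can be illustrated with a hedged sketch. This is not Waymo's actual pipeline; the decision rules, thresholds, and the rough regulation-sign dimensions used in the plausibility gate are all assumptions made purely for illustration.

```python
# Illustrative sketch only (NOT Waymo's actual perception stack): why a
# stop sign printed on a t-shirt can trigger a stop if the planner acts
# on raw detections without physical-plausibility checks.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    est_height_m: float   # estimated top-of-sign height above the ground
    est_width_m: float    # estimated physical width of the sign face

def naive_should_stop(det: Detection) -> bool:
    # Acts on any confident stop-sign detection, regardless of context.
    return det.label == "stop_sign" and det.confidence > 0.8

def checked_should_stop(det: Detection) -> bool:
    # Adds simple plausibility gates. A roadside stop sign is roughly
    # 0.75 m wide and mounted about 2 m up; these figures and the gate
    # ranges are assumptions used only to illustrate the idea.
    plausible_size = 0.5 <= det.est_width_m <= 1.2
    plausible_height = 1.5 <= det.est_height_m <= 3.0
    return naive_should_stop(det) and plausible_size and plausible_height

# A sign printed on a shirt is small and low; a real sign is not.
tshirt = Detection("stop_sign", confidence=0.92, est_height_m=1.3, est_width_m=0.3)
roadside = Detection("stop_sign", confidence=0.95, est_height_m=2.1, est_width_m=0.75)

print(naive_should_stop(tshirt))     # True  -> vehicle stops for the shirt
print(checked_should_stop(tshirt))   # False -> filtered as implausible
print(checked_should_stop(roadside)) # True  -> real sign still respected
```

The sketch shows why this is classed as a hazard rather than a one-off glitch: the naive rule fails for any sufficiently sign-like input, while even a crude context check removes this particular trick without ignoring genuine signs.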
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability; Human wellbeing

Industries
Mobility and autonomous vehicles; Robots, sensors, and IT hardware; Digital security

Affected stakeholders
General public

Harm types
Physical (injury); Physical (death); Economic/Property; Reputational

Severity
AI hazard

Business function
Monitoring and quality control; ICT management and information security

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


Are autonomous vehicles really smart? Check how a simple trick can fool them

2024-05-06
https://auto.hindustantimes.com

A content creator with a stop sign on his t-shirt managed to stop Waymo's robotaxis in their tracks in Arizona

2024-05-05
Carscoops
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous vehicle) whose operation was directly disrupted by a human exploiting its perception system. The AI system misinterprets the stop sign t-shirt as a legitimate traffic control signal, causing the vehicle to stop unnecessarily: a malfunction in the AI's perception and decision-making. While no injury or property damage is reported, such misinterpretations could plausibly lead to safety hazards or traffic disruptions, which are harms related to the operation of critical transportation infrastructure. Therefore, this qualifies as an AI Hazard: the AI system's malfunction plausibly could lead to operational disruption and safety risks, but no actual harm has occurred.

How a t-shirt stopped this autonomous car in its tracks

2024-05-07
Perth Now
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving system) whose use was manipulated by a fake stop sign on a t-shirt, causing the vehicle to stop. While no injury or damage occurred, the incident reveals a plausible risk of harm due to the AI system's sensitivity to visual inputs, which could lead to traffic disruptions or accidents in other circumstances. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving harm or disruption, but no actual harm has been reported in this case.

How a t-shirt stopped this autonomous car in its tracks

2024-05-07
The West Australian
Why's our monitor labelling this an incident or hazard?
An AI system (autonomous driving system) is explicitly involved, as the vehicles' AI safety systems interpret the t-shirt's stop sign image as a real traffic sign, causing them to stop. This is a use case of the AI system's perception and decision-making. The event shows a malfunction or misinterpretation by the AI system leading to an operational disruption (vehicles stopping unexpectedly). However, there is no indication of actual harm (injury, property damage, or rights violation) occurring, only a demonstration of a potential vulnerability. Therefore, this event plausibly could lead to harm if exploited maliciously or in other contexts, but no harm has yet occurred. This fits the definition of an AI Hazard, as the AI system's malfunction could plausibly lead to an AI Incident in the future.

Man Fools Waymo Self-Driving Cars With Stop Sign T-Shirt - CarScoops - Business Telegraph

2024-05-05
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving AI) whose behavior is tested and shown to be susceptible to a low-tech trick. While the AI system's malfunction or misinterpretation could plausibly lead to harm (e.g., unexpected stops causing traffic disruption or accidents), no actual harm or incident has been reported. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident if such edge cases are exploited or cause real-world problems. It is not merely general AI news, nor is it a complementary information update about responses or governance.