
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Chiba City conducted Japan's first municipal digital twin simulation to test AI-driven autonomous vehicles in hazardous scenarios, such as the sudden appearance of pedestrians or cyclists. While no harm occurred, the tests revealed that the AI system may fail to stop in time at higher speeds, highlighting potential safety risks that require further validation.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved: the autonomous driving system uses AI for navigation and decision-making. The event concerns the use of AI in simulation and testing to prevent harm; no actual harm or incident has occurred, and the article focuses on safety verification and risk mitigation through simulation. This represents a plausible future risk rather than a realized harm, and it is not merely general AI news but a specific use case aimed at preventing harm. It therefore qualifies as an AI Hazard: the AI system's deployment could plausibly lead to harm if not properly validated, while the activity described is a safety validation effort intended to prevent such harm.[AI generated]