Chiba City Uses Digital Twin to Test Autonomous Vehicle Safety, Revealing Potential AI Hazards


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Chiba City conducted Japan's first municipal digital twin simulation to test AI-driven autonomous vehicles in hazardous scenarios, such as sudden pedestrian or cyclist appearances. While no harm occurred, the tests revealed that AI systems may fail to stop in time at higher speeds, highlighting potential safety risks requiring further validation.[AI generated]
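The finding that the system "may fail to stop in time at higher speeds" follows from standard stopping-distance physics: total stopping distance grows quadratically with speed, so a gap that is safe at low speed becomes unsafe at a higher one. The Python sketch below illustrates the kind of check a digital twin scenario could run; the latency, deceleration, and gap values are assumptions for illustration, since the source articles do not publish the simulation's parameters.

```python
# Illustrative stopping-distance check for a sudden-pedestrian scenario.
# All parameter values are assumptions; the Chiba City simulation's
# actual figures are not published in the source articles.

def can_stop_in_time(speed_kmh: float,
                     gap_m: float,
                     reaction_s: float = 0.5,    # assumed perception + actuation latency
                     decel_mps2: float = 6.0) -> bool:
    """Return True if the vehicle halts before reaching the pedestrian.

    Stopping distance = reaction distance (v * t) plus braking distance
    (v^2 / (2 * a)), the standard constant-deceleration model.
    """
    v = speed_kmh / 3.6                          # km/h -> m/s
    stopping_m = v * reaction_s + v ** 2 / (2 * decel_mps2)
    return stopping_m <= gap_m

# A pedestrian steps out 20 m ahead: the same scenario passes at lower
# speeds and fails at a higher one, mirroring the reported result.
for speed in (20, 30, 40, 50):
    print(f"{speed} km/h -> stops in time: {can_stop_in_time(speed, gap_m=20.0)}")
```

Under these assumed values the vehicle stops within the 20 m gap at 20-40 km/h but not at 50 km/h, the qualitative pattern the simulation reportedly exposed.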

Why's our monitor labelling this an incident or hazard?

An AI system is clearly involved: the autonomous driving system uses AI for navigation and decision-making. The event concerns the use of AI systems in simulation and testing to prevent harm. No actual harm has occurred; the article focuses on safety verification and risk mitigation through simulation. The event therefore represents a plausible future risk rather than a realized incident, and it is not merely general AI news but a specific use case aimed at preventing harm. It qualifies as an AI hazard because the system's use could plausibly lead to harm if not properly validated, while the current activity is a safety validation effort that prevents such harm.[AI generated]
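The reasoning above amounts to a short triage procedure: check whether an AI system is involved, whether harm has been realized, and whether harm is plausible absent proper validation. A minimal sketch of that logic, assuming a three-way labelling scheme matching the labels used in this entry (the AIM's actual classifier is not described here):

```python
from enum import Enum

class Label(Enum):
    AI_INCIDENT = "AI incident"                   # harm has occurred
    AI_HAZARD = "AI hazard"                       # harm plausible, not realized
    COMPLEMENTARY = "Complementary information"   # relevant news, no harm pathway

def triage(ai_involved: bool, harm_realized: bool, harm_plausible: bool) -> Label:
    """Hypothetical reconstruction of the monitor's triage logic."""
    if not ai_involved:
        return Label.COMPLEMENTARY
    if harm_realized:
        return Label.AI_INCIDENT
    return Label.AI_HAZARD if harm_plausible else Label.COMPLEMENTARY

# The Chiba City tests: AI involved, no realized harm, plausible harm if
# validation were skipped -> "AI hazard", matching the label above.
print(triage(ai_involved=True, harm_realized=False, harm_plausible=True).value)
```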
AI principles
Safety; Robustness & digital security

Industries
Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (injury)

Severity
AI hazard

Business function
Research and development

AI system task
Recognition/object detection; Forecasting/prediction; Goal-driven organisation


Articles about this incident or hazard


Chiba City uses "digital twin" to verify autonomous driving, a first in Japan

2025-06-06
Mainichi Shimbun

"Digital twin" for verifying the safety of autonomous driving services recreates the real world in virtual space (Chiba City)

2025-06-04
Sankei News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous driving AI) and their safety testing via digital twin simulations. However, the article reports on a safety verification exercise with no realized harm or incident. The focus is on assessing and improving safety to prevent future harm, which aligns with the definition of an AI hazard (plausible future harm) rather than an AI incident. It is not merely general AI news or a product launch, as it concerns safety validation with the potential to prevent harm. Therefore, this qualifies as an AI hazard.

Government-affiliated research institute unveils Level 3 autonomous bus / Taiwan - Focus Taiwan

2025-06-04
Focus Taiwan - Central News Agency, Japanese edition
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Level 3 autonomous driving with AI for safety and monitoring). However, there is no indication that the AI system has caused or contributed to any harm or incident. The event is about the development and demonstration of the system and future testing plans, which is informative but does not describe an incident or hazard. Therefore, it is best classified as Complementary Information, as it provides context and updates on AI system development and testing without reporting harm or plausible imminent harm.