Japan Expands Autonomous Bus Trials to Address Driver Shortage and Test AI Safety in Adverse Conditions

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Japanese government and companies are accelerating autonomous vehicle deployment, including trials of large self-driving buses at Fukuoka Airport. These tests, which use AI-driven systems equipped with rain sensors, aim to verify safety under a range of conditions and to prepare for future driver shortages. No AI-related harm has been reported during these ongoing experiments.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes a real-world test of an AI system (autonomous driving bus) with a human driver supervising. The system is actively used but only at Level 2 autonomy, meaning the driver assists and monitors. The purpose is to prepare for future driver shortages and to verify safety under various conditions. Since no harm or incident has occurred, but the system's use could plausibly lead to harm if it malfunctions or fails, this fits the definition of an AI Hazard. The article does not focus on responses to past incidents or broader governance, so it is not Complementary Information. It is not unrelated because an AI system is clearly involved and tested in a real environment with potential safety implications.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability; Privacy & data governance; Human wellbeing; Democracy & human autonomy

Industries
Mobility and autonomous vehicles; Travel, leisure, and hospitality; Government, security, and defence; Digital security

Harm types
Physical (injury); Physical (death); Economic/Property; Reputational; Public interest

Severity
AI hazard

Business function:
Monitoring and quality control; Citizen/customer service

AI system task:
Recognition/object detection; Event/anomaly detection; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard

Nishitetsu runs autonomous bus trial at Fukuoka Airport in preparation for driver shortage

2023-07-05
Nikkei (日本経済新聞)
Autonomous driving of a large bus at Fukuoka Airport, verified with employees on board

2023-07-05
Sankei News (産経ニュース)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous driving system) in a real-world setting. However, there is no indication that any harm has occurred or that the system malfunctioned leading to harm. The article describes ongoing safety verification and improvements to the system. Therefore, it does not qualify as an AI Incident. It also does not describe a plausible future harm scenario beyond the normal risks inherent in testing autonomous vehicles, which is expected and managed. Hence, it is not an AI Hazard. The article provides complementary information about the development and testing of an AI system, contributing to understanding of AI deployment and safety efforts.
Second phase of autonomous driving trials for a large bus at Fukuoka Airport: verifying safety "in case of rain"

2023-07-05
FNN Prime Online
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (autonomous driving AI controlling a bus). The event is a controlled experiment to test the system's safety in rain, with no reported harm or malfunction. Therefore, it does not qualify as an AI Incident. Since the experiment is ongoing and aims to verify safety, it could plausibly lead to harm if the system fails in the future, but currently no harm is reported. Hence, it fits the definition of an AI Hazard, as it involves plausible future risk during testing of an AI system with potential safety implications.
Government plans autonomous-vehicle infrastructure: "autonomous driving support roads" to be developed using digital technology

2023-07-05
Nikkan Jidosha Shimbun (digital edition)
Why's our monitor labelling this an incident or hazard?
The event involves the use and support of AI systems (autonomous driving technology) through infrastructure development. However, the article focuses on planning and future implementation without reporting any realized harm or incidents caused by AI systems. The content reflects a potential future scenario where AI-enabled autonomous vehicles will be more widely used, but no direct or indirect harm has occurred yet. Therefore, it qualifies as an AI Hazard, as the deployment of autonomous vehicles could plausibly lead to AI incidents in the future, but no incident has been reported at this stage.