Osaka Expo Autonomous Bus Crash Caused by AI System Communication Error


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An autonomous bus at the Osaka-Kansai Expo collided with a concrete wall due to a communication setting error in its AI driving system. The system transmitted data at 500 kbps, a rate the vehicle could not receive; the resulting flood of error data prevented the parking brake from engaging. No one was injured, but the collision caused property damage and prompted a temporary suspension of operations.[AI generated]
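The failure mode described above is a configuration mismatch: the sender's data rate was never checked against what the receiver supports, so brake commands were lost in error traffic. A minimal sketch of the kind of pre-operation check that would catch this is below; all names and rate values are illustrative assumptions, not details of the actual bus's software.

```python
# Hypothetical pre-operation check that the configured transmit rate matches
# what the receiving vehicle controller accepts. Rates and names are
# illustrative only, not taken from the actual system.

TRANSMIT_RATE_KBPS = 500           # rate the control side was set to send at
SUPPORTED_RATES_KBPS = {125, 250}  # rates the (hypothetical) vehicle accepts

def validate_link_config(tx_rate: int, supported: set) -> None:
    """Refuse to start operation if the data rate is not mutually supported."""
    if tx_rate not in supported:
        raise ValueError(
            f"Transmit rate {tx_rate} kbps is not supported by the receiver "
            f"(supported: {sorted(supported)} kbps); control messages such as "
            f"brake commands would be lost."
        )

validate_link_config(250, SUPPORTED_RATES_KBPS)   # passes silently
# validate_link_config(TRANSMIT_RATE_KBPS, SUPPORTED_RATES_KBPS)  # would raise
```

Run once at startup, before the vehicle is allowed to move, this kind of guard turns a silent communication mismatch into a hard pre-operation failure, which is what the reported post-incident testing regime appears intended to ensure.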

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as an autonomous driving system controlling a bus. The malfunction of the AI system directly led to physical harm to property (collision with a concrete wall) and posed a potential risk to human safety, although no injuries occurred. The cause was a system setting error and insufficient pre-operation testing, both related to the AI system's development and use. Therefore, this qualifies as an AI Incident due to the realized harm and direct involvement of an AI system malfunction.[AI generated]
AI principles
Robustness & digital security; Safety

Industries
Mobility and autonomous vehicles

Affected stakeholders
Business

Harm types
Economic/Property

Severity
AI incident

Business function
Citizen/customer service

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


Osaka Expo autonomous bus accident "caused by system configuration flaw": Osaka Metro

2025-06-11
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an autonomous driving system controlling a bus. The malfunction of the AI system directly led to physical harm to property (collision with a concrete wall) and posed a potential risk to human safety, although no injuries occurred. The cause was a system setting error and insufficient pre-operation testing, both related to the AI system's development and use. Therefore, this qualifies as an AI Incident due to the realized harm and direct involvement of an AI system malfunction.

Accident involving autonomous shuttle bus to the Expo venue caused by configuration error | NHK

2025-06-11
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an autonomous driving system controlling a shuttle bus. The accident was caused by a malfunction (system error due to incorrect communication speed settings) in the AI system, which directly led to the bus moving unintentionally and colliding with a wall. Although no injuries occurred, the event caused harm to property (the bus and wall) and posed a risk to safety. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to harm (property damage and potential risk to people).

Accident cause was a system configuration error; Expo autonomous bus to resume after testing: Asahi Shimbun

2025-06-11
朝日新聞デジタル
Why's our monitor labelling this an incident or hazard?
The autonomous bus uses an AI system for self-driving. The accident was caused by a malfunction related to the AI system's communication settings, which led to failure of the parking brake and collision with a wall. This constitutes an AI Incident because the AI system's malfunction directly led to harm to property (damage to the bus and wall) and a safety incident. Even though no injuries occurred, the harm to property and the malfunction qualify this as an AI Incident under the framework.

Expo autonomous bus accident caused by communication speed misconfiguration; Osaka Metro to begin test runs from the 12th toward resuming service

2025-06-11
産経ニュース
Why's our monitor labelling this an incident or hazard?
The bus is described as an autonomous vehicle using an automatic driving system, which qualifies as an AI system. The accident was caused by a malfunction (communication speed setting mismatch) in the AI system leading to failure of the parking brake and collision with a concrete wall. Although no injuries occurred, there was harm to property (the bus and the wall) and a safety incident. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm (property damage) and risk to safety. The event is not merely a potential hazard since the harm occurred, nor is it complementary information or unrelated.

Expo autonomous bus accident: "cause was data transmitted at a speed (500 kbps) the vehicle could not receive"; flood of error data prevented required brake information from getting through: Osaka Metro | MBS News

2025-06-11
mbs.jp
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an autonomous driving system controlling a bus. The malfunction in the AI system's communication caused the bus to move and collide with a wall, which is a direct harm to property and a safety incident. Although no people were injured, the AI system's malfunction directly led to the accident. Therefore, this qualifies as an AI Incident under the definition of harm to property and potential harm to people due to AI system malfunction.

[Breaking] Expo "autonomous bus" accident caused by "mass transmission of error data in the system"; public-road test driving to resume from the 12th | YTV NEWS NNN

2025-06-11
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an autonomous driving system, which malfunctioned due to system errors causing failure in critical vehicle control communication. This malfunction directly led to a collision, constituting harm to property. Although no physical injuries occurred, the AI system's malfunction caused a tangible incident. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to harm (property damage).

Advanced Smart Mobility confirms accident limited to a specific vehicle among Expo autonomous buses

2025-06-13
日刊自動車新聞 電子版
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an autonomous driving system (an AI system) installed on buses used at the Osaka-Kansai Expo. An accident occurred involving one of these AI-equipped buses due to a malfunction (parking brake failure). This malfunction directly led to an incident involving physical harm or risk. The company's confirmation that the issue is limited to a specific vehicle model and does not affect others is additional information but does not negate the fact that an AI system malfunction caused an incident. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to harm or risk of harm.