Pedestrians Injured in Autonomous Bus Accident in Yahiko, Japan


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An autonomous bus in Yahiko, Japan, struck two pedestrians after the operator switched from AI to manual driving when the AI detected people ahead. The incident, attributed to possible human error during manual operation, resulted in injuries and led to the suspension of the bus service pending investigation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes an accident involving an autonomous bus (an AI system) that was under manual operation at the time. Two pedestrians were injured, which constitutes harm to persons. Although the accident was caused by human operator error rather than an AI malfunction, the harm occurred during the bus's operation, and the AI system's presence and the handover to manual control are central to the event. It therefore qualifies as an AI Incident under the definition: the AI system's use indirectly led to harm through human error during manual operation.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (injury)

Severity
AI incident

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard


Accident during manual operation of autonomous bus in Yahiko: possible human error, two people struck

2026-04-13
Yomiuri Shimbun Online

Emergency call reports people "pinned under" autonomous bus; man and woman taken to hospital in Yahiko, Niigata Prefecture: Asahi Shimbun

2026-04-12
Asahi Shimbun Digital
Why's our monitor labelling this an incident or hazard?
The incident involves a self-driving bus operated by AI that caused harm to two individuals, resulting in their emergency hospitalization. The AI system's malfunction or failure in operation directly caused injury, fulfilling the criteria for an AI Incident under harm to persons.

"Possible human error": village holds press conference on autonomous bus accident in Niigata: Asahi Shimbun

2026-04-13
Asahi Shimbun Digital
Why's our monitor labelling this an incident or hazard?
The autonomous bus is an AI system as it performs automated navigation and pedestrian detection. The accident caused injury to two people, fulfilling the harm criterion. The incident involved the AI system's use and a human operator's manual intervention, which led to the harm. This direct link to injury caused by the AI system's operation and human error classifies the event as an AI Incident rather than a hazard or complementary information.

Same model as the accident vehicle also being tested in Sapporo and Hyogo; Niigata autonomous bus: Asahi Shimbun

2026-04-13
Asahi Shimbun Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system (Level 2 autonomous driving), but it was switched off at the time of the accident, and operator error is the suspected cause. Since the AI system was not active, it did not contribute to the harm. The article provides information about the AI system's use in other locations without incidents, which is supportive context rather than a new incident or hazard. Hence, the event is best classified as Complementary Information.

Autonomous bus strikes two pedestrians, possibly during manual operation: Yahiko, Niigata

2026-04-13
Mainichi Shimbun
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system, specifically an autonomous driving bus operating at Level 2 automation, which includes AI-assisted driving functions. The harm (injury to pedestrians) directly resulted from the use of this AI system, even though the driver was manually controlling the vehicle at the time. The AI system's involvement in the operation and prior incidents indicates a direct link to harm. Therefore, this qualifies as an AI Incident due to injury to persons caused during the use of an AI system.

Autonomous bus accident: village mayor apologizes and vows to prevent recurrence, Yahiko, Niigata

2026-04-13
Mainichi Shimbun
Why's our monitor labelling this an incident or hazard?
The bus involved is an autonomous vehicle equipped with an AI system for self-driving. The accident occurred during a transition from autonomous to manual control, with the AI system initially detecting pedestrians and stopping automatically, but the subsequent manual operation led to the collision. The harm (injury to pedestrians) has occurred and is directly linked to the AI system's operation and its interaction with manual control. Hence, this is an AI Incident as the AI system's use and malfunction contributed to physical harm.

[Yahiko autonomous bus accident] Yahiko village mayor apologizes at press conference, explains that the bus switched to manual operation 55 m before the accident site | Niigata Nippo

2026-04-13
Niigata Nippo Moa
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, namely an autonomous driving system on a bus. The accident resulted in injury to pedestrians, which is a direct harm to persons. The autonomous system was active and then switched to manual operation, but the collision still occurred, indicating a failure or malfunction in the AI system's operation or its interaction with the human operator. Therefore, this qualifies as an AI Incident due to direct harm caused by the use and malfunction of an AI system.