Yongin City Conducts Safety Checks for Autonomous Bus Pilot Project

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Yongin City, South Korea, has begun safety inspections and test runs for its autonomous bus pilot project, in which AI-driven vehicles operate between local landmarks. City officials, including the mayor, emphasized passenger safety and system reliability. No incidents have occurred, but the project highlights potential AI-related risks during public transport trials.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems in autonomous vehicles, but no harm or incident has occurred yet. The article discusses ongoing testing and safety measures to prevent harm. Therefore, it represents a plausible future risk scenario where AI system malfunction or failure could lead to harm, but no actual harm has been reported. This fits the definition of an AI Hazard, as the autonomous driving AI system's use could plausibly lead to an AI Incident if failures occur during operation. It is not Complementary Information because it is not an update or response to a past incident, nor is it unrelated since it clearly involves AI systems in a real-world application with potential safety implications.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Mobility and autonomous vehicles

Severity
AI hazard

AI system task
Recognition/object detection
Reasoning with knowledge structures/planning


Articles about this incident or hazard

Yongin Special City Conducts Preliminary On-Site Inspection Ahead of the 'Yongin Dongbaek Autonomous Vehicle Pilot Project'

2026-03-25
이코노뉴스
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous vehicles, but no harm or incident has occurred yet. The article discusses ongoing testing and safety measures to prevent harm. Therefore, it represents a plausible future risk scenario where AI system malfunction or failure could lead to harm, but no actual harm has been reported. This fits the definition of an AI Hazard, as the autonomous driving AI system's use could plausibly lead to an AI Incident if failures occur during operation. It is not Complementary Information because it is not an update or response to a past incident, nor is it unrelated since it clearly involves AI systems in a real-world application with potential safety implications.
Yongin City Conducts Preliminary On-Site Inspection Ahead of the 'Yongin Dongbaek Autonomous Vehicle Pilot Project'

2026-03-25
아시아경제
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving AI) in active use for a pilot project. However, the article does not report any harm, malfunction, or misuse that has caused or could plausibly cause harm. The focus is on safety checks, trial runs, and ensuring passenger safety with human safety operators onboard. Since no harm or credible risk of harm is described, and the article mainly provides information about the AI system's deployment and governance, it fits the definition of Complementary Information rather than an Incident or Hazard.
Yongin Autonomous Vehicle Test Run Inspected... Mayor Lee Sang-il Boards in Person to Verify

2026-03-26
이뉴스투데이
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous vehicles during a test phase with safety measures in place and no reported harm or malfunction. There is no indication that any injury, rights violation, or other harm has occurred. However, since the autonomous vehicles are being tested and could plausibly lead to harm if malfunction or failure occurs in the future, this situation qualifies as an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information since the article is not about a response to a past incident or broader governance context. It is not unrelated because the autonomous vehicles clearly involve AI systems.
Yongin Special City Conducts Preliminary On-Site Inspection Ahead of the 'Yongin Dongbaek Autonomous Vehicle Pilot Project'

2026-03-26
매일일보
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous driving system) in a pilot project with safety measures in place, including human safety drivers and monitoring. No injury, disruption, or rights violation has occurred. The article discusses preparations and safety assessments to prevent harm. Hence, it fits the definition of an AI Hazard, as the autonomous vehicle system could plausibly lead to harm if failures occur during operation, but no incident has yet happened.
Yongin City Conducts On-Site Inspection of the Dongbaek-dong 'Autonomous Vehicle Trial Operation'

2026-03-26
아주경제
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving system) in active use during a pilot project. However, the article does not report any injury, accident, rights violation, or other harm caused by the AI system. The presence of safety drivers and controlled testing indicates risk mitigation. Since no harm has occurred but the system's use could plausibly lead to harm in the future, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.