
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Yongin City, South Korea, began safety inspections and test runs for its autonomous bus pilot project, involving AI-driven vehicles operating between local landmarks. City officials, including the mayor, emphasized passenger safety and system reliability. No incidents have occurred, but the project highlights potential AI-related risks during public transport trials.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in autonomous vehicles, but no harm has occurred. The article describes ongoing testing and safety measures intended to prevent harm, so it represents a plausible future risk scenario in which a malfunction or failure of the AI system could lead to harm, though none has been reported. This fits the definition of an AI Hazard: the use of the autonomous driving AI system could plausibly lead to an AI Incident if failures occur during operation. It is not Complementary Information, because it is not an update on or a response to a past incident; nor is it unrelated, since it clearly involves an AI system deployed in a real-world application with potential safety implications.[AI generated]