Autonomous Vehicles Piloted in China, South Korea, and by the US Military, with No Reported Harm Yet


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

China, South Korea, and the US are advancing autonomous vehicle deployments: China anticipates fully autonomous cars by 2030, Seoul is piloting RoboRide self-driving taxis with safety drivers, and the US Army tested an autonomous military vehicle in Germany. No incidents or harm have been reported, but future risks remain plausible.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article discusses the development and future deployment of AI systems in autonomous driving but does not describe any realized harm or incident caused by these AI systems. It focuses on the potential and progress of autonomous vehicle technology, including challenges and risks, but no direct or indirect harm has occurred yet. Therefore, this is a plausible future risk scenario related to AI systems, qualifying it as an AI Hazard rather than an Incident or Complementary Information.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability; Privacy & data governance; Respect of human rights; Democracy & human autonomy; Fairness; Human wellbeing

Industries
Mobility and autonomous vehicles; Robots, sensors, and IT hardware; Government, security, and defence

Harm types
Physical (death); Physical (injury); Economic/Property; Reputational; Psychological; Public interest; Human or fundamental rights

Severity
AI hazard

Business function
Logistics; Citizen/customer service; Monitoring and quality control

AI system task
Recognition/object detection; Event/anomaly detection; Forecasting/prediction; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard


KPMG China report: self-driving cars may be on mainland roads by 2030

2022-06-11
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The article discusses the development and future deployment of AI systems in autonomous driving but does not describe any realized harm or incident caused by these AI systems. It focuses on the potential and progress of autonomous vehicle technology, including challenges and risks, but no direct or indirect harm has occurred yet. Therefore, this is a plausible future risk scenario related to AI systems, qualifying it as an AI Hazard rather than an Incident or Complementary Information.

In pictures: US delivers "Project Origin" unmanned vehicle to the German military

2022-06-10
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The vehicle is an AI system as it performs autonomous navigation, obstacle avoidance, and automatic threat detection and reporting. Its use in military operations with potential armaments implies a risk of harm to persons or property if malfunction or misuse occurs. Although no harm is reported yet, the autonomous military vehicle's deployment and capabilities plausibly could lead to injury, property damage, or other harms, qualifying this as an AI Hazard rather than an Incident since no actual harm is described.

US highway safety agency escalates investigation; 830,000 Teslas may be recalled

2022-06-10
Liberty Times Net (自由時報電子報)
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot is an AI system designed to assist driving by navigating and avoiding collisions. The investigation is based on real accidents causing injury and death where the Autopilot system was active and the driver was following system instructions. This indicates the AI system's malfunction or failure contributed directly or indirectly to harm. The 'phantom braking' issue, while not yet causing accidents, is part of the investigation into system safety. The scale of the investigation and potential recall further supports classification as an AI Incident rather than a hazard or complementary information.

Self-driving taxis enter Seoul's bustling Gangnam district; official service to begin in August

2022-06-09
udn Money (聯合理財網)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous driving AI) in a real-world urban environment, which could plausibly lead to harm such as accidents or injuries if the system malfunctions or fails. However, the article does not report any actual harm, injury, or incident caused by the AI system. It describes a planned service launch with safety precautions and trial runs. Therefore, this qualifies as an AI Hazard because the autonomous driving AI could plausibly lead to harm in the future, but no incident has occurred yet.

Are Cruise's self-driving taxis ready for their golden age?

2022-06-10
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Cruise's autonomous driving technology. It discusses the use of this AI system in commercial passenger transport and highlights unresolved safety and regulatory issues that could plausibly lead to harm, such as traffic congestion, unsafe vehicle behavior, and emergency vehicle obstruction. No actual harm or injury is reported, so it does not meet the criteria for an AI Incident. However, the credible concerns about safety and regulatory gaps mean the AI system's deployment could plausibly lead to harm, fitting the definition of an AI Hazard. The article's main focus is on these potential risks and regulatory challenges, not on responses or updates to past incidents, so it is not Complementary Information.

Self-driving taxis enter Seoul's bustling Gangnam district; official service to begin in August

2022-06-09
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous driving technology) in a real-world setting. However, the article does not report any harm or incident caused by the AI system. Instead, it describes the planned deployment and trial of the system with safety measures in place. There is no indication of injury, disruption, rights violations, or other harms. The event is a deployment of AI technology with potential future risks but no realized harm yet. Therefore, it qualifies as an AI Hazard because the autonomous taxi service could plausibly lead to harm in the future due to the complexity of urban driving, but no harm has occurred so far.

Hyundai to pilot RoboRide self-driving ride-hailing service in Seoul's Gangnam district

2022-06-09
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Level 4 autonomous driving technology) being used in a real-world pilot. However, the article does not report any injury, disruption, rights violation, or other harm caused by the AI system. The service is in a trial phase with safety measures such as safety drivers and remote assistance. While autonomous vehicles inherently carry some risk, the article does not describe any incident or credible near-miss that would qualify as an AI Incident or AI Hazard. The content is primarily an announcement of a pilot deployment and plans for further development, which fits the category of Complementary Information as it provides context and updates on AI system deployment and governance efforts.