China approves Level 3 autonomous vehicle tests on public roads


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

China’s Ministry of Industry and Information Technology granted nine automakers approval to begin public-road tests of Level 3 autonomous driving systems, which allow drivers to take their hands off the wheel. The approved companies, Nio, BYD, Changan Auto, GAC, SAIC, BAIC BluePark, FAW, SAIC Hongyan, and Yutong Bus, will run the trials as part of a push to accelerate semi-autonomous vehicle deployment.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems in autonomous vehicles being tested on public roads. While no harm has been reported, the deployment of Level 3 autonomous driving technology on public roads carries plausible risks of harm to people or property if the AI systems malfunction or fail. Because the event presents a credible potential for harm without any realized harm, it qualifies as an AI Hazard rather than an AI Incident.[AI generated]
Industries
Mobility and autonomous vehicles

Severity
AI hazard

AI system task
Recognition/object detection; Event/anomaly detection; Reasoning with knowledge structures/planning; Goal-driven organisation


Articles about this incident or hazard


China Gives First Approvals for Public Trials of Advanced Autonomous Driving

2024-06-04
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous vehicles being tested on public roads. While no harm has been reported, the deployment of Level 3 autonomous driving technology on public roads carries plausible risks of harm to people or property if the AI systems malfunction or fail. Because the event presents a credible potential for harm without any realized harm, it qualifies as an AI Hazard rather than an AI Incident.

China gives first approvals for public trials of advanced autonomous driving

2024-06-04
Reuters
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of level three autonomous driving technologies being tested on public roads. Although no harm has yet occurred, the use of such AI systems in real-world driving scenarios could plausibly lead to accidents or other harms, making this an AI Hazard. There is no indication of realized harm or incidents, so it is not an AI Incident. The article is not merely complementary information about AI governance or responses, nor is it unrelated to AI systems.

China Paves the Way for Advanced Autonomous Driving

2024-06-06
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving AI at advanced levels (L3 and L4). However, it does not describe any actual harm, malfunction, or misuse of these AI systems. Instead, it discusses the approval process and the potential for future deployment and testing. Since no harm has occurred yet, but the event involves the development and use of AI systems that could plausibly lead to harm in the future (e.g., accidents or safety issues during testing), this qualifies as an AI Hazard. The article does not report any incident or harm, so it is not an AI Incident. It is not merely complementary information because the focus is on the regulatory approval enabling testing, which implies plausible future risks inherent in deploying autonomous vehicles on public roads.

China Gives First Approvals for Public Trials of Advanced Autonomous Driving

2024-06-04
News18
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely level three autonomous driving technologies that make real-time decisions affecting vehicle control. The use of these AI systems on public roads inherently carries risks of injury, property damage, or disruption if the AI malfunctions or fails to respond appropriately. Since the article describes the start of public trials without any reported accidents or harms, it does not meet the criteria for an AI Incident. However, the plausible future harm from these tests justifies classification as an AI Hazard. The article does not focus on responses to past incidents or broader governance developments, so it is not Complementary Information. It is clearly related to AI systems, so it is not Unrelated.

Seeking to drive new economic growth, China accelerates autonomous vehicle trial

2024-06-05
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Level 3 autonomous driving) being tested, which fit the definition of AI systems. The article discusses the development and use of these systems in a controlled pilot program. There is no mention of any injury, disruption, rights violation, or other harm caused by them so far. The article highlights the potential for future economic growth and the establishment of standards and regulations, implying a future risk but no current incident. Hence, this qualifies as an AI Hazard: the AI systems could plausibly lead to incidents in the future, but no harm has yet materialized.

China Paves the Way for Advanced Autonomous Driving

2024-06-06
NASDAQ Stock Market
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving AI at Level 3 and Level 4 autonomy, which make informed driving decisions with limited human intervention. The event is about regulatory approval for testing these AI systems on public roads, which could plausibly lead to AI incidents such as accidents or safety issues in the future. However, no actual harm, injury, or violation has been reported yet. Thus, this qualifies as an AI Hazard because it plausibly could lead to harm but no incident has occurred at this stage. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it directly concerns AI systems and their deployment.

The Zacks Analyst Blog Highlights Tesla, BYD, NIO and XPeng

2024-06-07
NASDAQ Stock Market
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving AI at advanced levels (L3 and L4). However, it does not report any actual harm, malfunction, or misuse of these AI systems. Instead, it discusses the regulatory approval and testing plans, which could lead to future AI incidents but currently represent a development stage without realized harm. Therefore, this event is best classified as an AI Hazard, as the deployment and testing of autonomous vehicles could plausibly lead to incidents in the future, but no incident has yet occurred.

China gives first approvals for public trials of advanced autonomous driving

2024-06-04
ThePrint
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous vehicles (level three autonomy and above) being tested publicly. While no harm has been reported yet, the deployment of such systems in public trials could plausibly lead to incidents involving injury or harm if the AI malfunctions or fails to ensure safety. Therefore, this is an AI Hazard as it describes a credible risk of future harm from AI system use in autonomous driving trials.

China gives first approvals for public trials of advanced autonomous driving

2024-06-04
GULF NEWS
Why's our monitor labelling this an incident or hazard?
The event describes the use of advanced autonomous driving AI systems (level three autonomy) in public road trials, which involve AI systems making driving decisions. While no harm is reported yet, the deployment of such systems on public roads plausibly could lead to incidents involving injury or harm to people if the AI malfunctions or fails. Therefore, this is an AI Hazard, as the trials could plausibly lead to AI Incidents in the future, but no actual harm or incident is described at this stage.

No less than 10 automakers already offer Level 2 driving assistance systems in China

2024-06-06
Carscoops
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of Level 2 and Level 3 autonomous driving technologies and their testing and deployment in China. However, there is no mention of any accidents, malfunctions, or harms caused by these AI systems. The content primarily concerns regulatory approvals and the potential for future use, which could plausibly lead to harm but does not report any actual harm or incidents. Therefore, this event fits the definition of an AI Hazard, as the deployment and testing of autonomous driving systems could plausibly lead to incidents in the future, but no incident has yet occurred.

China gives first approvals for public trials of advanced autonomous driving

2024-06-04
Times LIVE
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous vehicles at level three, which implies significant AI decision-making capabilities. While the trials are approved and planned, no harm or incident has been reported yet. However, the deployment of such systems on public roads could plausibly lead to AI incidents such as accidents or safety issues if the AI malfunctions or misjudges situations. Therefore, this event represents a plausible future risk related to AI use in autonomous driving, qualifying it as an AI Hazard rather than an incident or unrelated news.

Chinese car brands start trials for autonomous driving

2024-06-04
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The article discusses the start of trials for level three autonomous vehicles, which involve AI systems capable of autonomous driving. However, it does not describe any realized harm, injury, violation of rights, or disruption caused by these AI systems. The mention of investigations into other companies' autonomous vehicles relates to potential safety concerns but does not report confirmed incidents or harms. Therefore, the event is best classified as Complementary Information, as it provides context and updates on the development and regulatory environment of AI-driven autonomous vehicles without reporting an AI Incident or AI Hazard.

China gives nod to public trials of advanced autonomous driving

2024-06-04
BusinessLIVE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Level 3 autonomous driving) being tested on public roads, which fit the definition of AI systems. However, the article does not report any injury, harm, or violation caused by these systems yet. The trials could plausibly lead to future AI incidents if malfunctions or misuse occur, but currently this is a planned and approved testing phase without reported harm. Therefore, this qualifies as an AI Hazard: it could plausibly lead to harm in the future, but no harm has yet occurred.

China gives first approvals for public trials of advanced autonomous driving

2024-06-04
Colorado Springs Gazette
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (level three autonomous driving) on public roads, which directly relates to the development and use of AI systems. Although no harm has been reported yet, the deployment of such systems on public roads could plausibly lead to incidents involving injury, disruption, or other harms due to the AI system's decisions or malfunctions. Therefore, this event represents an AI Hazard, as it describes a credible risk of future harm from the use of advanced autonomous driving AI systems in public trials.

Autonomous driving tests get the green light

2024-06-04
The Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (level 3 autonomous driving) being approved for testing, indicating AI system involvement. However, there is no mention of any harm, malfunction, or misuse resulting from these tests. The event is about regulatory approval and planned testing, which could lead to future risks but does not currently present a plausible immediate hazard or incident. Thus, it fits the definition of Complementary Information as it updates on AI system deployment and governance without describing realized or imminent harm.

China approves testing of level 3 autonomous vehicles on public roads

2024-06-06
thesun.my
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Level 3 autonomous driving systems, which are AI systems capable of controlling vehicles under certain conditions. The event concerns the approval for testing these systems on public roads, which is a development/use phase of AI systems. No actual harm or incidents are reported yet, but the nature of autonomous vehicle testing inherently carries plausible risks of injury or harm to people or property if the AI systems malfunction or fail. Hence, this qualifies as an AI Hazard rather than an AI Incident. The article also discusses regulatory oversight and data security compliance, but these do not indicate realized harm or legal violations at this stage, so it is not Complementary Information. It is not unrelated because the event directly involves AI systems and their deployment with potential safety implications.