California Moves to Lift Ban on Self-Driving Truck Testing


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

California regulators are proposing revised rules that would allow the testing and deployment of self-driving trucks on public highways, ending the state's ban on autonomous vehicles over 10,000 pounds. The proposed regulations address safety, enforcement, and operational requirements; they raise potential future risks but report no current AI-related incidents.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems in the form of autonomous trucks, which are currently banned but may soon be allowed under revised regulations. The discussion centers on the potential for these AI systems to operate on public roads, with safety and operational requirements outlined. No actual harm or incidents are reported; rather, the article highlights the regulatory environment and the possibility of future deployment. Given the potential for AI system malfunction or misuse in autonomous trucks to cause harm, the event plausibly leads to an AI Incident in the future, qualifying it as an AI Hazard. It is not Complementary Information because it is not an update on a past incident, nor is it unrelated as it directly concerns AI systems and their regulation.[AI generated]
AI principles
Safety · Robustness & digital security · Accountability · Transparency & explainability

Industries
Mobility and autonomous vehicles · Logistics, wholesale, and retail

Severity
AI hazard

Business function:
Logistics

AI system task:
Recognition/object detection · Reasoning with knowledge structures/planning · Goal-driven organisation


Articles about this incident or hazard


California's ban on self-driving trucks could soon be over | TechCrunch

2025-12-04
TechCrunch

Driverless trucks could soon be headed to California highways

2025-12-03
NBC Bay Area
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (driverless trucks) whose use is under regulatory consideration. While the deployment of autonomous trucks could plausibly lead to AI incidents (e.g., accidents or traffic violations), the article describes only proposed regulations and the testing program, with no current harm or incident reported. Therefore, this is best classified as an AI Hazard, reflecting the plausible future risk associated with introducing AI-driven trucks on public roads.

DMV Now Hammering Out Rules for Self-Driving Trucks to Come to California

2025-12-04
SFist - San Francisco News, Restaurants, Events, & Sports
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (self-driving trucks) and their development and use, but no harm has yet occurred. The article describes the creation of rules to enable safe testing and oversight, aiming to prevent future incidents. Therefore, this is a plausible future risk scenario where AI systems could lead to harm if unregulated, but currently no incident or harm has materialized. This fits the definition of an AI Hazard, as the regulations address potential risks of autonomous trucks operating on public roads.

California DMV Proposes Lifting Ban on Self-Driving Truck Tests

2025-12-04
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (self-driving trucks) whose development and use are central to the narrative. While the article discusses concerns about safety risks, labor displacement, and enforcement challenges, no actual harm or incident caused by these AI systems is reported. The DMV's proposal is a regulatory step toward allowing testing, implying potential future risks but no current realized harm. Hence, the event fits the definition of an AI Hazard, as it plausibly could lead to incidents involving injury, disruption, or rights violations once testing and deployment occur.

California's ban on self-driving trucks could soon be over - RocketNews

2025-12-04
RocketNews | Top News Stories From Around the Globe
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (self-driving trucks) and their use, but no direct or indirect harm has occurred yet. The article centers on regulatory changes enabling future use, which could plausibly lead to AI incidents, but currently represents a potential risk rather than an actual incident. Therefore, it qualifies as an AI Hazard because the development and deployment of autonomous trucks on public highways could plausibly lead to harms such as accidents or disruptions, but no such harms are reported at this time.

California DMV Clears Path for Driverless Truck Trials on Highways

2025-12-05
Bangla news
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous trucks, which are explicitly mentioned. The proposed regulations aim to enable their testing and deployment, which could plausibly lead to incidents involving injury or harm to people or disruption of critical infrastructure if safety issues arise. Since no harm has yet occurred, but the potential for harm is credible and recognized by stakeholders, this qualifies as an AI Hazard rather than an AI Incident. The political and safety concerns further support the plausibility of future harm.