Aurora Advances Autonomous Trucking Amid Safety Concerns


Aurora Innovation is expanding its self-driving truck operations in Texas. After its manufacturing partner raised concerns about prototype parts, the company now sometimes places a human observer in the front seat while continuing to run fully autonomous hauls, highlighting potential AI safety hazards on public highways.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (a Level 4 autonomous driving system) in commercial trucking. The article highlights the potential for large-scale job losses among truck drivers, which would constitute harm to communities and to individuals' livelihoods. Although the harm is not yet realized, the deployment and operation of these AI systems could plausibly lead to such harm in the near future. Therefore, this qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and not yet materialized.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability

Industries
Mobility and autonomous vehicles; Logistics, wholesale, and retail

Affected stakeholders
Workers; General public

Harm types
Physical (injury); Physical (death); Economic/Property; Reputational

Severity
AI hazard

Business function
Logistics; Monitoring and quality control; Research and development

AI system task
Recognition/object detection; Reasoning with knowledge structures/planning; Goal-driven organisation


Articles about this incident or hazard


Driverless Semi Trucks Are Coming To Take Jobs From Meat In The Seat Drivers - Jalopnik

2025-05-30
Jalopnik
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a Level 4 autonomous driving system) in commercial trucking. The article highlights the potential for large-scale job losses among truck drivers, which would constitute harm to communities and to individuals' livelihoods. Although the harm is not yet realized, the deployment and operation of these AI systems could plausibly lead to such harm in the near future. Therefore, this qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and not yet materialized.

Breakthrough in Autonomous Trucks Doesn't Halt All Safety Concerns About Path Forward

2025-05-28
eWEEK
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Level 4 autonomous driving systems) actively used in real-world conditions. Although no actual harm or incident has been reported, the article emphasizes credible safety concerns and the potential for accidents or other harms due to limitations in the AI's ability to handle complex or unforeseen scenarios. The lack of federal oversight further increases the risk. Therefore, this situation fits the definition of an AI Hazard, as the development and use of these autonomous trucks could plausibly lead to an AI Incident involving injury or harm to people or disruption of critical infrastructure.

Yes, that 18-wheeler on a Texas highway is driving itself

2025-05-27
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous driving technology controlling 18-wheelers. Although no actual incident causing harm has been reported, the article extensively discusses the plausible risks and potential for serious accidents due to AI system limitations, sensor failures, and unpredictable road conditions. The concerns from experts, truckers, and safety advocates about inadequate regulation and the technology's ability to handle complex scenarios support the classification as an AI Hazard. There is no indication that harm has already occurred directly or indirectly from the AI system, so it is not an AI Incident. The article is not merely complementary information because the main focus is on the deployment and associated risks, not on responses or updates to past incidents. Hence, AI Hazard is the appropriate classification.

Trucking Company Deploys Self-Driving 18-Wheeler Where Human Employee Can Chill Out and Watch YouTube Videos in the Back as It Bombs Down the Highway

2025-05-27
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Level 4 autonomous driving technology) used in real-world trucking operations. The AI system's use has directly led to a situation where human supervisors were initially removed and then reinstated due to safety concerns, indicating a malfunction or limitation in the AI's performance. Although no actual harm (accidents or injuries) is reported, the described risks and expert warnings about unpredictable behavior and safety hazards on highways constitute a plausible risk of harm to people (harm to health) and disruption to critical infrastructure (road safety). Therefore, this qualifies as an AI Hazard because the AI system's deployment could plausibly lead to an AI Incident involving injury or harm, but no realized harm is documented yet. The presence of credible safety concerns and operational changes due to AI limitations supports this classification over Complementary Information or Unrelated.

Aurora Puts Human Back Behind Wheel of Its Autonomous Trucks

2025-05-27
supplychain247.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous driving system in trucks) in active use. However, there is no indication of any injury, harm, or violation resulting from the AI system's operation. The human observer is added as a safety precaution due to hardware concerns, not because of a malfunction or harm caused by the AI. Therefore, this is not an AI Incident. It also does not describe a plausible future harm beyond normal operational caution, so it is not an AI Hazard. The article mainly provides an update on the operational status and safety measures of an AI system, which fits the definition of Complementary Information.

Driverless trucks now hauling cargo through Texas in first for US

2025-05-27
dpa International
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Level 4 autonomous trucks) actively used in freight hauling, which fits the definition of an AI system. The article does not describe any realized harm or incidents caused by the AI system, so it is not an AI Incident. However, the deployment of autonomous trucks without safety drivers on public roads plausibly could lead to harm such as accidents or disruptions, making it an AI Hazard. The article focuses on the start of this service and its capabilities, with no indication of harm or legal issues yet. Hence, the classification as AI Hazard is appropriate.

Aurora's self-driving 18-wheelers have already logged 1,200 miles on...

2025-05-28
New York Post
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous driving technology) in a critical infrastructure domain (freight transport on public highways). Although no injury, accident, or harm has been reported, the concerns about safety, regulatory gaps, and the early stage of deployment suggest a credible risk that the AI system could plausibly lead to harm in the future. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event and its potential impacts.

The Long Haul To Autonomy Is Getting Started As Leaders Show The Way

2025-05-30
Forbes
Why's our monitor labelling this an incident or hazard?
The article focuses on the development, testing, and planned deployment of autonomous trucks using AI systems for long-haul freight. While it discusses the technology's progress and some challenges, it does not describe any event where the AI system's use or malfunction has directly or indirectly caused harm or legal violations. The mention of a human observer being installed in Aurora trucks after initial driverless operation is noted as a minor operational adjustment rather than an incident causing harm. The article is primarily informative about the AI ecosystem's evolution in autonomous trucking, without reporting any realized or imminent harm. Thus, it fits the category of Complementary Information, providing context and updates on AI system development and deployment without constituting an AI Incident or AI Hazard.

Driverless semi trucks are here, with little regulation and big promises - The Boston Globe

2025-05-27
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in the form of autonomous driving technology in semi trucks. It reports on their current use and the concerns raised by experts, drivers, and labor groups about safety risks and inadequate regulation. Although no actual accidents or injuries caused by these driverless trucks are reported, the described scenarios and expert opinions indicate credible risks of harm, such as accidents in bad weather or unexpected traffic conditions. The potential for catastrophic errors and the lack of comprehensive federal regulation support classifying this as an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated to AI harms.

Yes, That 18-Wheeler on a Texas Highway Is Driving Itself

2025-05-28
GV Wire
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous driving technology operating heavy trucks without drivers. Although the trucks have logged significant driverless miles, no incidents of injury, accidents, or other harms caused by these AI systems are reported. However, multiple experts and stakeholders express concerns about the potential for catastrophic errors, sensor failures in bad weather, and unpredictable traffic conditions that could lead to accidents or injuries. The lack of comprehensive federal regulation and the rapid deployment of this technology further increase the risk. Since the article focuses on the plausible future risks and the current use of AI systems without documented harm, it fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not primarily about responses or updates to a past incident, nor is it Unrelated because the AI system and its potential impacts are central to the article.

Yes, That 18-Wheeler on a Texas Highway Is Driving Itself

2025-05-27
DNyuz
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous driving technology operating 18-wheelers without drivers. It discusses the use of these AI systems on highways and the potential for malfunction or failure to respond appropriately in complex or adverse conditions, which could lead to injury or accidents. Although no actual incident causing harm has been reported, the credible concerns and expert warnings about safety risks and inadequate regulation indicate a plausible risk of future harm. Hence, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event and its potential risks.

Driverless Trucking Firm Aurora Puts Human Back in Driver's Seat

2025-05-16
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the Aurora Driver autonomous system) in commercial trucking. The decision to reintroduce a human driver indicates concerns about the AI system's safe operation or reliability. However, there is no indication that any harm (injury, property damage, rights violation, or community harm) has occurred, and the presence of a human driver as an intervention measure reduces the risk of harm. The event therefore represents a plausible risk scenario in which the AI system's use could have led to harm but has not yet done so. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information, as the main focus is on the potential for harm and operational safety rather than a response to a past incident or a general update.

Driverless trucks in Texas will have drivers once again

2025-05-19
Chron
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (self-driving trucks) whose deployment is being adjusted due to safety concerns, indicating a plausible risk of harm if fully autonomous operation proceeds without a backup driver. No actual harm or incident has occurred, but the potential for harm exists, making this an AI Hazard. The company's decision to include a backup driver is a mitigation measure acknowledging this risk. Hence, the event fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Driverless Trucking Firm Aurora Puts Human Back in Driver's Seat

2025-05-16
mint
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system (Aurora Driver autonomous system) in commercial trucking, which is an AI system by definition. However, the event is about the company reversing its fully driverless operation to include a human driver for safety and partner assurance. There is no reported harm or incident caused by the AI system, nor is there a plausible imminent risk of harm detailed. The change is a response to partner concerns and prototype issues, indicating a governance or operational update rather than an incident or hazard. Therefore, this is best classified as Complementary Information, as it provides context on the deployment and operational decisions around an AI system without describing an incident or hazard.

Aurora adds human driver back to Texas truck route

2025-05-19
The Dallas Morning News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—the Aurora Driver autonomous system operating commercial trucks. However, the event centers on the decision to add a human driver back into the vehicle to monitor and intervene if necessary, at the request of the truck manufacturer. There is no report of any injury, accident, rights violation, or other harm caused by the AI system. The change is a precautionary safety measure rather than a response to an incident or a credible imminent hazard. The article also mentions internal disagreements and executive departures but does not link these to harm caused by the AI system. Thus, the event is best categorized as Complementary Information, providing context and updates on AI deployment and safety practices rather than describing an AI Incident or AI Hazard.

Ossa Fisher Is Leading Aurora Innovation's Autonomous Trucking Revolution

2025-05-19
D Magazine
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous driving technology) in commercial trucking, which is explicitly described. However, the article does not report any realized harm or incidents resulting from the AI system's use. It discusses the potential benefits and the company's efforts to prove the technology's safety and readiness for scale. There is no indication of an accident, malfunction, or violation of rights caused by the AI system. Nor does it present a credible imminent risk of harm. Therefore, the article is best classified as Complementary Information, providing context and updates on AI deployment and governance in autonomous trucking.

Aurora Reverses Course, Puts Human Back in Driver's Seat

2025-05-16
Transport Topics
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Aurora Driver autonomous system) used in commercial driverless trucks. However, the article does not describe any realized harm or incident caused by the AI system. Instead, it reports a precautionary operational change to include a human driver to mitigate potential risks during early deployment. There is no indication of injury, rights violations, property damage, or other harms. Therefore, this is not an AI Incident. Nor does it describe a specific plausible future harm event beyond the general caution already addressed by the human intervention. It is not primarily about governance or societal response but an operational update. Hence, it fits best as Complementary Information, providing context on the evolving deployment and safety measures of an AI system in a real-world setting.

Aurora Rolls Into the Future: Launches Nation's First Commercial Self-Driving Trucking Service

2025-05-17
impactlab.com
Why's our monitor labelling this an incident or hazard?
The article describes the deployment and active use of an AI system (Aurora Driver) for autonomous trucking, which qualifies as an AI system. There is no mention of any actual harm, accident, or malfunction caused by the AI system so far. However, the operation of autonomous trucks on public roads inherently carries credible risks of injury, property damage, or disruption, which could plausibly lead to an AI Incident in the future. Hence, this event is best classified as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event.