AI-Driven Autonomous Trucks Tested on U.S. Highways Raise Safety Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Aurora Innovation and other companies are testing AI-powered driverless semi-trucks on Texas highways, with plans for wider deployment by 2027. Incidents such as phantom braking, along with industry concerns, have led to pauses and the reintroduction of human operators, highlighting potential risks, though no harm has been reported yet.[AI generated]

Why's our monitor labelling this an incident or hazard?

The presence of AI systems is explicit, as the trucks use autonomous AI driving systems. The article mentions safety concerns and incidents like phantom braking and lawsuits, indicating malfunction or problematic use. However, no specific harm event (injury, property damage, or rights violation) is reported as having occurred. The article highlights potential future risks and ongoing testing, which fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the current testing and associated risks, not on responses or governance. It is not unrelated because AI systems are central to the event.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Mobility and autonomous vehicles

Severity
AI hazard

Business function
Logistics

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard


Driverless Big Rigs Are Coming to American Highways, and Soon

2026-03-17
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous driving technology for big rigs, which qualifies as an AI system. However, there is no mention of any harm, accident, malfunction, or violation resulting from the use or development of these AI systems. The pause requested by the truck manufacturer, to allow a human operator onboard, suggests caution but does not indicate an incident or hazard. The article focuses on the ongoing testing and potential market entry of autonomous trucks, which is informative but does not describe realized or imminent harm. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Driverless semis are coming to your road and already being tested in Texas

2026-03-17
The Independent
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit, as the trucks use autonomous AI driving systems. The article mentions safety concerns and incidents like phantom braking and lawsuits, indicating malfunction or problematic use. However, no specific harm event (injury, property damage, or rights violation) is reported as having occurred. The article highlights potential future risks and ongoing testing, which fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the current testing and associated risks, not on responses or governance. It is not unrelated because AI systems are central to the event.

Self-Driving Semi Trucks Are Coming, and They're About to Transform a $900 Billion Industry

2026-03-17
Entrepreneur
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (autonomous trucks) and their planned use, but it does not describe any harm or plausible harm that has occurred or is imminent. The focus is on the business and technological development and deployment plans, which fits the definition of Complementary Information. There is no indication of direct or indirect harm, nor credible warnings of future harm, so it is not an AI Incident or AI Hazard.

Stockwatch

2026-03-18
Stockwatch
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of autonomous driving software being integrated into trucks for commercial use. Although the trucks are not yet deployed and no harm has been reported, the nature of autonomous vehicles inherently carries risks that could lead to injury, disruption, or other harms. This fits the definition of an AI Hazard, as the development and planned use of these AI systems could plausibly lead to an AI Incident in the future.

Driverless Big Rigs Are Coming to American Highways, and Soon

2026-03-17
DNYUZ
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in autonomous trucking operations, which are currently being tested and partially deployed. It discusses the malfunction known as phantom braking, which could plausibly lead to accidents and severe harm given the size and weight of trucks. However, no actual accidents or injuries have been reported. The presence of safety drivers and operators further indicates that harm has not yet materialized. Thus, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet occurred.