Tesla Cybertruck FSD Malfunction Leads to Crash

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Florida-based software developer Jonathan Challinger crashed when his Tesla Cybertruck's Full Self-Driving system (v13, build 13.2.4) failed to merge out of an ending lane or turn, sending the vehicle over a curb and into a pole. He shared the incident on social media and warned other drivers to remain vigilant while using advanced driver-assistance features.[AI generated]

Why's our monitor labelling this an incident or hazard?

This is a direct harm incident caused by the AI system’s malfunction. Tesla’s FSD, an autonomous driving system, failed to detect or respond to a lane ending, leading to property damage and potential risk to pedestrians. Under the framework, a realized harm stemming from an AI system’s malfunction qualifies as an AI Incident.[AI generated]
AI principles
Safety, Robustness & digital security, Accountability, Transparency & explainability, Human wellbeing

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers

Harm types
Physical (injury), Economic/Property, Reputational

Severity
AI incident

Business function:
Other

AI system task:
Recognition/object detection, Reasoning with knowledge structures/planning, Goal-driven organisation

In other databases

Articles about this incident or hazard

Tesla Cybertruck Drives Itself Into a Pole, Owner Says 'Thank You Tesla'

2025-02-11
The Drive
Why's our monitor labelling this an incident or hazard?
This is a direct harm incident caused by the AI system’s malfunction. Tesla’s FSD, an autonomous driving system, failed to detect or respond to a lane ending, leading to property damage and potential risk to pedestrians. Under the framework, a realized harm stemming from an AI system’s malfunction qualifies as an AI Incident.

Tesla Cybertruck Crashes Into Light Pole While Using FSD 13.2.4

2025-02-11
CleanTechnica
Why's our monitor labelling this an incident or hazard?
Tesla’s FSD is an AI perception and planning system. Its failure to recognize the lane ending and the light pole led directly to the crash, causing property damage and a potential safety hazard. This constitutes an AI Incident because the AI system’s malfunction directly resulted in harm.

"Big fail on my part..." Tesla Cybertruck owner's terrifying crash puts Full Self-Driving tech under scrutiny: Will Musk respond? - The Times of India

2025-02-12
The Times of India
Why's our monitor labelling this an incident or hazard?
The event describes a real-world malfunction of Tesla’s FSD AI system, which directly led to a collision and posed serious harm to the driver and property. This constitutes an AI Incident, as the AI system’s erroneous behavior caused (or nearly caused) physical harm and property damage.

Cybertruck crash raises alarm bells about Tesla's self-driving software

2025-02-14
The Hindu
Why's our monitor labelling this an incident or hazard?
The incident stems from the real-world use of Tesla’s AI-driven FSD software which failed to correctly interpret and respond to a lane ending, causing the vehicle to crash. The malfunction of the AI system is the direct cause of the harm (crash), satisfying the criteria for an AI Incident.

Cybertruck Crash Raises Alarm Bells About Tesla's Self-Driving Software

2025-02-13
US News & World Report
Why's our monitor labelling this an incident or hazard?
The crash directly resulted from the use and malfunction of an AI system (Tesla’s FSD). The vehicle’s inability to detect a lane ending and execute a safe merge under the AI’s control constitutes an AI incident, as harm (vehicle damage and risk of injury) materialized due to the system’s failure.

Tesla Cybertruck Crash Raises Safety Concerns Over Full Self-Driving Technology - EconoTimes

2025-02-14
EconoTimes
Why's our monitor labelling this an incident or hazard?
The incident involves the use and malfunction of Tesla’s FSD AI system, which directly led to a vehicle crash (property damage and potential risk to occupants). This is a realized harm caused by an AI system’s decision‐making failure, fitting the definition of an AI Incident.

Tesla Cybertruck crash into a pole in Nevada was in self-driving mode: owner

2025-02-15
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves a deployed AI system (Tesla’s FSD) whose malfunction (failure to merge out of an ending lane) directly resulted in a crash. This constitutes realized harm—property damage and potential risk of personal injury—stemming from the AI’s failure to perform its intended safety function. Thus, it meets the criteria for an AI Incident.

Tesla Cybertruck crash into a pole in Nevada was in self-driving mode: owner

2025-02-15
AOL
Why's our monitor labelling this an incident or hazard?
The crash directly resulted from the malfunction of Tesla’s FSD system—an AI-based advanced driver assistance system—while in self-driving mode. This malfunction caused tangible harm (vehicle and infrastructure damage and potential injury risk), meeting the definition of an AI Incident.

Tesla Cybertruck's self-driving mode fails, causing crash

2025-02-11
autotechinsight.ihsmarkit.com
Why's our monitor labelling this an incident or hazard?
The crash resulted directly from a malfunction of an AI system (Tesla’s FSD). The AI failed to perform its driving task, leading to property damage and posing safety risks. This constitutes an AI incident because the self-driving AI’s failure directly caused the harm.

Tesla Cybertruck crashes into pole while using latest Full Self-Driving software

2025-02-11
TechSpot
Why's our monitor labelling this an incident or hazard?
Tesla’s FSD software (an AI system) directly failed to detect lane endings and objects, causing the Cybertruck to hit a curb and then a pole. This malfunction led to realized harm (vehicle damage and potential injury), fitting the definition of an AI Incident.

Cybertruck owner warns others avoid his 'mistake' after terrifying crash while using self-driving feature

2025-02-11
UNILAD
Why's our monitor labelling this an incident or hazard?
The incident involves a malfunction of an AI-driven self-driving system (Tesla’s FSD) that directly led to a crash. Although no injuries occurred, the AI failure caused property damage and posed safety risks, qualifying it as an AI Incident.

Tesla Cybertruck owner's terrifying crash sparks full self-driving feature concerns: Will Elon Musk respond?

2025-02-11
The Economic Times
Why's our monitor labelling this an incident or hazard?
An AI system (Tesla’s Full Self-Driving feature) malfunctioned during operation, directly leading to a collision. The incident resulted in property damage and posed a risk of personal injury. This constitutes an AI Incident, as the self-driving AI’s failure to detect and merge out of an ending lane caused actual harm.

Terrifying Footage Shows Cybertruck on Self-Driving Mode Swerve Into Oncoming Traffic

2025-02-12
Futurism
Why's our monitor labelling this an incident or hazard?
The event describes a real-world use of Tesla’s FSD AI system that malfunctioned by attempting to turn into oncoming traffic, creating immediate physical harm risk. This direct involvement of an AI system causing hazardous behavior and near-crash conditions meets the criteria for an AI Incident.

Tesla's FSD Almost Crashes Cybertruck Into Oncoming Traffic As Musk Plans Robotaxis | Carscoops

2025-02-13
Carscoops
Why's our monitor labelling this an incident or hazard?
This is a case where an AI system (Tesla’s FSD) malfunctioned in real time, creating a clear risk of harm (a near side-impact collision). Because the human driver was able to intervene and no actual harm occurred, it constitutes a demonstrated hazard rather than a realized incident.

A Cybertruck on Autopilot slammed into a light pole, and it went viral

2025-02-13
The Olympian
Why's our monitor labelling this an incident or hazard?
The crash occurred because Tesla’s AI driving assistance (Full Self-Driving/Autopilot) misinterpreted lane geometry and failed to slow down or change lanes, directly causing the collision. This qualifies as an AI Incident: an AI system malfunction during use led to real-world harm (vehicle damage and potential injury).

Cybertruck crash raises alarm bells about Tesla's self-driving software

2025-02-13
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The crash directly resulted from the Full Self-Driving feature’s failure to handle a lane ending and merge, leading to physical harm (property damage and potential injury). This qualifies as an AI Incident because an AI system’s malfunction caused tangible harm.

Tesla Cybertruck in self-driving mode crashes into pole near Reno school; X post goes viral

2025-02-13
Reno Gazette Journal
Why's our monitor labelling this an incident or hazard?
This event involves an AI system (Tesla’s self-driving mode) malfunctioning during use, directly leading to property damage. The harm has occurred and stems from the AI system’s failure to perform its driving task, fitting the definition of an AI Incident.

Tesla's Self-Driving Cybertruck Crashes, Testing Faith In FSD Tech

2025-02-14
Finimize
Why's our monitor labelling this an incident or hazard?
The crash directly stems from the malfunction of Tesla’s Full Self-Driving AI system while in active use, resulting in property damage and potential risk to occupants and others. This constitutes an AI Incident since the AI system’s failure caused harm and undermines trust in autonomous driving.

The terrifying experience of a Tesla Driver highlights the flaws of autonomous driving - Softonic

2025-02-14
Softonic
Why's our monitor labelling this an incident or hazard?
Tesla’s FSD is an AI-driven autonomous driving system. In this case, its failure to execute a basic lane change maneuver resulted in a collision, demonstrating a direct harmful outcome from the AI’s malfunction during use. The harm is realized and stems from the AI system’s behavior, classifying it as an AI Incident.

Cybertruck Driver Shares Message for Musk After Crashing in Self-Driving Mode: 'Help Save Others From the Same Fate or Far Worse'

2025-02-14
International Business Times
Why's our monitor labelling this an incident or hazard?
The article describes a realized harm resulting directly from the malfunction of Tesla’s FSD feature—an AI system—leading to a vehicle crash. This meets the definition of an AI Incident, as the AI system’s failure during operation caused physical harm (a collision).

Cybertruck crash raises alarm bells about Tesla's self-driving software

2025-02-14
ThePrint
Why's our monitor labelling this an incident or hazard?
The incident involves actual harm (a crash) tied to the use of Tesla’s advanced driver-assistance AI. The article centers on the system’s safety failures, expert warnings about its readiness, and prior investigations into similar crashes, all of which point to an AI system malfunction leading to real-world harm. Therefore, it is classified as an AI Incident.

Tesla Cybertruck slams into light pole while self-driving

2025-02-11
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The event describes a real-world malfunction of Tesla’s advanced driver-assistance AI system (FSD), where the system did not detect road markings or obstacles, directly causing a collision. This meets the criteria for an AI Incident because the AI’s failure in use led to tangible harm (property damage and potential personal injury).

Tesla Cybertruck EV crashes due to self-driving software fault?

2025-02-12
Thanh Niên
Why's our monitor labelling this an incident or hazard?
The incident involves Tesla’s FSD system—a bona fide AI system—malfunctioning during use, directly leading to a collision with a pole and property damage. This meets the definition of an AI Incident as the AI system’s malfunction caused harm.

Tesla stock plunges as BYD partners with DeepSeek

2025-02-12
BAO DIEN TU VTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of autonomous driving technologies being developed and deployed by BYD and Tesla. However, there is no indication of any injury, rights violation, infrastructure disruption, or other harms caused by these AI systems. The stock price drop is a market reaction, not a harm caused by AI malfunction or misuse. The article focuses on competitive dynamics and strategic developments, which enrich understanding of the AI ecosystem but do not report an incident or hazard. Hence, it fits the definition of Complementary Information.

Tesla robotaxis by June? Musk turns to Texas for hands-off regulation - ET CIO

2025-02-10
ETCIO.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Autopilot and Full Self-Driving) in autonomous vehicles. While no specific harm has been reported yet, the deployment of truly driverless taxis carries plausible risks of accidents and liability issues, which could lead to injury or harm. Therefore, this situation represents an AI Hazard due to the credible potential for harm from the AI system's use in driverless taxis.

Tesla robotaxis by June? Musk turns to Texas for hands-off regulation - ET Auto

2025-02-11
ETAuto.com
Why's our monitor labelling this an incident or hazard?
The article centers on Tesla's intention to launch fully driverless robotaxis in Texas, a state with minimal regulation for autonomous vehicles, which could plausibly lead to safety incidents or legal liabilities. While it references past near-misses and complaints involving other autonomous vehicles, it does not report any actual harm caused by Tesla's system so far. Therefore, the event represents a credible risk of future harm stemming from the use of an AI system (autonomous driving AI) but does not describe a realized harm or incident. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Tesla Cybertruck Crashes Into Pole While Running 'Full Self-Driving' Software

2025-02-10
Yahoo
Why's our monitor labelling this an incident or hazard?
The Tesla Cybertruck was operating with Tesla's Full Self-Driving software, an AI system designed to control vehicle navigation and driving tasks. The crash occurred because the AI system failed to merge lanes and slow down, directly causing the collision with a pole. This is a clear case where the AI system's malfunction led to harm to property and potential injury. The driver's admitted complacency and the system's failure to prevent the crash despite warnings further support the AI system's role in the incident. The ongoing NHTSA investigation into Tesla FSD crashes reinforces the classification as an AI Incident rather than a hazard or complementary information. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's malfunction during use.

Tesla Cybertruck Crashes Into Pole While Running 'Full Self-Driving' Software

2025-02-10
PC Mag Middle East
Why's our monitor labelling this an incident or hazard?
The Tesla Cybertruck was operating with Tesla's Full Self-Driving software, an AI system designed to autonomously control the vehicle. The crash occurred because the AI system failed to perform a lane merge and did not slow down, directly causing the collision with a pole. This resulted in harm to property (vehicle damage and infrastructure) and posed a risk to the driver's safety. The driver's admitted complacency and the system's failure to prevent the crash indicate a malfunction or misuse of the AI system. Given the realized harm and direct involvement of the AI system, this event meets the criteria for an AI Incident.

Tesla Cybertruck Crashes Into Pole While Running Latest Version of FSD Software

2025-02-10
PC Magazine
Why's our monitor labelling this an incident or hazard?
The Tesla FSD software is an AI system designed to autonomously control driving functions. The crash was directly caused by the AI system's failure to merge lanes and avoid collision, which constitutes a malfunction or failure in the AI system's operation. The physical damage to the vehicle and potential risk to the driver and others qualifies as injury or harm to persons or property. Therefore, this event meets the criteria for an AI Incident because the AI system's malfunction directly led to harm (vehicle damage and risk to driver safety).

A Tesla Cybertruck crashes with FSD active: what happened

2025-02-10
Motor1.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, Tesla's Full Self-Driving software, which malfunctioned by veering the vehicle into the wrong lane and causing a crash. This malfunction directly led to harm in the form of property damage (vehicle and lamppost). Although no injuries occurred, the harm to property qualifies this as an AI Incident under the framework. The AI system's use and malfunction are central to the event, and the harm is realized, not just potential.

Tesla Delivery in China Sets Record in Q4 as AI Train Challenge Confronts FSD Rollout Plan in 2025 - TMTPost

2025-02-08
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Tesla's use of AI for FSD development and the challenges faced in training AI models due to data transfer restrictions between China and the US. However, it does not describe any actual harm, malfunction, or misuse of the AI system leading to injury, rights violations, or other harms. The discussion centers on ongoing development, regulatory challenges, and future plans, which fits the definition of Complementary Information. There is no direct or indirect harm reported, nor a plausible imminent hazard causing harm. Hence, it is not an AI Incident or AI Hazard but rather an update on AI deployment and governance challenges.

Tesla Cybertruck self-drive mode goes rogue: Crashes into pole

2025-02-10
The Financial Express
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system as it performs autonomous driving tasks including navigation and decision-making. The crash was directly caused by the AI system's failure to correctly interpret the driving environment and execute the required turn, resulting in property damage. Although no injuries occurred, the harm to property qualifies as an AI Incident. The owner's admission of overreliance on the system and the system's failure to act correctly confirm the AI system's role in the incident. Therefore, this event is classified as an AI Incident.

Tesla Cybertruck crash on Full Self-Driving v13 goes viral

2025-02-09
Electrek
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system designed to autonomously assist driving. The crash was caused by the AI system's failure to detect and respond appropriately to a lane ending and merging scenario, which is a malfunction during use. The incident directly led to a collision with a pole, posing injury risk to the driver. The event fits the definition of an AI Incident because the AI system's malfunction directly led to harm (or near harm) to a person. The report also discusses the risk of complacency and misleading claims about the system's capabilities, reinforcing the harm caused by reliance on the AI system.

Elon Musk is about to masterfully move the goalpost on Tesla Full Self-Driving

2025-02-10
Electrek
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Tesla's FSD) and discusses its development and use, including teleoperation and geo-fencing, which are AI-related. However, it does not report any actual harm or incident resulting from the AI system's malfunction or misuse. The concerns raised are about unmet promises and marketing strategies rather than realized or imminent harm. The article provides valuable context and critique about the AI system's deployment and public communication, fitting the definition of Complementary Information rather than an Incident or Hazard.

Cybertruck Using Tesla's So-Called 'Full Self-Driving' Assistance Software Crashes Into Pole

2025-02-10
Jalopnik
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Tesla's FSD software, an AI system designed for autonomous driving assistance. The crash occurred because the AI failed to merge out of a lane that was ending and did not slow down or turn in time, directly causing the collision. This is a malfunction of the AI system during its use, leading to harm (vehicle damage and potential injury). The involvement of the AI system is clear and direct, and the harm is realized, not just potential. Hence, this event meets the criteria for an AI Incident.

Tesla Cybertruck Allegedly On FSD Drives Itself Into A Light Pole | Carscoops

2025-02-10
Carscoops
Why's our monitor labelling this an incident or hazard?
The Tesla Cybertruck was operating with Tesla's Full Self-Driving (FSD) software, an AI system designed for autonomous driving. The crash occurred because the AI failed to recognize a lane merge, leading to the vehicle driving onto a curb and hitting a light pole, causing significant property damage. Although the driver admits fault for losing focus, the AI system's malfunction was a direct contributing factor. This meets the criteria for an AI Incident as the AI system's malfunction directly led to harm (property damage) and risk to the driver, fulfilling the definition of an AI Incident under the framework.

Tesla Driver Issues Warning After His Cybertruck Totals Itself on "Full Self-Driving" Mode

2025-02-10
Futurism
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system designed to assist or automate driving. The crash occurred due to the AI system's failure to act appropriately (malfunction) in a critical driving situation, directly causing harm to property and posing risk to human safety. The event is not merely a potential hazard but a realized incident with tangible harm. The driver's own account and the broader context of regulatory scrutiny and prior crashes linked to Tesla's FSD confirm the AI system's role in the incident. Therefore, this event qualifies as an AI Incident.

Tesla Cybertruck owner still loves his pickup, even after it drove him into a pole

2025-02-11
Perth Now
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system designed to control acceleration, braking, and steering. The incident involved the AI system's failure to act appropriately in a lane-ending scenario, directly causing the vehicle to crash into physical objects. This constitutes a malfunction of the AI system leading to harm to property (the vehicle and pole) and potential risk to the driver. Although no physical injury occurred, the event meets the criteria for an AI Incident because the AI system's malfunction directly led to harm (property damage and risk to the driver).

Most Tesla drivers won't get self-driving without a hardware upgrade

2025-02-10
Bradenton Herald
Why's our monitor labelling this an incident or hazard?
The article primarily provides an update and analysis on Tesla's FSD system development, hardware requirements, and the challenges faced, including past misuse and safety concerns. While it references past harms related to Autopilot misuse, it does not describe a new AI Incident or AI Hazard event. The focus is on the company's statements and the technological and safety context, which fits the definition of Complementary Information as it enhances understanding of the AI system's ecosystem and ongoing developments without reporting a new harm or plausible future harm event.

Sleep-deprived driver grateful after Tesla prevents scary highway scenario: '[It] probably saved my life today'

2025-02-09
The Cool Down
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system that performs autonomous driving tasks, including lane keeping and hazard detection. The incident involved the use of this AI system, which directly prevented harm by alerting the driver and maintaining control of the vehicle during a dangerous moment when the driver was asleep. This fits the definition of an AI Incident because the AI system's use directly led to the prevention of injury or harm to a person. Although no injury occurred, the AI system's role was pivotal in averting a potentially serious accident, which qualifies as an AI Incident under the framework.

Tesla Cybertruck crashes into a pole with active FSD v13

2025-02-10
ArenaEV.com
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system designed for autonomous driving. The crash was caused by the AI system's failure to perform the expected maneuver (slowing down and lane changing), leading to a collision with a pole. This is a direct harm to property and potentially to persons, fulfilling the criteria for an AI Incident. The event involves the use and malfunction of the AI system, and the harm has already occurred. Hence, it is not merely a hazard or complementary information but an AI Incident.