Tesla FSD Fails to Detect Trains at Railroad Crossings, Prompting U.S. Safety Investigation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Tesla's Full Self-Driving (FSD) AI system has repeatedly failed to detect trains at railroad crossings, leading to near-accidents and multiple user complaints. Video evidence and reports from several drivers prompted the U.S. National Highway Traffic Safety Administration (NHTSA) to launch an investigation into potential safety defects in the system.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Tesla's Full Self-Driving software) malfunctioning by failing to recognize critical safety signals at railroad crossings. This malfunction has directly led to at least one collision with a train and multiple near-miss incidents, indicating realized harm or significant risk to human safety. Therefore, this qualifies as an AI Incident due to injury or harm to persons resulting from the AI system's malfunction during use.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers

Harm types
Physical (injury)

Severity
AI incident

AI system task:
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


Tesla's 'Full Self-Driving' May Fail To Stop At Railroad Crossings - Jalopnik

2025-09-17
Jalopnik
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving software) malfunctioning by failing to recognize critical safety signals at railroad crossings. This malfunction has directly led to at least one collision with a train and multiple near-miss incidents, indicating realized harm or significant risk to human safety. Therefore, this qualifies as an AI Incident due to injury or harm to persons resulting from the AI system's malfunction during use.

Tesla's Full-Self Driving Under Fire After Failing To Recognize Train Crossings | Carscoops

2025-09-17
Carscoops
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as Tesla's Full Self-Driving system, a semi-autonomous AI driving system. The system's failure to recognize railroad crossings and respond appropriately has directly led to at least one reported collision with a train, which constitutes harm to persons and property. The involvement of the AI system in the development and use phases is clear, and the harm is realized, not just potential. Hence, this meets the criteria for an AI Incident rather than a hazard or complementary information.

Tesla FSD Can Kill People at Railroad Crossings, and the NHTSA Is Aware of the Problem

2025-09-17
autoevolution
Why's our monitor labelling this an incident or hazard?
The Tesla FSD software is an AI system designed for autonomous driving. The reported failures at railroad crossings have directly caused harm or near harm to vehicle occupants and potentially others, fulfilling the criteria for an AI Incident. The incidents include a collision with a train and multiple near misses where the AI system ignored active crossing signals. The involvement of the AI system's malfunction in these events is explicit and central to the harm. The NHTSA's awareness and communication with Tesla further confirm the seriousness of the issue. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction during its use.

Tesla owners report dangerous full self-driving glitches at railway crossings

2025-09-18
mid-day
Why's our monitor labelling this an incident or hazard?
The event involves Tesla's FSD software, an AI system designed for autonomous driving. The reported failures to recognize critical safety signals at railroad crossings have directly caused hazardous situations requiring emergency braking to avoid collisions, indicating a malfunction of the AI system. This meets the criteria for an AI Incident as it involves harm or risk of harm to persons due to the AI system's malfunction during its use.

Drivers Say Self-Driving Teslas Struggle at Railroad Crossings

2025-09-17
Newser
Why's our monitor labelling this an incident or hazard?
The Tesla FSD software is an AI system designed for autonomous driving assistance. The reported failures at railroad crossings have directly led to dangerous situations, including a collision with a train, which constitutes harm to persons and property. The AI system's inability to detect crossing arms, flashing lights, and trains is a malfunction during use, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in these events.

NHTSA Is Monitoring Tesla FSD Issue With Train Crossings

2025-09-17
AutoSpies.com
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving software is an AI system controlling autonomous vehicle behavior. Reports of it failing to stop at railroad crossings represent a malfunction or failure in its use that could plausibly lead to harm (e.g., collisions with trains). Since the NHTSA is monitoring but has not yet opened an investigation and no actual harm is reported in the article, this event constitutes an AI Hazard rather than an AI Incident. The potential for injury or harm is credible and directly linked to the AI system's malfunction.

Tesla drivers claim self-driving function fails at some railroad crossings

2025-09-17
The Independent
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system designed to autonomously navigate and control the vehicle. The reported failures to detect railroad crossing signals and barriers represent malfunctions of this AI system during its use. These malfunctions have directly led to hazardous situations that could cause injury or death, fulfilling the harm criteria (a) injury or harm to persons. The incidents are not hypothetical but have occurred multiple times, with drivers needing to intervene to prevent accidents. The involvement of a regulatory body further supports the seriousness of the issue. Hence, this event meets the definition of an AI Incident rather than a hazard or complementary information.

Tesla Owners Claim Elon Musk's 'Self-Driving' Software Fails to Detect Trains at Railroad Crossings

2025-09-18
Breitbart
Why's our monitor labelling this an incident or hazard?
The Tesla FSD software is an AI system designed to autonomously navigate driving environments. The reported failures to detect trains and respond to railroad crossing signals represent a malfunction of this AI system during its use. These malfunctions have directly led to dangerous situations that could cause injury or death, fulfilling the harm criteria for an AI Incident. The presence of multiple documented incidents, video evidence, and regulatory acknowledgment further supports this classification. The event is not merely a potential hazard or complementary information but a realized safety failure with direct risk to human health.

Report: Tesla's 'Full Self-Driving' - May Not Be

2025-09-18
WJBC
Why's our monitor labelling this an incident or hazard?
The Tesla FSD software is an AI system involved in advanced driver-assistance with neural network-based decision-making. The reported failures to detect railroad crossings and the resulting near-accidents and a confirmed crash demonstrate direct harm to human safety caused by the AI system's malfunction. The involvement of the AI system in these incidents is explicit and central, fulfilling the criteria for an AI Incident due to injury or harm to persons. The recurring nature of the problem and regulatory attention further support this classification.

Tesla Full Self-Driving Tech Has Trouble Identifying Railroad Crossings, Report Says

2025-09-19
La Voce di New York
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system as it uses an AI model trained on data to make driving decisions. The reported failures in recognizing railroad crossings and responding safely have directly led to dangerous situations, including a vehicle stuck on tracks in front of a train, which poses injury or harm to passengers. The collisions in the robotaxi pilot program further demonstrate malfunction or failure in AI operation causing harm or risk. The NHTSA investigation and consumer complaints confirm the harm is realized or ongoing. Thus, this event meets the criteria for an AI Incident because the AI system's malfunction and use have directly led to harm or significant risk to human health and safety.

Self-Driving Teslas Keep Driving Into the Path of Oncoming Trains

2025-09-20
Futurism
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving software is an AI system designed to autonomously control vehicle navigation and driving decisions. The reported failures at railroad crossings, including failure to stop for trains and near collisions, demonstrate a malfunction of the AI system during its use. These malfunctions have directly endangered human safety, fulfilling the harm criteria (a) injury or harm to health. The repeated nature of these incidents and documented cases of actual collisions confirm that harm has occurred, not just potential harm. Hence, this event meets the definition of an AI Incident rather than a hazard or complementary information.

Tesla FSD Exposed for Failing to Recognize Railroad Crossings; U.S. Regulators Step In to Investigate

2025-09-18
中关村在线
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system designed for autonomous driving. The reported incidents involve the AI system's failure to detect and respond appropriately to a critical hazard (railway crossing with an approaching train), which directly endangered human safety. The emergency braking by the driver prevented harm, but the AI system's malfunction was a direct contributing factor to the hazardous situation. Multiple user reports and video evidence support that this is a recurring issue, indicating a systemic defect. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's malfunction and the risk of injury or harm to persons.

Tesla FSD Accused of Failing to Recognize Trains at Railroad Crossings; U.S. Regulator Has Opened an Investigation

2025-09-17
驱动之家
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system designed for autonomous driving. The described incidents involve the AI system failing to detect a train at a railway crossing, which directly led to a near-accident situation posing a risk of injury or death. Multiple reports and video evidence confirm the malfunction during use. The regulatory investigation further supports the seriousness of the safety defect. Therefore, this event qualifies as an AI Incident due to the AI system's malfunction directly causing or nearly causing harm to people.

Tesla FSD Accused of Failing to Recognize Trains at Railroad Crossings; U.S. Regulator Has Opened an Investigation

2025-09-17
证券之星
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system involved in autonomous driving decisions. The failure to detect a train at a railway crossing directly endangered the driver's safety, constituting harm to a person. The involvement of the AI system's malfunction in this near-accident meets the criteria for an AI Incident, as the AI system's malfunction directly led to a significant safety risk and actual harm was narrowly avoided. The regulatory investigation further confirms the seriousness of the issue.

Tesla FSD Draws Complaints over Frequent Failures at Railroad Crossings; NHTSA Steps In

2025-09-18
环球网
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous driving assistance. The reported failures at railroad crossings represent malfunctions of this AI system that have directly endangered users' safety, fulfilling the criteria for harm to health. The involvement of a regulatory body and multiple user complaints confirm that harm has occurred or is ongoing. Hence, this event is classified as an AI Incident rather than a hazard or complementary information.

Tesla FSD "Malfunctions" at Railroad Crossings: Another Safety Wake-Up Call for Autonomous Driving

2025-09-17
新浪财经
Why's our monitor labelling this an incident or hazard?
Tesla's FSD software is an AI system involved in autonomous driving decisions. The reported failures at railway crossings have directly caused or nearly caused accidents, including one collision with a train. This constitutes an AI Incident because the AI system's malfunction or insufficient performance has directly led to harm or risk of harm to persons and property. The involvement of regulatory scrutiny further supports the seriousness of the issue.

Tesla FSD System Accused of Posing Safety Hazards at Railroad Crossings; U.S. Regulators Have Stepped In

2025-09-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed to assist driving by interpreting environmental inputs and making navigation decisions. The reported failures to detect railway crossing signals and trains represent malfunctions of this AI system during its use, which directly led to hazardous situations risking injury or death. Multiple user reports and video evidence confirm the recurring nature of this problem, and the regulatory agency's involvement underscores the seriousness of the safety risk. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's malfunction and the potential for harm to human health and safety.

Tesla FSD Accused of Failing to Recognize Trains at Railroad Crossings; U.S. Regulator Has Opened an Investigation

2025-09-17
新浪财经
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed to autonomously navigate driving environments. The reported failures to detect trains at railway crossings represent malfunctions of this AI system during its use, directly leading to potentially life-threatening situations (harm to persons). Multiple user reports and video evidence support the occurrence of these malfunctions. The regulatory investigation further confirms the recognition of these incidents as safety hazards. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's malfunction and the risk of injury or harm to people.

Tesla FSD Exposed for Failing to Recognize Railroad Crossings; U.S. Regulators Step In to Investigate

2025-09-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system designed for autonomous driving. The reported failure to detect railway crossings with descending barriers and approaching trains is a malfunction of the AI system during its use. This malfunction directly led to hazardous situations that could have caused injury or death, fulfilling the harm criteria (a). Multiple user reports and video evidence confirm the issue is systemic rather than isolated. The regulatory investigation underscores the recognized risk and actual harm potential. Hence, this event meets the definition of an AI Incident.

Tesla FSD Accused of Failing to Recognize Trains at Railroad Crossings; U.S. Regulator Has Opened an Investigation - cnBeta.COM mobile edition

2025-09-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system designed for autonomous driving. The failure to detect trains at railway crossings is a malfunction of this AI system during its use, which directly led to near-accidents and poses a serious risk of injury or death to vehicle occupants and others. The involvement of the regulatory body investigating potential safety defects confirms the seriousness of the issue. Therefore, this event qualifies as an AI Incident due to direct harm or risk of harm caused by the AI system's malfunction.

Tesla FSD Accused of Failing to Recognize Trains at Railroad Crossings; U.S. Regulator Has Opened an Investigation as Safety Concerns Mount

2025-09-18
中华网科技公司
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed to autonomously navigate driving environments. The reported failures to detect trains at railway crossings represent malfunctions in the AI system's perception and decision-making capabilities. These malfunctions have directly led to near-miss incidents that could have caused injury or death, fulfilling the criteria for an AI Incident. The regulatory investigation further confirms the seriousness and recognition of the harm potential. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's malfunction and the safety hazard posed to human life.

Tesla Shortens FSD Violation-Record Clearing Cycle to 3.5 Days

2025-09-18
新浪财经
Why's our monitor labelling this an incident or hazard?
Tesla's FSD and Autopilot systems are AI systems involved in autonomous driving and driver monitoring. The article focuses on changes to the violation record clearing period and monitoring thresholds, which are part of the AI system's use and development. While these changes could plausibly lead to safety risks or incidents in the future (e.g., reduced driver attentiveness leading to accidents), the article does not describe any actual harm or incident occurring as a result. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but no harm has yet been reported.

Tesla Drove into the Path of a Train: A Critical Flaw Found in Tesla's Autopilot

2025-09-18
ТСН.ua
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving autopilot) whose malfunction (failure to recognize railway crossings and trains) directly caused a traffic accident. This meets the definition of an AI Incident because the AI system's malfunction has directly led to harm (collision with a train). The presence of multiple user complaints and confirmation by the NHTSA further supports the classification as an AI Incident rather than a hazard or complementary information.

Tesla's Autopilot Has Problems at Railroad Crossings - Media

2025-09-18
Украинская сеть новостей
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving) whose failure to recognize railroad crossings has directly caused or contributed to dangerous situations, including at least one incident in which a Tesla vehicle was struck by a train. The AI system's failure to act appropriately constitutes a malfunction leading to harm or risk of harm to people, which fits the definition of an AI Incident. The involvement of the NHTSA and reports of multiple occurrences further support this classification.

Critical Tesla Autopilot Error: Car Ends Up in a Train's Path - Avto bigmir)net

2025-09-18
www.bigmir.net
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system involved in autonomous vehicle navigation. The described incidents involve the AI system failing to detect critical safety features at railroad crossings, which directly led to hazardous situations and near collisions. This constitutes an AI Incident because the AI system's malfunction has directly led to harm or risk of harm to persons (safety risks and near accidents). The involvement of the National Highway Traffic Safety Administration (NHTSA) and multiple user reports further confirm the significance of the issue.

Tesla's Autopilot Does Not See Railroad Crossings, Leading to an Accident (Video)

2025-09-18
ФОКУС
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving autopilot) whose malfunction (failure to detect railroad crossings and signals) has directly led to a traffic accident involving a collision with a train. This constitutes harm to persons and property, fulfilling the criteria for an AI Incident. The involvement of the AI system is explicit, and the harm has materialized, not just potential.

Drivers Complain That Tesla's Autopilot Does Not See Railroad Crossings

2025-09-18
uainfo.org
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) whose malfunction in recognizing railway crossing signals has directly caused dangerous incidents, including a collision with a train. This constitutes injury or harm to persons and harm to property, fulfilling the criteria for an AI Incident. The AI system's failure to act appropriately in these safety-critical scenarios is a direct cause of harm or near harm, not merely a potential risk. Therefore, this event is classified as an AI Incident.

Steered an EV into a Train's Path: A Dangerous Defect Found in Tesla's Autopilot (VIDEO)

2025-09-20
Новини України
Why's our monitor labelling this an incident or hazard?
The Tesla autopilot is an AI system involved in autonomous driving. The reported defect where it does not detect railroad crossings or signals has directly led to at least one collision with a train, which is a harm to the health and safety of people. Multiple complaints and confirmed incidents demonstrate realized harm. The malfunction of the AI system is the root cause of the incident, fulfilling the criteria for an AI Incident.

US senators urge agency to probe Tesla Full Self-Driving response to railroad crossings

2025-09-29
Reuters
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system designed to autonomously navigate driving environments. The reported failures to detect and respond safely to railroad crossings have directly led to near-collisions, which constitute a direct or indirect risk of injury or harm to people. The senators' urging of a regulatory probe and the existing NHTSA investigation into collisions linked to FSD underlines the seriousness and materialization of harm or near-harm. Therefore, this event qualifies as an AI Incident because the AI system's malfunction or failure has directly or indirectly led to significant safety risks and near-harmful events.

Two US senators urge probe of Tesla's Full Self-Driving response to rail crossings By Reuters

2025-09-29
Investing.com
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system that autonomously controls vehicle navigation and driving tasks. The reported failures to detect railroad crossings and the resulting near-collisions and fatal crash demonstrate that the AI system's malfunction or misuse has directly or indirectly led to harm or risk of harm to people. The involvement of regulatory probes and calls for restrictions further confirm the seriousness of the issue. Since harm has occurred or is imminent due to the AI system's operation, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

US Senators Urge Agency to Probe Tesla Full Self-Driving Response to Railroad Crossings

2025-09-29
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system involved in autonomous vehicle navigation. The reported failures to detect and respond safely to railroad crossings have already led to collisions and near-collisions, which constitute direct harm to persons. The involvement of the AI system's malfunction in these incidents meets the criteria for an AI Incident. The senators' urging of a regulatory probe further confirms the recognition of actual harm and risk caused by the AI system's failure.

Senators Call for Investigation into Tesla's Self-Driving System | Law-Order

2025-09-29
Devdiscourse
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous driving. The reported failures at railroad crossings, including multiple near-misses and four accidents, demonstrate that the AI system's malfunction or inadequate performance has directly or indirectly caused safety risks and harm. The involvement of federal regulators and senators highlights the seriousness of the issue. Since harm has occurred or is ongoing due to the AI system's malfunction, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Senators Urge Investigation into Tesla's Self-Driving System Flaws | Business

2025-09-30
Devdiscourse
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system designed to autonomously navigate vehicles. The senators' call for investigation is based on documented failures of this system to detect railroad crossings, which has already resulted in multiple crash reports. The NHTSA's review and the senators' concerns highlight that the AI system's malfunction has directly or indirectly led to harm or significant risk of harm to human life. Therefore, this event qualifies as an AI Incident due to the AI system's malfunction causing or potentially causing injury or harm to people.

Two US senators urge probe of Tesla's full self-driving response to rail crossings

2025-09-30
Free Malaysia Today
Why's our monitor labelling this an incident or hazard?
Tesla's full self-driving system is an AI system designed to autonomously operate vehicles. The reported failures to detect and respond safely to railroad crossings have already resulted in collisions, including a fatal crash, and multiple near-collisions. These incidents represent direct harm to human health and safety, fulfilling the criteria for an AI Incident. The involvement of regulatory authorities and calls for investigation further confirm the seriousness and realized harm associated with the AI system's malfunction or misuse. Hence, this is not merely a potential hazard or complementary information but a clear AI Incident.

Senators Urge Probe Into Tesla Full Self-Driving Risks at Railroad Crossings - EconoTimes

2025-09-30
EconoTimes
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system performing autonomous driving tasks. The senators' concerns and NHTSA's investigations relate to actual or near-actual harms caused by the system's failure to detect trains at railroad crossings, which poses significant risks to human life. The event describes direct or indirect harm or near-harm caused by the AI system's malfunction or limitations, fulfilling the criteria for an AI Incident. The focus is on realized or imminent harm rather than potential future risk alone, so it is not merely an AI Hazard or Complementary Information.

Tesla's FSD tech targeted for risk of 'catastrophic' collisions by US senators - Cryptopolitan

2025-09-29
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system that autonomously controls vehicle navigation, steering, and other driving functions. The reported failures to detect railroad crossings and the occurrence of near-collisions directly involve the AI system's malfunction or limitations, which could lead to injury or harm to people (harm category a). The senators' letter and the NHTSA investigations indicate that these issues are not hypothetical but have materialized or are ongoing, with credible risk of catastrophic harm. Therefore, this event qualifies as an AI Incident because the AI system's malfunction has directly or indirectly led to significant safety risks and near-harm events involving people.

Two US senators urge probe of Tesla's full self-driving response to rail crossings

2025-09-30
ETAuto.com
Why's our monitor labelling this an incident or hazard?
The Tesla Full Self-Driving system is an AI system involved in autonomous vehicle operation. The reported near-collisions at railroad crossings are directly linked to failures of this AI system to detect and respond appropriately, posing risks of catastrophic collisions. The senators' letter and the NHTSA's ongoing investigations confirm that these are not hypothetical risks but actual incidents or near-incidents. The potential for multi-fatality collisions constitutes harm to persons, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but concerns realized or imminent harm due to AI system malfunction or misuse.

Tesla FSD In Trouble Again: US Senators Urge Regulatory Probe Over Report Of Railroad Crossing Failure

2025-09-30
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous driving assistance. The reported failures at railroad crossings and other safety lapses demonstrate malfunction or misuse of the AI system, which has already been linked to crashes and fatalities. The senators' warnings and regulatory scrutiny highlight the direct or indirect harm caused or likely to be caused by the AI system's malfunction. This fits the definition of an AI Incident, as the AI system's malfunction has directly or indirectly led to harm or significant risk thereof.

Teslas' Habit Of Driving Onto Train Tracks Has Senators Demanding An Investigation, Like That's Bad Or Something - Jalopnik

2025-09-30
Jalopnik
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system designed to assist driving. Its failure to detect and respond properly to railroad crossings has directly led to dangerous situations where Teslas drive onto train tracks, risking collisions with trains. Such collisions typically result in severe injury or death, constituting harm to people. The senators' call for investigation highlights the recognized risk and ongoing nature of the problem. Therefore, this event qualifies as an AI Incident due to the AI system's malfunction causing or posing direct harm.

Tesla Influencers Attempt Coast-To-Coast FSD Trip. They Don't Make It Very Far. - Jalopnik

2025-09-30
Jalopnik
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system designed to perform automated driving tasks. The incident involved the AI system failing to avoid a large piece of metal on the highway, which caused the vehicle to become airborne and subsequently struggle to continue operating. This malfunction directly created a safety hazard and potential harm, fulfilling the criteria for an AI Incident under harm to persons and property. The presence of the AI system, its malfunction, and the resulting harm or risk thereof are clearly described.

Senators Call for FSD Investigation as Tesla Tells Drowsy Drivers to Use It Behind the Wheel

2025-09-30
MotorTrend
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous driving assistance. The article details multiple crashes and safety failures directly linked to FSD's inability to detect hazards like railroad crossings and trains, which have led to accidents. The system's malfunction and misleading guidance to drowsy drivers increase the risk of injury. The involvement of government officials calling for investigation further confirms the seriousness of the harm. These factors meet the criteria for an AI Incident, as the AI system's malfunction has directly led to harm or risk of harm to people.

Senators target Tesla FSD over rail crossing risks

2025-09-30
TheRegister.com
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed to assist with driving tasks autonomously. The reported failures to recognize railroad crossing signals and gates, leading to dangerous situations and requiring immediate human intervention, demonstrate a malfunction or limitation in the AI system's operation. The senators' concerns and documented incidents indicate that the AI system's use has directly or indirectly caused harm or posed serious risks to human safety, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but involves realized or ongoing harm risks linked to the AI system's deployment.

Two US senators urge probe of Tesla's Full Self-Driving response to rail crossings

2025-09-30
Gulf Daily News Online
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system that autonomously controls vehicle operations. The article reports multiple incidents, including near-collisions and a fatal crash, linked to the system's failure to detect and respond properly to railroad crossings. The senators' call for investigation and regulatory action highlights the serious safety risks and the actual harm caused, or potentially caused, by the AI system's malfunction or misuse. Since harm has occurred and the AI system's involvement is central, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Officials demand investigation into Tesla's self-driving technology following railroad crossing incidents: 'Disturbing safety risk'

2025-10-01
The Cool Down
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system designed to autonomously navigate driving environments. The article reports multiple incidents in which this AI system failed to safely handle railroad crossings, forcing drivers to take emergency action to avoid collisions. This indicates a malfunction or failure in the AI system's operation that creates safety risks and potential harm to persons, and the senators' formal request for an investigation underscores the seriousness of these incidents. The event therefore involves an AI system whose use has directly or indirectly led to safety risks and near-miss incidents, meeting the criteria for an AI Incident.

Experts raise red flags over 'alpha-level' Tesla driving feature: 'It should never be in the customer's hands'

2025-10-02
The Cool Down
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) whose use has directly led to safety hazards and potential harm to drivers and pedestrians, fulfilling the criteria for an AI Incident. The system's malfunction and inadequate performance during real-world driving tests have caused or could cause injury or harm to people. The article describes realized safety issues, not just potential risks, and regulatory investigations confirm the seriousness of the harm. Therefore, this is an AI Incident rather than a hazard or complementary information.

Tesla Tells Sleepy Drivers to Switch to Its Self-Driving Mode That Needs to Be Monitored Constantly So It Doesn't Cause a Fatal Accident

2025-10-01
Futurism
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous driving. The article details how Tesla's promotion of the system to drowsy drivers encourages misuse and overreliance, which has already resulted in fatal accidents and injuries. The system's malfunctions and misleading marketing have directly contributed to harm, including a $329 million damages award over a fatal crash and federal investigations into crashes involving FSD. These facts meet the criteria for an AI Incident because the AI system's use and malfunction have directly led to harm to persons and to violations of legal obligations.

Here's the Secret Weapon That Will Boost Tesla's EV Business | The Motley Fool

2025-10-02
The Motley Fool
Why's our monitor labelling this an incident or hazard?
The article centers on the prospective approval and use of Tesla's unsupervised FSD AI system, highlighting its potential to boost Tesla's EV sales and robotaxi business. There is no indication of realized harm or incidents resulting from the AI system's development, use, or malfunction. The discussion is about plausible future impacts and investment considerations rather than an actual event causing harm or a direct risk event. Therefore, this qualifies as Complementary Information, providing context and insight into the AI ecosystem and its potential future implications without describing an AI Incident or AI Hazard.

Here's the Secret Weapon That Will Boost Tesla's EV Business

2025-10-02
NASDAQ Stock Market
Why's our monitor labelling this an incident or hazard?
The article centers on the anticipated future availability of Tesla's unsupervised FSD AI system and its potential to boost Tesla's EV business and robotaxi services. There is no mention of any actual harm, malfunction, or misuse of the AI system causing injury, rights violations, or other harms. The discussion is speculative about future developments and market impacts, which aligns with a plausible future risk or opportunity rather than an incident or current hazard. Therefore, this qualifies as Complementary Information, providing context and insight into AI system development and its potential implications without reporting an AI Incident or AI Hazard.

Tesla full self-drive: Early Kiwi adopters wowed, but also suffer some bloopers

2025-10-02
NZ Herald
Why's our monitor labelling this an incident or hazard?
The Tesla FSD system is an AI system used for autonomous driving. The article describes its use and some minor errors, but no actual harm or injury occurred in the reported cases. The mentions of regulatory investigations and past fatalities are background information, not a new incident. Because the article focuses on user experiences, system behavior, and regulatory context, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard: no direct or plausibly imminent harm is described.

What would you pay for a self-driving car? Tesla wants $149 a month in Australia - Switzer Daily

2025-10-02
Switzer Daily
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, Tesla's Full Self-Driving software, which performs complex driving tasks and decision-making. Its discussion of accidents and fatalities linked to the use of this AI system, along with regulatory investigations and recalls, indicates that the system's use has directly or indirectly led to harm (injury or death), so this qualifies as an AI Incident. The range anxiety and charging infrastructure issues, while relevant to EV adoption, do not involve AI systems or AI-related harm and are not central to the classification. The article does not merely discuss potential risks or responses; it reports realized harms and ongoing investigations related to the AI system's use.

Tesla owners file lawsuit alleging Elon Musk's company made misleading claims: 'Worth basically zero'

2025-10-03
The Cool Down
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, Tesla's Full Self-Driving technology, designed to drive vehicles autonomously. The lawsuits allege that Tesla misrepresented the capabilities of this AI system, leading customers to pay for a feature that was never delivered as promised. This constitutes direct harm to consumers (financial loss and breach of trust), with the AI system's development and use as the root cause. The event is not merely a product announcement or general news; it involves legal action over harm caused by the AI system's failure to meet its advertised capabilities. It therefore meets the criteria for an AI Incident involving violations of consumer rights and misleading claims about AI system performance.