Tesla's Autonomous Driving System Faces Safety Concerns Amid China Expansion

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Tesla plans to introduce its Full Self-Driving (FSD) system in China, aiming for regulatory approval and potential subscription sales. However, the FSD system has faced safety issues, including accidents and pedestrian detection failures, leading to NHTSA investigations. Despite these concerns, the latest FSD version has received positive feedback for improved performance.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article focuses on the upcoming introduction and commercialization of Tesla's FSD AI system in China. While the system is an AI system with potential safety and societal implications, the article does not report any realized harm or incidents caused by the AI system. Therefore, it does not qualify as an AI Incident. However, since the deployment of such an AI system could plausibly lead to future harms (e.g., accidents, safety issues), it fits the definition of an AI Hazard. The article does not discuss any specific harm or incident, only the planned launch and subscription pricing, so it is best classified as an AI Hazard due to the plausible future risk associated with deploying autonomous driving AI in a new market.[AI generated]
AI principles
Safety, Robustness & digital security, Accountability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers, General public, Business

Harm types
Physical (injury)

Severity
AI hazard

AI system task
Recognition/object detection, Forecasting/prediction, Goal-driven organisation, Reasoning with knowledge structures/planning


Articles about this incident or hazard

Provincial Highway 84, Xuejia section: "Chain collision and vehicle fire!" Construction crew caught up as several large vehicles catch fire; one injured, one in critical condition

2024-05-30
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system that performs autonomous driving tasks using deep learning and sensor data. The article reports multiple accidents linked to the use and malfunction of this AI system, including failure to detect pedestrians and resulting collisions causing injury and critical conditions. This constitutes direct harm to persons caused by the AI system's malfunction or use, fitting the definition of an AI Incident. The mention of regulatory investigations and recalls further supports the classification as an incident rather than a hazard or complementary information.
Tesla's Full Self-Driving feature is about to arrive in China! At $98 per month, it's cheap

2024-06-02
中关村在线
Why's our monitor labelling this an incident or hazard?
The article focuses on the upcoming introduction and commercialization of Tesla's FSD AI system in China. While the system is an AI system with potential safety and societal implications, the article does not report any realized harm or incidents caused by the AI system. Therefore, it does not qualify as an AI Incident. However, since the deployment of such an AI system could plausibly lead to future harms (e.g., accidents, safety issues), it fits the definition of an AI Hazard. The article does not discuss any specific harm or incident, only the planned launch and subscription pricing, so it is best classified as an AI Hazard due to the plausible future risk associated with deploying autonomous driving AI in a new market.
Report: Tesla pushing to bring FSD to China, plans rollout this year with a subscription model

2024-05-31
中关村在线
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) and its planned use in China, which is a significant development in AI deployment. However, there is no indication of any harm, malfunction, or misuse occurring at this time. The article focuses on the preparation and regulatory compliance for the AI system's launch, which is informative but does not describe an incident or hazard. Therefore, this is best classified as Complementary Information, as it provides context and updates about AI deployment without reporting harm or plausible imminent harm.
Reuters: Tesla preparing to register FSD with Chinese authorities, aiming for a local launch before year-end | 聯合新聞網

2024-05-30
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (Tesla's FSD) that performs autonomous driving functions, which fits the definition of an AI system. However, the article describes preparations and plans for registration and future deployment, with no mention of any realized harm or incidents caused by the system. The potential risks of autonomous driving systems are well-known, but since no harm or malfunction has occurred yet, this event represents a plausible future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard due to the plausible future harm from deploying an advanced autonomous driving AI system on public roads in China.
Full Self-Driving launch imminent! Tesla reportedly filing its FSD feature with the relevant authorities

2024-05-31
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD software) that is about to be deployed and tested on public roads. Although no harm has been reported yet, the nature of the system—full autonomous driving with generative AI—carries plausible risks of harm such as accidents or safety incidents. Therefore, this situation constitutes an AI Hazard, as the development and imminent use of this AI system could plausibly lead to an AI Incident in the future. There is no indication of actual harm or incident at this stage, so it is not an AI Incident. It is more than just complementary information because it concerns the imminent deployment and regulatory filing of a high-risk AI system.
Tesla FSD coming to China soon? Signs of internal testing emerge

2024-05-29
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article describes the internal testing and regulatory progress of Tesla's FSD system in China but does not report any incident or harm caused by the AI system. There is no mention of injury, rights violations, property damage, or other harms, nor any credible risk of such harm materializing imminently. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it is best classified as Complementary Information, as it provides contextual updates on AI system deployment and regulatory environment.
Tesla (TSLA.US) reportedly plans to launch its advanced full self-driving system in mainland China

2024-05-30
AAStocks.com
Why's our monitor labelling this an incident or hazard?
The event describes the planned deployment of an AI system (Tesla's full self-driving software) in China. However, the article does not mention any realized harm or incidents caused by the AI system, nor does it describe any direct or indirect harm resulting from its use or malfunction. The report focuses on the registration and upcoming launch, which is a development and deployment update without indication of harm or risk materializing yet. Therefore, this is best classified as an AI Hazard because the advanced autonomous driving system could plausibly lead to harm in the future, such as accidents or safety issues, but no such incident has occurred or is described in the article.
Tesla reportedly preparing to register its full self-driving software with mainland authorities

2024-05-31
AAStocks.com
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of an AI system (Tesla's FSD software) but does not describe any realized harm or incident resulting from its use or malfunction. The article focuses on the registration and upcoming deployment plans, which could plausibly lead to future harm if issues arise, but no such harm is reported or implied as having occurred yet. Therefore, this is best classified as Complementary Information, providing context on AI system deployment and market strategy without reporting an AI Incident or Hazard.
Sources: Tesla filing FSD with the relevant authorities - 汽车之家

2024-05-31
汽车之家(Autohome.com.cn)
Why's our monitor labelling this an incident or hazard?
The article describes the development and planned deployment of Tesla's AI-powered FSD system, which qualifies as an AI system. There is no indication of any harm, malfunction, or misuse occurring or having occurred. The filing and preparation for deployment represent ongoing development and regulatory steps, with no direct or indirect harm reported. Therefore, this is not an AI Incident or AI Hazard. The article provides contextual information about the AI system's status and technological aspects, which fits the definition of Complementary Information.
Reuters: Tesla plans to launch advanced FSD system in China | Anue鉅亨 - US Stocks Radar

2024-05-30
Anue鉅亨
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Tesla's advanced AI-driven FSD system, which is an AI system by definition. The system is not yet deployed to users but is in preparation for registration and testing on public roads in China. No actual harm or incidents are reported, so it does not qualify as an AI Incident. However, the deployment of such an autonomous driving system inherently carries plausible risks of harm (e.g., accidents, injuries) due to AI malfunction or misuse. Thus, it fits the definition of an AI Hazard, as the event could plausibly lead to an AI Incident in the future. The article does not focus on responses, updates, or broader governance, so it is not Complementary Information. It is clearly related to AI systems, so it is not Unrelated.
FSD coming to China soon? Tesla customer service: we are indeed preparing for this, but there is no official word on launch timing - Tech Channel - 和讯网

2024-05-31
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) that is in preparation for deployment and testing in China. While the system's use could plausibly lead to AI-related incidents (e.g., accidents or safety issues from autonomous driving), the article does not report any actual harm or incidents occurring yet. The information is about planned development and potential future use, with no realized harm or malfunction described. Therefore, this qualifies as an AI Hazard, as the deployment and testing of an autonomous driving AI system could plausibly lead to incidents in the future, but no incident has yet occurred.
Tesla plans to register "Full Self-Driving (FSD)" software in China - Tech Channel - 和讯网

2024-05-31
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of an AI system (Tesla's FSD) for autonomous driving. While no harm or incident is reported yet, the registration and testing on public roads imply a plausible risk of future harm (e.g., accidents or safety issues) associated with the AI system's operation. Therefore, this situation qualifies as an AI Hazard because it could plausibly lead to an AI Incident if the system malfunctions or causes harm during use. There is no indication of realized harm or incident at this stage, nor is the article primarily about responses or updates to past incidents, so it is not an AI Incident or Complementary Information.
Tesla reportedly plans to register its full self-driving system in China - MoneyDJ理財網

2024-05-31
MoneyDJ理財網
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned deployment of an AI system (Tesla's Full-Self Driving system) that performs autonomous driving tasks, which clearly qualifies as an AI system. However, the article only discusses the registration and planned rollout of the system, with no mention of any harm or incidents caused by the system so far. Since no harm has occurred yet, but the system's deployment could plausibly lead to future AI-related incidents (e.g., accidents or safety issues), this qualifies as an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because it concerns an AI system with potential safety implications.
News flash: According to foreign media reports, people familiar with the matter say Tesla has successfully obtained software registration from China's Ministry of Industry and Information Technology, paving the way for internal testing of Full Self-Driving (FSD). Tesla employees will test the system on public roads in China, and the company plans to push the upgrade to Chinese users in the coming months. In response, Tesla customer service said internal employees are not currently conducting such tests, that the situation differs from city to city, and that even if the feature does land or open up in the future, it will only do so in cities that permit testing of full self-driving capability. "We are indeed preparing for this, but exactly when it will land will take a long time, and there is no official news at present. Users can follow the official WeChat account and official Weibo; if there is any new progress or a release date, we will notify all owners immediately." (新浪科技)

2024-05-31
华尔街见闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving system) and its planned use (internal testing on public roads). However, since no testing or deployment has yet taken place, and no harm or incident has been reported, this situation represents a plausible future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard because the development and potential use of the AI system could plausibly lead to harm in the future, but no harm has yet occurred.
News flash: Tesla appears to be pushing ahead with bringing its FSD system to China. Reportedly, after a recent software update, related wording appeared in the in-car systems of some Tesla China employees. One Tesla China employee recently confirmed that, following the update, the words "Employee FSD Beta Program: Registered" appeared on the vehicle's screen; the employee shared this via a screenshot with Tesla fan Chris Zheng. It should be noted that although the screen shows "registered," the employee's vehicle cannot yet enable any FSD features. (IT之家)

2024-05-29
华尔街见闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD, an autonomous driving AI system) and its development and deployment progress. However, there is no indication of any harm occurring or plausible harm that could arise imminently from this update. The system is not yet enabled, and no incident or hazard is described. Therefore, this is a general update about AI system deployment progress without any realized or potential harm, fitting the category of Complementary Information.
Report: Tesla files FSD with relevant authorities, plans rollout within the year and a subscription model

2024-05-31
China Finance Online
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system for autonomous driving. The article discusses its planned regulatory filing and launch in China, which is a development and use phase. No direct or indirect harm has been reported yet. The potential for future harm exists given the nature of autonomous driving AI, but this is a plausible risk rather than a realized incident. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident in the future but no harm has yet materialized.
智通财经APP has learned that, according to reports, Tesla (TSLA.US) is about to register its Full Self-Driving (FSD) software in China. People familiar with the matter said that if Tesla successfully registers the FSD software with China's Ministry of Industry and Information Technology, Tesla employees will be able to conduct internal FSD testing on public roads in China. The report said...

2024-05-30
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Tesla's Full Self-Driving software) and discusses its imminent registration and testing in China. There is no mention of any harm or malfunction caused by the AI system so far, only plans and initial internal testing. Autonomous driving AI systems inherently carry risks of injury or harm to people and disruption to infrastructure if they malfunction or are improperly used. Since the article describes preparations for deployment and testing that could plausibly lead to harm in the future, it fits the definition of an AI Hazard. It is not Complementary Information because the main focus is not on responses or updates to past incidents, nor is it unrelated as it clearly involves an AI system with potential safety implications.
Full Self-Driving launch imminent: Tesla reportedly filing its FSD feature with the relevant authorities - cnBeta.COM Mobile

2024-05-31
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
Tesla's FSD is an AI system designed for autonomous vehicle operation, involving generative AI for path prediction and driving decisions. The article discusses ongoing testing and regulatory approval, indicating active use and development. Although no specific harm or incident is reported, the deployment of such AI systems on public roads carries plausible risks of harm (e.g., accidents, injury) if the system malfunctions or misinterprets situations. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to harm, but no actual harm or incident is described yet.
Tesla FSD software reportedly set to be registered and internally tested in China - Tesla Electric Vehicles - cnBeta.COM

2024-05-30
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Tesla's FSD software, which is an AI system for autonomous driving. The event concerns the registration and internal testing of this AI system in China, with plans for customer deployment. No actual harm or incident is reported; the software is not yet activated for public use, and no accidents or rights violations are described. However, autonomous driving AI systems inherently carry plausible risks of causing harm in the future, such as traffic accidents or safety incidents. Thus, the event fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident once the system is in active use on public roads.
Tesla plans to launch its latest autonomous driving system in China

2024-05-30
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system—Tesla's Full Self-Driving software—which is an advanced autonomous driving AI system. The event concerns the planned deployment and testing of this system on public roads in China, which could plausibly lead to incidents involving injury, property damage, or other harms if the system malfunctions or behaves unexpectedly. No actual harm or incident is reported yet, so it does not qualify as an AI Incident. The article is not primarily about responses, governance, or updates to past incidents, so it is not Complementary Information. It is not unrelated as it directly concerns an AI system with potential safety implications. Hence, the classification is AI Hazard.
Is China Getting Full Self-Driving Cars? Tesla Seeking Final Go Ahead From Government By Benzinga

2024-05-30
Investing.com UK
Why's our monitor labelling this an incident or hazard?
The article discusses Tesla's efforts to launch its FSD AI system in China under regulatory oversight. While the FSD system is an AI system with potential safety implications, the article does not describe any actual harm, malfunction, or misuse occurring at this stage. The focus is on the approval process and planned deployment, which could plausibly lead to future harm if issues arise, but no such harm is reported. Therefore, this is best classified as Complementary Information providing context on AI deployment and regulatory engagement rather than an AI Incident or Hazard.
Tesla makes push to roll out its advanced Full Self-Driving technology in China

2024-05-31
https://auto.hindustantimes.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (Tesla's Full Self-Driving software) that is designed to operate vehicles autonomously, which fits the definition of an AI system. However, there is no indication that the AI system has caused any injury, disruption, rights violations, or other harms yet. The article describes plans and preparations for rollout and testing, which could plausibly lead to harm in the future given the risks associated with autonomous driving, but no harm has materialized or been reported. Therefore, this event is best classified as an AI Hazard, reflecting the plausible future risk of harm from the deployment of this advanced AI system on public roads in China.
Exclusive-Tesla Makes Push to Roll Out Advanced FSD Self-Driving in China

2024-05-30
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of an AI system (Tesla's FSD) that performs autonomous driving tasks. However, there is no indication that any harm has occurred or that there is a plausible risk of harm at this stage. The article describes a planned rollout and testing phase, which is a development and deployment activity without reported incidents or hazards. Therefore, this is best classified as Complementary Information, providing context on AI system deployment and regulatory progress rather than describing an incident or hazard.
Elon Musk's Tesla Works To Release 'Full Self-Driving' Tech Mode in China

2024-05-31
Science Times
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving mode is an AI system designed to enable semi-autonomous vehicle operation. The article details multiple past incidents where the FSD system's failures have caused car crashes and fatalities, which constitute injury or harm to people. The company's push to release this technology in China despite these safety concerns indicates ongoing use of an AI system with a poor safety record. The harms described are direct and materialized, not hypothetical or potential. Hence, this event qualifies as an AI Incident due to the AI system's malfunction and use leading to injury and fatalities.
Tesla Said to Seek FSD Registration for Rollout in China This Year - TMTPost

2024-06-01
tmtpost.com
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving system is an AI system designed for autonomous driving. The article discusses Tesla's ongoing efforts to register and test this system in China, including regulatory approvals and partnerships necessary for its operation. However, there is no mention of any accidents, malfunctions, or harms caused by the FSD system so far. The article mainly covers preparatory and regulatory steps, which implies a potential for future harm once the system is widely deployed. According to the framework, this constitutes an AI Hazard because the AI system's use could plausibly lead to harm (e.g., traffic accidents or safety issues) but no incident has yet occurred.
Tesla (TSLA) to Register FSD Software With Authorities in China

2024-05-31
NASDAQ Stock Market
Why's our monitor labelling this an incident or hazard?
The article describes Tesla's plan to register and deploy its AI-powered FSD system in China, which involves the development and use of an AI system. However, there is no mention of any injury, rights violations, property damage, or other harms caused by the AI system at this stage. The event is about preparation and regulatory compliance, with no indication of realized or imminent harm. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and updates on AI system deployment and regulatory engagement, which are relevant to understanding the AI ecosystem and its evolution.
Tesla is preparing to get Full Self-Driving package approved in China

2024-05-30
Electrek
Why's our monitor labelling this an incident or hazard?
The article describes Tesla's ongoing efforts to register and test its AI-based FSD system in China. While the system involves AI and autonomous driving capabilities, there is no indication that any harm, malfunction, or violation has occurred. The event concerns the potential future deployment of an AI system that could plausibly lead to harm if issues arise, but at this stage, it is a regulatory and preparatory process without realized harm. Therefore, it fits the definition of Complementary Information, providing context on AI system deployment and regulatory progress rather than reporting an incident or hazard.
Tesla Makes Moves to Roll Out Its Controversial "Full Self-Driving" Tech in China

2024-05-30
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Tesla's Full Self-Driving software, which is an AI system for autonomous driving. It mentions the system's terrible track record, including multiple crashes and deaths in the US, which are direct harms linked to the AI system's use. The rollout in China is planned but not yet realized, so no new harm has occurred there yet. However, given the known safety issues and history, there is a credible risk that deploying FSD in China could lead to similar harms. Thus, this event is best classified as an AI Hazard because it involves the plausible future risk of injury or death due to the AI system's deployment, rather than a new AI Incident occurring in China at this time.
Huawei Chairman Confident in Beating Tesla's FSD with Intelligent Driving in China - EconoTimes

2024-06-02
EconoTimes
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving software, which fits the definition of AI systems. However, there is no indication that any harm has occurred due to the development, use, or malfunction of these systems. The discussion centers on competition and future market entry, which could plausibly lead to future AI-related incidents but does not describe any current incident or hazard. Therefore, this is best classified as Complementary Information, providing context on AI ecosystem developments and competitive dynamics without reporting an incident or hazard.
Tesla looks to test FSD software in China with govt approval: report · TechNode

2024-05-31
TechNode
Why's our monitor labelling this an incident or hazard?
The article discusses Tesla's intention to test and roll out its AI-based FSD software in China pending government approval. While the FSD system is an AI system, the article does not describe any realized harm, malfunction, or misuse related to the AI system. It is primarily about regulatory approval and future deployment plans, which do not constitute an AI Incident or AI Hazard. Therefore, this is best classified as Complementary Information, providing context on AI system deployment and regulatory interaction without reporting harm or plausible harm.
Tesla reportedly to register FSD software in China

2024-05-31
DIGITIMES
Why's our monitor labelling this an incident or hazard?
Tesla's FSD software is an AI system designed for autonomous driving. The article discusses the registration and planned launch of this AI system in China, which could plausibly lead to future harms related to autonomous vehicle operation (e.g., accidents, safety issues). However, no actual harm or incident is reported at this stage. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but no harm has yet occurred or been reported.
Tesla to Offer Full Self-Driving Software as Subscription in China

2024-05-30
quiverquant.com
Why's our monitor labelling this an incident or hazard?
The article describes the development and planned deployment of Tesla's AI-powered FSD system, which is an AI system by definition. However, there is no indication that any harm has occurred or that the system has malfunctioned or caused injury, rights violations, or other harms. The event is about the introduction and regulatory registration of the AI system and its potential market impact, but no realized or imminent harm is described. Therefore, this is not an AI Incident or AI Hazard. It is a general AI-related development and market update, which fits best as Complementary Information, providing context on AI ecosystem evolution and deployment plans.