Tesla to Upgrade FSD Hardware and Launch Unsupervised Driving Service in 2025

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Elon Musk said Tesla will need to replace older HW3 computers with newer HW4 chips to fulfill its Full Self-Driving promises, and acknowledged past recalls tied to camera failures. He also announced that an FSD service operating without human supervision will debut in Austin in June 2025 and expand across North America by 2027, raising safety and oversight concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI and vision systems to operate fully autonomous taxis without human supervision. While no harm has yet occurred, the deployment of unsupervised autonomous vehicles on public roads carries a plausible risk of accidents or other harms to people or property. Since the event concerns the planned launch and not an incident with realized harm, it fits the definition of an AI Hazard rather than an AI Incident. The AI system's involvement is clear, and the potential for harm is credible and foreseeable, meeting the criteria for an AI Hazard.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability; Democracy & human autonomy

Industries
Mobility and autonomous vehicles; Robots, sensors, and IT hardware; IT infrastructure and hosting

Affected stakeholders
Consumers; General public

Harm types
Physical (injury); Physical (death); Economic/Property; Reputational; Public interest

Severity
AI hazard

Business function:
Research and development; Monitoring and quality control; Maintenance; Compliance and justice

AI system task:
Recognition/object detection; Reasoning with knowledge structures/planning; Goal-driven organisation


Articles about this incident or hazard

Musk: Tesla to launch unsupervised driverless taxi service in Texas, US, in June

2025-01-30
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and vision systems to operate fully autonomous taxis without human supervision. While no harm has yet occurred, the deployment of unsupervised autonomous vehicles on public roads carries a plausible risk of accidents or other harms to people or property. Since the event concerns the planned launch and not an incident with realized harm, it fits the definition of an AI Hazard rather than an AI Incident. The AI system's involvement is clear, and the potential for harm is credible and foreseeable, meeting the criteria for an AI Hazard.
Tesla CyberCab with a steering wheel revealed, sparking online debate: Is the Model Q coming?

2025-02-01
中关村在线
Why's our monitor labelling this an incident or hazard?
The Tesla CyberCab is an AI system designed for autonomous driving. The article discusses a test vehicle with a steering wheel, likely for safety during testing, which is a normal part of AI system development and deployment. There is no mention of any injury, rights violation, property damage, or other harm caused by the AI system. The speculation about a new model is not linked to any harm or risk. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information since it provides context and updates about the development and testing of an AI system without reporting harm or risk of harm.
No supervision needed! Musk announces all-new paid FSD service version in the US

2025-01-30
驱动之家
Why's our monitor labelling this an incident or hazard?
The announcement involves an AI system (Tesla's FSD) that will operate without human supervision, which is a significant development in AI use. While no incident or harm has been reported, the deployment of unsupervised autonomous driving systems plausibly could lead to injury or harm to people or property. Therefore, this event fits the definition of an AI Hazard, as it describes a credible future risk stemming from the AI system's use. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated.
Musk reveals an obstacle to bringing Tesla FSD to China: intricate bus lanes

2025-01-30
驱动之家
Why's our monitor labelling this an incident or hazard?
The article centers on the development and deployment challenges of an AI system (Tesla's FSD) in a specific market (China). It does not describe any realized harm or incident caused by the AI system, nor does it report a near miss or plausible future harm event beyond general regulatory and operational challenges. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It provides contextual information about AI system deployment challenges and regulatory environment, which fits the definition of Complementary Information.
"Super Unboxing" annual review: major events in intelligent driving in 2024

2025-02-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article mentions AI systems implicitly through intelligent driving and autonomous parking features, which involve AI technologies. However, it does not describe any event where the AI systems caused or could plausibly cause harm, nor does it report any incident or hazard related to AI malfunction or misuse. The content is primarily an informative review and outlook on the industry progress and competition, which fits the definition of Complementary Information as it enhances understanding of the AI ecosystem without reporting new incidents or hazards.
Musk reveals an obstacle to bringing Tesla FSD to China: intricate bus lanes

2025-01-30
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Tesla's FSD, an AI system for autonomous driving, and the challenges in training it for Chinese traffic conditions due to data and regulatory constraints. Although no direct harm or incident is reported, the complexity of bus lane rules and the limitations on training data could plausibly lead to AI misbehavior causing traffic violations or safety issues. Since the harm is potential and not yet realized, this qualifies as an AI Hazard rather than an AI Incident.
Musk on "bus lanes," one of the biggest obstacles to bringing Tesla FSD to China: simulation training can only use videos found online

2025-01-30
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and training challenges of Tesla's FSD system in China, specifically the difficulty of simulating complex bus lane rules using online videos due to data transfer restrictions. There is no mention of any harm caused by the AI system, nor any direct or indirect incident resulting from its use or malfunction. The discussion is about ongoing development and regulatory hurdles, which does not meet the criteria for AI Incident or AI Hazard. It is not a routine product launch either, as it provides insight into challenges faced, but since no harm or plausible harm is described, it is best classified as Complementary Information.
New cars driving themselves out of the factory? Musk: Tesla FSD enters the unsupervised era

2025-01-30
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) actively used in an operational setting (factory vehicle movement) without human supervision, which fits the definition of an AI system and its use. However, the article does not describe any actual harm or incident caused by the AI system. Instead, it discusses the deployment as a technological milestone and the potential benefits and challenges ahead. Since no harm has occurred but the use of AI in this way could plausibly lead to harm (e.g., accidents or operational failures in the future), the event qualifies as an AI Hazard rather than an AI Incident. It is not Complementary Information because the article's main focus is on the new application of FSD and its implications, not on updates or responses to a prior incident. It is not Unrelated because the AI system is central to the event.
Tesla earnings miss expectations; plans to launch FSD version requiring no human supervision in June and to soon produce new models; shares rise over 4% after hours

2025-01-30
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
Tesla's FSD system is an AI system designed for autonomous vehicle operation. The article reports plans to launch a version that requires no human supervision, which could plausibly lead to harm such as accidents or injuries if the system malfunctions or fails to perform safely. However, the article does not describe any actual incidents or harms caused by the AI system so far. The focus is on future deployment and potential risks, not realized harm. Hence, this is best classified as an AI Hazard due to the plausible future harm from the AI system's use in fully autonomous driving.
Musk: unsupervised paid FSD service to launch in Austin, US, in June

2025-01-30
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (Tesla's Full Self-Driving technology) that will operate without human supervision, which directly relates to AI system use. While the announcement itself does not report any harm or accident caused by the system, the introduction of an unsupervised autonomous driving service carries a plausible risk of future harm (e.g., accidents or injuries) due to AI system malfunction or misuse. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to harm, even though no incident has yet occurred.
Tesla set for an epic upgrade: launching an all-new paid FSD service version

2025-01-30
smartcar.cnmo.com
Why's our monitor labelling this an incident or hazard?
The event describes the development and planned deployment of an AI system (Tesla's FSD) that operates without human supervision, which is a significant AI system. While no harm is reported yet, the encouragement to test the system in real urban environments without human oversight implies a credible risk of accidents or injuries. The regulatory approval is still pending in some markets, indicating that the system's safety is not fully established. Hence, this situation fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm in the future.
No supervision needed! Musk announces all-new paid FSD service version in the US

2025-01-30
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's Full Self-Driving system) that will operate without human supervision, which is a significant development in autonomous vehicle technology. While no harm has yet occurred, the deployment of unsupervised autonomous driving systems plausibly could lead to incidents causing injury or harm to people, property damage, or other significant harms if the system malfunctions or fails to respond appropriately. The announcement of launching such a system without regulatory oversight and encouraging consumer use in real-world urban settings indicates a credible risk of future harm. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as no direct harm is reported yet but plausible future harm exists.
Musk reveals an obstacle to bringing Tesla FSD to China: intricate bus lanes

2025-01-30
证券之星
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and deployment challenges of an AI system (Tesla's FSD) in a specific regulatory and operational environment. It does not report any realized harm or incident caused by the AI system, nor does it describe a plausible immediate harm event. Instead, it discusses potential future difficulties and regulatory constraints, which aligns with the notion of an AI Hazard. However, since no direct or indirect harm has occurred or is imminent, and the main focus is on challenges and regulatory context rather than a specific risk event, the classification is best as Complementary Information, providing context and updates on AI deployment challenges and governance.
Tesla to launch unsupervised full self-driving service in 2025!

2025-01-31
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
Tesla's full self-driving service is an AI system that will operate vehicles without human supervision, which inherently carries risks of accidents or harm if the AI malfunctions or makes incorrect decisions. Although the current safety data shows better performance than average driving, the introduction of unsupervised full autonomy is a significant step that could plausibly lead to incidents involving injury or harm. Since no actual harm is reported yet, but the potential for harm is credible and foreseeable, this event is best classified as an AI Hazard.
Musk: Tesla to launch unsupervised driverless taxi service in Texas, US, in June

2025-01-30
新浪财经
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi service will use AI systems for fully autonomous driving without human supervision, which is explicitly stated. The event concerns the planned use of this AI system, not a realized harm. However, unsupervised autonomous vehicles have inherent risks that could plausibly lead to injury, accidents, or other harms. Since no incident has yet occurred but the potential for harm is credible and foreseeable, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a product launch without risk, as the autonomous operation without supervision is a significant factor for plausible future harm.
Global auto brand value: Toyota is still number one!

2025-02-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Tesla's Full Self-Driving, FSD) that is currently in limited use and planned for broader deployment without human supervision. Although no harm or accident is reported, the nature of the system—fully autonomous driving without human intervention—carries credible risks of causing injury or accidents in the future. The article's mention of safety statistics does not negate the potential for future harm once the system is widely deployed unsupervised. Hence, the event is best classified as an AI Hazard, reflecting the plausible future risk of harm from the AI system's use.
Global auto brand value: Toyota is still number one!

2025-02-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Tesla's FSD autonomous driving system) in its development and use. However, it does not report any realized harm or incident caused by the AI system, nor does it indicate a plausible risk of harm occurring imminently. The discussion of safety statistics suggests the system is currently safer than average human driving, and the challenges mentioned are regulatory and market acceptance rather than safety hazards. The political and economic context around tax incentives and regulations is complementary information but not an AI harm. Thus, the article is best classified as Complementary Information, providing context and updates on AI system deployment and related governance and market factors without describing an AI Incident or AI Hazard.
Tesla CyberCab with a steering wheel revealed, sparking online debate: Is the Model Q coming?

2025-02-01
新浪财经
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (autonomous driving AI in the Tesla CyberCab) in development and testing. The addition of a steering wheel suggests a safety measure for human override during testing, which is a normal part of AI system development and deployment. There is no indication of any harm, malfunction, or violation caused by the AI system, nor any plausible future harm explicitly stated. The event is primarily about the AI system's development status and public reaction, without any incident or hazard occurring or being credibly anticipated. Therefore, it is best classified as Complementary Information, providing context and updates on AI system development and public discourse.
Tesla to launch unsupervised driverless taxi service in the US in June

2025-01-30
新浪财经
Why's our monitor labelling this an incident or hazard?
Tesla's CyberCab is an AI system for autonomous driving without human supervision, which inherently carries risks of causing harm if it malfunctions or fails. Since the service is planned for launch in June and no harm has yet occurred, the event represents a plausible future risk rather than an actual incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Tesla sets delivery record in the Chinese market, actively pushing for FSD to enter Europe and China this year

2025-01-30
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
Tesla's intelligent driver-assistance system and humanoid robots involve AI systems. However, the article focuses on achievements, production plans, and market expansion without reporting any direct or indirect harm caused by these AI systems. There is no indication of malfunction, misuse, or harm to people, infrastructure, rights, property, or communities. The potential for future harm is not explicitly discussed or implied as a credible risk in this context. Therefore, the event is best classified as Complementary Information, providing context and updates on AI system deployment and development without constituting an AI Incident or AI Hazard.
Tesla earnings call: this will be the most important year in its history; major FSD breakthrough; robot business may eventually surpass vehicles

2025-01-30
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Tesla's AI-based fully autonomous driving system and its imminent deployment, which qualifies as an AI system. However, it does not describe any harm or incident caused by the AI system, nor does it report any near misses or credible warnings of imminent harm. The focus is on progress, safety improvements, regulatory environment, and future expectations. This fits the definition of Complementary Information, as it provides supporting data and context about AI system development and deployment without introducing a new AI Incident or AI Hazard. The mention of safety statistics and regulatory caution further supports this classification as updates and governance context rather than harm or hazard.
Tesla profit falls 53%; FSD to be licensed to SAIC? Musk responds

2025-01-30
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes Tesla's AI system (unsupervised FSD) being developed and soon deployed for autonomous driving, which fits the definition of an AI system. There is no mention of any actual harm or incidents caused by the AI system so far, so it is not an AI Incident. However, the deployment of Level 4 autonomous driving technology without human supervision plausibly could lead to harm such as accidents or injuries, making it an AI Hazard. The discussion of licensing FSD to other automakers also implies potential future risks. The article mainly focuses on the development, deployment plans, and business context, without reporting any realized harm or legal/governance responses, so it is not Complementary Information. It is clearly related to AI systems and their potential impacts, so it is not Unrelated.
Tesla 2024 Q4 earnings call: analyst Q&A

2025-01-31
新浪财经
Why's our monitor labelling this an incident or hazard?
The transcript explicitly discusses AI systems (Tesla's FSD and Optimus robot) and their development and deployment plans. However, it does not report any actual harm or incident caused by these AI systems. The focus is on progress, safety improvements, regulatory challenges, and future expectations. This fits the definition of Complementary Information, as it provides supporting data and context about AI systems and their ecosystem without describing a new AI Incident or AI Hazard. There is no indication of realized harm or plausible imminent harm in the transcript.
Elon Musk admits Tesla will need new hardware to achieve FSD - cnBeta.COM (mobile edition)

2025-01-31
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (Tesla's autonomous driving AI) and discusses hardware upgrades and software development challenges. It mentions recalls due to technical issues, which relate to AI system malfunctions. However, there is no mention of actual harm such as accidents, injuries, or rights violations caused by the AI system. The focus is on the company's acknowledgment of hardware limitations and ongoing development efforts. This fits the definition of Complementary Information, as it updates on the AI ecosystem and responses to prior issues without reporting a new incident or hazard.
Musk cites an obstacle to bringing Tesla FSD to China: intricate bus lanes - cnBeta.COM (mobile edition)

2025-01-30
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article centers on the development and deployment challenges of an AI system (Tesla's FSD) in a specific market (China). It does not describe any realized harm or incident caused by the AI system, nor does it report a near miss or plausible future harm beyond general regulatory and operational challenges. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is not a routine product launch but rather provides contextual information about the AI system's deployment challenges, fitting the definition of Complementary Information.
Musk admits existing Teslas cannot achieve full self-driving

2025-02-01
煎蛋
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Tesla's AI system for autonomous driving (FSD) and its hardware components. It acknowledges that the current AI system (HW3) is insufficient for safe, fully unsupervised driving, which has led to multiple safety incidents and collisions. These are harms to the health and safety of persons, fulfilling the criteria for an AI Incident. The CEO's admission and the discussion of hardware/software limitations confirm the AI system's role in these harms. Although the article focuses on the admission and future hardware upgrades, the mention of past incidents and ongoing safety challenges indicates realized harm rather than just potential risk, ruling out AI Hazard or Complementary Information. Thus, the classification is AI Incident.
Musk: this will be a "pivotal year" for Tesla

2025-02-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (autonomous driving) that could plausibly lead to harm if failures or malfunctions occur, especially given the scale of deployment planned. Since no actual harm or incident is reported, and the focus is on future plans and safety considerations, this fits the definition of an AI Hazard rather than an AI Incident. The mention of the 'Optimus' robot also suggests future AI-enabled products but without current harm. Thus, the classification is AI Hazard.
Tesla's earnings report stuns! Stock price on a rollercoaster, FSD the biggest highlight, and Musk unveils another big move

2025-01-30
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Tesla's FSD, an AI system for fully autonomous driving, which is planned to be launched soon. Although no harm or incident has yet occurred, the deployment of fully autonomous driving AI systems without human supervision carries credible risks of causing injury, harm to people, or disruption. Since the article focuses on the announcement and future plans rather than any realized harm, this qualifies as an AI Hazard rather than an AI Incident. Other parts of the article about financial performance and stock price are unrelated to AI harms. Hence, the classification is AI Hazard.
Autonomous cars must breathe new life into Tesla

2025-01-30
De Tijd
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically the AI driving software enabling Tesla's fully autonomous cars. Although no actual harm or incident has occurred yet, the deployment of such AI systems in public roads could plausibly lead to harms such as injury to people or disruption of infrastructure if the systems malfunction or fail. The mention of regulatory challenges and safety priorities further supports the recognition of potential risks. Hence, this event fits the definition of an AI Hazard, as it describes a credible scenario where AI use could plausibly lead to harm, but no harm has yet materialized.
Tesla promises smaller models and robotaxis in 2025

2025-01-31
VROOM.be
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of Tesla's autonomous driving technology and FSD hardware, but it does not describe any realized harm or incidents resulting from AI system malfunction or misuse. The promises of robotaxi deployment and hardware replacements are future-oriented and do not indicate current or past harm. The hardware limitation and potential recall are technical and customer service issues, not AI incidents causing harm. Therefore, this is complementary information providing context and updates on AI system development and deployment plans, without reporting an AI Incident or AI Hazard.
Tesla admits mistake: HW3 not suitable for autonomous driving

2025-01-30
TechPulse
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—Tesla's Full Self-Driving AI hardware and software. The malfunction or inadequacy of the HW3 AI hardware directly leads to harm by failing to deliver promised autonomous driving capabilities, which affects consumers who purchased the FSD package under false pretenses. This constitutes a violation of consumer rights and expectations, a form of harm under the framework. The involvement of AI in autonomous driving and the direct consequences of hardware insufficiency meet the criteria for an AI Incident. The prior legal ruling further supports the recognition of harm caused by the AI system's development and use.
Tesla launches affordable EVs and autonomous driving service in 2025

2025-01-31
Business AM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Tesla's Full Self-Driving software, an AI system, being tested without supervision and the launch of an autonomous ride service. While the company stresses safety, the deployment of unsupervised autonomous driving AI inherently carries plausible risks of harm to passengers or the public. No actual harm or incident is described, so it does not qualify as an AI Incident. The article is not merely general AI news or product launch without risk, as it highlights the imminent unsupervised testing and service launch, which could plausibly lead to harm. Hence, the classification is AI Hazard.