Tesla Begins Internal AI-Powered Robotaxi Trials in the US

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Tesla is testing its robotaxi service in Austin and San Francisco using its AI-driven autonomous driving system, with 1,500 rides logged so far. The service, supervised by Tesla employees riding in the vehicles, is set for a cautious public launch in June if trials prove successful.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of Tesla's AI-based Full Self-Driving system in Robotaxi testing, which qualifies as an AI system. The system is currently supervised and in testing, with no reported incidents or harm. However, autonomous vehicle AI systems inherently carry risks that could plausibly lead to injury or harm in the future. Since no actual harm or incident is reported, but plausible future harm exists, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Safety · Robustness & digital security · Accountability · Transparency & explainability · Privacy & data governance

Industries
Mobility and autonomous vehicles · Consumer services

Harm types
Physical (injury) · Physical (death) · Economic/Property · Reputational

Severity
AI hazard

Business function
Research and development · Monitoring and quality control · Citizen/customer service

AI system task
Recognition/object detection · Forecasting/prediction · Goal-driven organisation · Reasoning with knowledge structures/planning


Articles about this incident or hazard

Tesla Begins Testing Its Robotaxi Business

2025-04-27
中关村在线
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD autonomous driving system) in active use for Robotaxi testing. However, the article does not describe any realized harm, injury, rights violation, or disruption caused by the AI system. The testing is controlled, supervised, and limited in scope, with safety drivers present to intervene if necessary. While there is potential for future harm if the system is deployed widely without adequate safety, the article does not report any such harm or near-miss incidents. Therefore, this event is best classified as Complementary Information, providing an update on AI system deployment and development without reporting an AI Incident or AI Hazard.

Musk Charges Back Into the Auto Industry: Tesla's Robotaxi Testing Begins

2025-04-26
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Tesla's AI-based Full Self-Driving system in Robotaxi testing, which qualifies as an AI system. The system is currently supervised and in testing, with no reported incidents or harm. However, autonomous vehicle AI systems inherently carry risks that could plausibly lead to injury or harm in the future. Since no actual harm or incident is reported, but plausible future harm exists, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Tesla Robotaxi Testing Revealed: Based on the New Model Y, Musk to Prioritize the Project

2025-04-26
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Tesla's FSD) in real-world testing for autonomous taxi services. Although the system is not fully autonomous and includes a safety driver, the deployment and testing of such AI-driven autonomous vehicles inherently carry risks of harm (e.g., potential injury or disruption) if the system malfunctions or fails. The article does not report any actual harm or incidents resulting from the testing; it only describes ongoing development and testing activities with safety measures in place. The event therefore represents a plausible future risk scenario in which the AI system could lead to harm, qualifying it as an AI Hazard rather than an AI Incident. It is not merely Complementary Information, because the focus is on the testing and potential deployment of an AI system that could plausibly lead to harm, not on updates or responses to past incidents.

Musk Charges Back Into the Auto Industry: Tesla's Robotaxi Testing Begins - cnBeta.COM Mobile Edition

2025-04-26
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) actively used in real-world autonomous navigation and passenger transport, which fits the definition of an AI system. The use is supervised, and no harm has been reported, so it is not an AI Incident. However, the deployment of such systems plausibly could lead to harm (e.g., accidents, injuries) in the future if the AI malfunctions or fails to respond appropriately. Therefore, it qualifies as an AI Hazard. The article does not focus on responses, legal actions, or updates to past incidents, so it is not Complementary Information. It is clearly related to AI, so it is not Unrelated.

Tesla Robotaxi Testing Revealed: Based on the New Model Y, Musk to Prioritize the Project

2025-04-26
中关村在线
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD and Robotaxi autonomous driving technology) in active testing and development. However, there is no indication of any realized harm or malfunction causing injury, rights violations, or other harms. The testing is controlled with safety drivers, and the article mainly describes the progress and plans for deployment. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on AI system development and deployment.

Musk Charges Back Into the Auto Industry: Tesla's Robotaxi Testing Begins

2025-04-26
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD for Robotaxi) in active testing and use, which fits the definition of an AI system. However, the article does not report any injury, rights violation, property damage, or other harms caused by the AI system, nor does it suggest a credible risk of such harm occurring imminently. The presence of a safety driver and disclaimers indicate controlled testing rather than an incident or hazard. The main focus is on the progress and operational details of the Robotaxi service, which aligns with Complementary Information as it updates on AI system deployment and development without describing harm or credible imminent risk.

Tesla's Robotaxi Business Begins Internal Testing

2025-04-27
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Tesla's FSD suite) in autonomous vehicle operation, which is being tested in a controlled environment. Although no incident or harm has occurred, the nature of the AI system and its intended use in autonomous taxis could plausibly lead to harm if deployed widely without adequate safety measures. Therefore, this event qualifies as an AI Hazard due to the plausible future risk of harm from the AI system's use in autonomous driving.

Tesla Begins Testing Its Robotaxi Business

2025-04-27
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi service involves the use of an AI system (FSD) for autonomous driving, which is explicitly mentioned. The event concerns the use and testing of this AI system. No actual harm or incidents have been reported; the testing is supervised and limited to internal employees. However, the deployment of AI-driven autonomous taxis inherently carries plausible risks of harm to passengers or others if the AI malfunctions or fails, which fits the definition of an AI Hazard. Since no harm has occurred yet, and the article focuses on testing and future plans, the classification is AI Hazard rather than AI Incident.

Tesla Begins Employee "FSD Supervised" Ride-Hailing Tests in Austin and the Bay Area

2025-04-25
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Tesla's FSD) being used in a supervised testing context. There is no indication that the AI system has caused any harm or malfunction leading to injury, rights violations, or other harms. The testing is a standard safety procedure prior to commercial deployment. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and updates on the development and governance of an AI system with potential future impacts but does not describe realized or imminent harm.