Baidu's Apollo Go Receives Dubai's First Fully Driverless Vehicle Testing Permit


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Baidu's autonomous ride-hailing platform, Apollo Go (萝卜快跑), has received Dubai's first permit for fully driverless vehicle testing from the Roads and Transport Authority (RTA). The company also launched its first overseas integrated operations base in Dubai, planning to expand its driverless fleet to over 1,000 vehicles. No incidents have been reported yet, but the deployment presents plausible future AI risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (fully autonomous driving) and its deployment in public spaces, which inherently carries potential risks of harm (e.g., accidents, injury). However, the article only reports the granting of a testing permit and plans for commercial service, with no mention of any actual incidents or harm caused by the AI system. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to harm in the future, but no direct or indirect harm has yet materialized.[AI generated]
AI principles
Safety, Robustness & digital security, Accountability, Transparency & explainability, Privacy & data governance

Industries
Mobility and autonomous vehicles, Consumer services

Affected stakeholders
Consumers, General public, Workers

Harm types
Physical (injury), Physical (death), Economic/Property, Reputational, Human or fundamental rights

Severity
AI hazard

Business function
Citizen/customer service, Logistics

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

Baidu Receives Dubai's First Fully Driverless Testing Permit, Plans to Launch Commercial Robotaxi Service Within the Year

2026-01-10
finance.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (fully autonomous driving) and its deployment in public spaces, which inherently carries potential risks of harm (e.g., accidents, injury). However, the article only reports the granting of a testing permit and plans for commercial service, with no mention of any actual incidents or harm caused by the AI system. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to harm in the future, but no direct or indirect harm has yet materialized.
Jensen Huang on When the H200 Will Ship to China: Production Is Accelerating While Final Licensing Details Are Settled - 36Kr

2026-01-07
36kr.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous vehicles and their operation. However, the article does not describe any harm or incident resulting from the AI system's development, use, or malfunction. Nor does it indicate any plausible future harm or risk. It is a report on the deployment and regulatory approval of an AI system, which is informative but does not constitute an incident or hazard. Therefore, it is classified as Complementary Information, providing context and updates on AI deployment and governance.
On January 6, Baidu's Apollo Go officially received a fully driverless testing permit from Dubai's Roads and Transport Authority (RTA), becoming the first and so far only platform approved for fully driverless testing in Dubai.

2026-01-07
stock.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving platform) whose development and use are explicitly mentioned. However, there is no indication of any harm or malfunction occurring at this time. The event describes a regulatory approval and operational expansion, which could plausibly lead to future AI incidents if issues arise during deployment, but no harm has yet occurred. Therefore, this qualifies as an AI Hazard because the autonomous driving AI system's use could plausibly lead to incidents involving injury, disruption, or other harms in the future, given the nature of autonomous vehicles and their potential risks.
Apollo Go Wins Dubai's First Fully Driverless Testing Permit, Opens First Overseas Base

2026-01-07
pcauto.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving technology) and their use and development, but no harm or malfunction is reported. The article focuses on the granting of permits, operational readiness, and strategic partnerships, which are positive developments without any realized or imminent harm. Therefore, this is not an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides important context and updates on AI system deployment and governance in a new region, aiding understanding of the AI ecosystem and its evolution.
Apollo Go Officially Obtains Dubai's Fully Driverless Testing Permit, Simultaneously Opens First Overseas Operations Base - Auto Channel - Hexun

2026-01-07
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous vehicles operating without human safety drivers on public roads, which directly relates to AI system use. Although no harm is reported, the deployment of fully autonomous vehicles on open roads without safety drivers plausibly could lead to incidents causing injury, disruption, or other harms if the AI systems malfunction or make errors. Therefore, this event represents a plausible future risk of harm due to AI system use, qualifying it as an AI Hazard rather than an Incident or Complementary Information.
Apollo Go Wins Dubai's First Fully Driverless Testing Permit, Opens First Overseas Base

2026-01-07
pcauto.com.cn
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically fully autonomous driving technology, which qualifies as an AI system under the definitions. The event concerns the use and deployment of these AI systems in Dubai with official testing permissions and operational infrastructure. However, there is no mention or indication of any injury, rights violation, disruption, or other harm caused or occurring due to the AI system. Nor does it describe any plausible risk or hazard that could lead to harm. Therefore, the event is best classified as Complementary Information, as it provides important context and updates on AI deployment and governance but does not describe an AI Incident or AI Hazard.
A New Milestone for Chinese Driverless Vehicles Going Overseas: Apollo Go Wins Dubai's First Fully Driverless Testing Permit, Opens First Overseas Base

2026-01-07
auto.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of fully autonomous vehicles being tested and operated in a new environment (Dubai). There is no mention of any harm, injury, or rights violations caused by these AI systems so far. The event is about the initiation of testing and deployment, which could plausibly lead to incidents in the future given the nature of autonomous driving technology. Therefore, it fits the definition of an AI Hazard, as it describes a circumstance where the use of AI systems could plausibly lead to harm, but no harm has yet materialized. It is not Complementary Information because the article is not about responses or updates to past incidents, nor is it unrelated as it clearly involves AI systems and their deployment.
Apollo Go Secures Dubai's First Fully Driverless Testing Permit, First Overseas Base Opens Simultaneously

2026-01-07
cn.chinadaily.com.cn
Why's our monitor labelling this an incident or hazard?
The article focuses on the approval and deployment of an AI system (autonomous driving) for fully driverless vehicle testing and operation in Dubai. While the AI system is clearly involved, there is no mention or implication of any injury, rights violation, disruption, or other harm caused or occurring. The event is about the start of testing and operational readiness, not about any incident or hazard. Therefore, it does not meet the criteria for AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides important context and updates on AI system deployment and governance in a major city, enhancing understanding of the AI ecosystem without reporting harm or risk.
One of a Kind Globally: Apollo Go Wins Dubai's Fully Driverless Testing Permit

2026-01-07
auto.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically autonomous driving technology, which is an AI system by definition. However, the article does not report any harm or incident resulting from the use or malfunction of the AI system. Instead, it reports a regulatory approval and operational expansion, which are positive developments. There is no indication of injury, disruption, rights violations, or other harms caused or plausibly caused by the AI system at this stage. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides important context and updates about the deployment and governance of AI autonomous driving systems in a new region, which is relevant for understanding the AI ecosystem and its evolution.
Apollo Go Receives First Fully Driverless Testing Permit in Dubai, UAE

2026-01-08
tech.ifeng.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of fully autonomous driving technology. The permit allows operation without safety drivers, which increases the risk that AI system failures could lead to harm. Although no harm has yet occurred, the potential for injury or disruption due to AI malfunction or errors in real-world public road conditions is credible. Hence, this is best classified as an AI Hazard, reflecting plausible future harm from the AI system's use.
Apollo Go Receives Dubai's First Fully Driverless Testing Permit, Fleet to Exceed 1,000 Vehicles

2026-01-08
auto.cnfol.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (autonomous driving technology) and their use (full unmanned testing and operation). However, there is no indication of any realized harm, malfunction, or violation of rights. The event is about the initiation and expansion of AI system testing and deployment, which could plausibly lead to future harm but no harm is reported or implied as having occurred yet. Therefore, this qualifies as an AI Hazard due to the plausible future risks associated with large-scale autonomous vehicle deployment, but not an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident or hazard, nor is it unrelated as it clearly involves AI systems.
Apollo Go Receives Dubai Fully Driverless Testing Permit, Opens First Overseas Operations Base

2026-01-08
dt.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The article reports on the approval and commencement of fully autonomous vehicle operations without safety drivers, which involves AI systems making real-time driving decisions. Although no harm has yet occurred, the deployment of such AI systems on public roads without human safety drivers plausibly could lead to incidents causing injury or harm to people or property. Therefore, this event represents a credible potential risk of AI-related harm in the near future, qualifying it as an AI Hazard rather than an Incident or Complementary Information.

Baidu's Apollo Go Secures Dubai's First Fully Driverless Testing Permit, Launches Local Operations Hub

2026-01-07
stockwatch.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (autonomous vehicles with AI driving technology) and discusses its use in public road trials without safety drivers, which inherently carries risks of harm (e.g., accidents, injury). However, the article does not mention any actual incidents, malfunctions, or harms caused by the AI system so far. The event is about the initiation of fully driverless testing and operational expansion, which could plausibly lead to AI-related incidents in the future. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Baidu's Apollo Go Secures Dubai's First Fully Driverless Testing Permit, Launches Local Operations Hub

2026-01-07
Barchart.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous vehicles) and its use (deployment and testing) but does not report any injury, rights violation, disruption, or harm caused or imminent. The permit and operations hub indicate progress and regulatory acceptance rather than harm or credible risk of harm. The article focuses on the expansion and regulatory framework, which fits the definition of Complementary Information rather than an Incident or Hazard.

Apollo Go secures Dubai's first fully driverless testing permit

2026-01-07
chinadaily.com.cn
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (autonomous driving technology) and its use (testing and planned deployment). However, there is no indication of any injury, disruption, rights violation, or other harm caused or occurring due to the AI system. The event is about the granting of a permit and the establishment of infrastructure to support autonomous vehicle operations, which is a development and governance milestone. Since no harm has occurred yet, but the system's deployment could plausibly lead to harm in the future, this fits the definition of an AI Hazard rather than an Incident. It is not Complementary Information because it is not an update or response to a prior incident or hazard, and it is not Unrelated because it clearly involves AI systems and their deployment.

Baidu Unit Receives Dubai Trial Permit for Fully Driverless Robotaxis

2026-01-07
Morningstar, Inc.
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles) and their deployment, but there is no indication of any realized harm or direct/indirect incidents caused by the AI systems. There is also no explicit or implicit mention of plausible future harm or hazards. Therefore, the event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides context on AI deployment and industry expansion without reporting harm or risk.

Baidu Apollo Go Autonomous Vehicles Operations And Control Centre Opened In Dubai

2026-01-08
UrduPoint
Why's our monitor labelling this an incident or hazard?
While the event involves AI systems (autonomous vehicles and their management), there is no indication of any injury, rights violation, disruption, or other harm caused or occurring. The article focuses on the launch and operational readiness, regulatory compliance, and future plans, without mentioning any realized or potential harm. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on AI deployment in autonomous vehicles in Dubai.

Autonomous vehicle operations hub opens in Dubai

2026-01-08
Dubai Eye 103.8
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous vehicles, which are explicitly mentioned and described as being managed and operated from the new centre. No actual harm or incident is reported, but the deployment of fully driverless vehicles on public roads inherently carries plausible risks of harm to people or property. Since no harm has yet occurred, but the potential exists, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the operational setup and expansion plans, not on any realized harm or legal/governance responses, so it is not Complementary Information. It is clearly related to AI systems, so it is not Unrelated.

Dubai RTA reveals Phase 1 rollout of driverless RoboTaxis across 65 locations

2026-01-09
Gulf News
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (autonomous driving AI in RoboTaxis) but does not describe any harm or incident resulting from their use or malfunction. It reports on the deployment and trials, which is informative about AI adoption but does not indicate an incident or hazard. Therefore, this is complementary information about AI system deployment and testing, not an incident or hazard.

Dubai to launch driverless RoboTaxis across 65 locations in first phase - The Economic Times

2026-01-09
The Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems in autonomous vehicles (Baidu Apollo Go) and their trials on public roads. However, there is no indication of any injury, disruption, rights violation, or other harm caused by these AI systems so far. The event concerns the development and planned use of AI-enabled autonomous taxis, which could plausibly lead to harm in the future if issues arise, but currently no harm has materialized. Therefore, this qualifies as an AI Hazard, reflecting the credible potential for future harm from the deployment of autonomous vehicles, rather than an incident or complementary information.
Apollo Go Receives Dubai's First Fully Driverless Testing Permit

2026-01-07
am730
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of AI systems (autonomous driving technology) in a real-world environment. Although no harm is reported, the testing of fully driverless vehicles on public roads without safety drivers plausibly could lead to incidents causing injury, disruption, or other harms if the AI systems malfunction or fail. Therefore, this event represents a credible potential risk of harm from AI systems, qualifying it as an AI Hazard rather than an Incident or Complementary Information.
Transport Department Issues Six Autonomous Vehicle Pilot Licences for Testing of 62 Self-Driving Private Cars and Minibuses

2026-01-11
hkcd.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving technology) in active use for testing purposes. However, the article does not report any realized harm or incidents resulting from these AI systems. Instead, it highlights ongoing development, safety measures, and improvements in AI capabilities. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information that provides context and updates on AI system deployment and governance in the transportation sector.
Apollo Go Receives Dubai's First Fully Driverless Testing Permit - Breaking Financial News

2026-01-07
明報財經網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of fully autonomous vehicles (AI systems) being tested on public roads in Dubai under official permits. Autonomous driving AI systems have inherent risks that could plausibly lead to harm (e.g., accidents causing injury or disruption). Since no actual harm or incident is reported, but the event involves real-world deployment and testing of AI systems with potential for harm, it fits the definition of an AI Hazard. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems and their deployment.
Baidu's Self-Driving Service Apollo Go Lands in Dubai | Dispatch

2026-01-07
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of AI systems (autonomous driving technology) in a real-world setting with potential safety implications. However, the article does not report any actual harm, malfunction, or incident caused by the AI system. It describes regulatory approval and operational plans, which indicate potential future risks but no realized harm yet. Therefore, this qualifies as an AI Hazard because the development and deployment of fully autonomous vehicles could plausibly lead to incidents or harms in the future, but no harm has yet occurred as per the article.
Pony.ai Plans to Enter Hong Kong's Autonomous Driving Market

2026-01-09
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of AI systems for autonomous driving, which are explicitly mentioned. However, the article describes plans and ongoing operations without reporting any harm or incidents caused by these AI systems. There is no indication of injury, rights violations, infrastructure disruption, or other harms. The content focuses on business expansion and policy environment, which is informative but does not describe an AI Incident or AI Hazard. Therefore, this is best classified as Complementary Information, providing context on AI system deployment and market expansion without direct or potential harm.
The Future of Smart Driving, Driven at Full Speed

2026-01-10
Hong Kong's Information Services Department
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving technology) in active use and testing. However, there is no indication that any harm or incident has occurred due to the AI systems' malfunction or misuse. The article focuses on the development, testing, and regulatory framework to ensure safety, with no realized harm reported. Therefore, this event represents a plausible future risk scenario where autonomous vehicles could potentially cause harm if issues arise, but currently, it is a controlled testing environment with safety measures in place and no harm reported. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Autonomous Vehicles to Go Driverless This Year; Testing Area Expanded, Speed Cap Raised to 50 km/h

2026-01-11
on.cc東網
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving systems) in active use and testing. However, the article does not report any realized harm or incidents caused by these AI systems. Instead, it discusses the expansion of testing scope, technical progress, and safety measures. There is no indication of injury, property damage, rights violations, or other harms. The article also does not describe a credible imminent risk of harm but rather ongoing controlled testing with safety oversight. Therefore, this event is best classified as Complementary Information, providing context and updates on AI system deployment and governance without reporting an AI Incident or AI Hazard.
Jensen Huang Sets the Tone at CES 2026 with "Thinking" Self-Driving AI as Alpamayo Debuts; Foxconn and Quanta Rise to Tier 1 Partners

2026-01-11
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Alpamayo) designed for autonomous driving, which is a safety-critical domain where AI malfunction or misuse could lead to injury or harm. Although no actual harm or incident is reported, the deployment of such AI systems inherently carries plausible risks of future harm. The announcement and description of the system's capabilities and planned deployment fit the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future. The article does not report any current harm or legal/governance responses, so it is not an AI Incident or Complementary Information. It is not unrelated because it concerns a specific AI system with potential safety implications.
Does Autonomous Driving Really Need No Humans? From Safety Monitoring to Regulatory Compliance, Humans Remain a Key Link

2026-01-11
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article does not report any specific incident or harm caused by AI systems, nor does it describe a plausible future harm event. Instead, it provides an overview of the current limitations and human roles in autonomous driving AI systems, as well as regulatory and operational contexts. This constitutes complementary information that enhances understanding of the AI ecosystem and its challenges, rather than describing an AI Incident or AI Hazard.
Autonomous Vehicle Test Speed Cap Raised to 50 km/h; Transport Department: On-Board Backup Operators to Switch to Remote Operation This Year | am730

2026-01-11
am730
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving vehicles with machine learning models) in active use and development. However, the article focuses on the progress and operational improvements without any mention of accidents, injuries, rights violations, or other harms. There is no indication that the AI systems have caused or are likely to cause harm imminently. Hence, it does not qualify as an AI Incident or AI Hazard. Instead, it provides contextual information about AI deployment and regulatory progress, fitting the definition of Complementary Information.
Driverless Vehicles | Transport Department: Remote Backup Operation to Launch Within the Year, Test Speed Raised to 50 km/h

2026-01-11
香港01
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving technology) in active use and development, but there is no indication of any injury, rights violation, property damage, or other harm caused by these systems. The article highlights improvements and safety oversight, with no mention of accidents or malfunctions leading to harm. Therefore, this is not an AI Incident. It also does not describe a plausible future harm or risk scenario beyond normal testing, so it is not an AI Hazard. The article provides contextual and developmental information about AI deployment and governance, fitting the definition of Complementary Information.

2026-01-11
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions autonomous vehicles, which are AI systems that make real-time decisions for navigation and control. The testing and adaptation to local road rules indicate active use of AI systems. However, there is no mention of any harm or incident resulting from these tests. The event is about the development and deployment phase with no realized or imminent harm reported. Therefore, it does not qualify as an AI Incident or AI Hazard. It provides contextual information about AI system deployment and testing, which fits the definition of Complementary Information.
Autonomous Driving | Driverless Vehicle Tests This Year, with Backup Operators Monitoring Remotely

2026-01-11
香港經濟日報HKET
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous vehicles currently under testing, with plans to move towards fully unmanned operation monitored remotely. No actual harm or incident is reported, but the nature of autonomous driving AI systems and the shift to remote backup operators imply a credible risk of future harm (e.g., accidents, safety failures). Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.