Baidu to Test Apollo Go Robotaxis in Europe Amid Global Expansion


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Baidu plans to launch its Apollo Go driverless taxi service in Europe, beginning tests in Switzerland and Turkey by year-end. The Chinese tech giant is in talks with Swiss Post to set up a local entity and navigate conservative Level-4 AV regulations, as it faces competition from Uber, Tesla, and Waymo.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes the planned use of an AI system (autonomous driverless taxis) in new geographic regions. While the article does not report any harm or incidents resulting from this deployment, the introduction of AI-driven robotaxis in public spaces carries plausible risks of harm such as accidents or operational failures. Therefore, this situation represents a potential future risk rather than a realized harm. As no actual harm or incident has occurred yet, and the focus is on the preparation and planned launch, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Transparency & explainability; Privacy & data governance; Fairness; Respect of human rights

Industries
Mobility and autonomous vehicles; Robots, sensors, and IT hardware; Government, security, and defence; IT infrastructure and hosting

Harm types
Physical (injury); Physical (death); Human or fundamental rights; Reputational; Economic/Property; Public interest

Severity
AI hazard

Business function
Research and development; Monitoring and quality control

AI system task
Recognition/object detection; Forecasting/prediction; Event/anomaly detection; Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard


Baidu prepares to launch driverless taxi in Europe, WSJ reports

2025-05-14
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event describes the planned use of an AI system (autonomous driverless taxis) in new geographic regions. While the article does not report any harm or incidents resulting from this deployment, the introduction of AI-driven robotaxis in public spaces carries plausible risks of harm such as accidents or operational failures. Therefore, this situation represents a potential future risk rather than a realized harm. As no actual harm or incident has occurred yet, and the focus is on the preparation and planned launch, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Baidu's robotaxi unit plans Europe expansion

2025-05-14
CNBC
Why's our monitor labelling this an incident or hazard?
The article describes the use of fully autonomous robotaxis, which are AI systems performing real-time decision-making in public transportation. Although no harm or incident is reported, the deployment of such systems in new regions could plausibly lead to incidents involving injury or disruption. Since the article focuses on the planned expansion and does not report any realized harm, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

China's Baidu in talks to launch robotaxis in Europe

2025-05-14
Financial Times News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving robotaxis) and their planned deployment, which could plausibly lead to future harms related to safety, security, or regulatory compliance. However, the article does not report any actual incidents, malfunctions, or harms caused by these AI systems. Therefore, it does not meet the criteria for an AI Incident. Since the article focuses on plans and discussions about deployment without describing a specific hazard event or credible near miss, it does not qualify as an AI Hazard either. The content primarily provides contextual information about the AI ecosystem, market competition, and regulatory concerns, fitting the definition of Complementary Information.

China's Baidu Plans Robotaxi Expansion to Europe and Turkey

2025-05-14
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article describes Baidu's intention to deploy an AI system (autonomous ride-hailing service) in new regions. Although no incident or harm has occurred yet, the use of autonomous vehicles inherently involves AI systems that could plausibly lead to incidents such as accidents or disruptions. Therefore, this event qualifies as an AI Hazard due to the credible potential for future harm stemming from the AI system's use.

Baidu plans self-driving taxi tests in Europe this year

2025-05-14
Legit.ng - Nigeria news.
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems in self-driving taxis (robotaxis) being tested and deployed in various countries. Autonomous driving systems are AI systems that make real-time decisions affecting physical environments and human safety. Although no harm or incidents are reported, the planned testing and deployment of these systems in new regions could plausibly lead to AI incidents such as accidents or disruptions. Hence, the event fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI systems are central to the event.

Baidu prepares to launch driverless taxi in Europe

2025-05-14
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event involves the planned use of an AI system (autonomous driverless taxis) in a new geographic area. Although no harm or incident has occurred yet, the deployment of autonomous vehicles inherently carries risks that could plausibly lead to harm, such as accidents or operational failures. Therefore, this situation qualifies as an AI Hazard due to the credible potential for future harm stemming from the use of AI in driverless taxis.

Baidu could start testing its Apollo Go robotaxi service in Europe this year

2025-05-14
engadget
Why's our monitor labelling this an incident or hazard?
The article discusses the planned deployment and testing of Baidu's autonomous vehicle AI system in new regions. While autonomous vehicles are AI systems with potential safety risks, the article does not report any actual harm, malfunction, or incident caused by the AI system. The presence of drivers during initial tests further reduces immediate risk. Therefore, this is not an AI Incident. It also does not describe a credible or imminent risk of harm, only plans for testing and expansion, so it does not meet the threshold for an AI Hazard. The article is best classified as Complementary Information as it provides context and updates on the deployment of an AI system without reporting harm or credible risk of harm.

Baidu prepares to launch driverless taxi in Europe, WSJ reports

2025-05-14
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event describes the planned deployment and testing of an AI system (driverless taxi service) in new geographic locations. There is no indication that any harm has occurred or that there is an imminent risk of harm. The article focuses on the preparation and intention to launch the service, which could plausibly lead to future harms related to autonomous vehicle operation (e.g., accidents, safety issues), but no incident or harm is reported at this stage. Therefore, this qualifies as an AI Hazard due to the plausible future risk associated with deploying autonomous vehicles in new regions.

China's Baidu plans driverless taxi expansion to Europe, Turkey: report - Turkish Minute

2025-05-14
turkishminute.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous driving technology) in a real-world application (robotaxi service). Although no actual harm or incident has been reported, the article discusses the upcoming testing and deployment, as well as potential regulatory and safety concerns that imply a credible risk of future harm. Since the harm is not realized but plausible, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Baidu plans self-driving taxi tests in Türkiye, Europe this year

2025-05-14
Daily Sabah
Why's our monitor labelling this an incident or hazard?
The article describes Baidu's plans to test AI-powered autonomous taxis in new regions. These self-driving taxis are AI systems that make real-time decisions affecting physical environments. Although no harm has yet occurred, the nature of autonomous vehicle technology inherently carries risks of accidents or operational failures that could lead to injury or other harms. Since the event concerns planned testing and deployment without reported incidents, it fits the definition of an AI Hazard, reflecting plausible future harm from the AI system's use.

China's Baidu Plans Robotaxi Expansion to Europe and Turkey

2025-05-14
SWI swissinfo.ch
Why's our monitor labelling this an incident or hazard?
The article describes Baidu's intention to test and launch an AI-powered autonomous ride-hailing service in new regions. Although no harm or malfunction has been reported yet, the nature of autonomous vehicles inherently involves potential safety risks and social impacts, such as job displacement and accidents. Since these risks are plausible but not yet realized, the event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Baidu plans self-driving taxi tests in Europe this year

2025-05-14
Digital Journal
Why's our monitor labelling this an incident or hazard?
The article describes Baidu's intention to deploy AI-powered autonomous taxis in Europe and other regions. Although no harm or malfunction has been reported yet, the use of self-driving vehicles inherently involves AI systems that could plausibly lead to incidents such as accidents or safety issues. Since the event concerns planned testing and deployment with potential for future harm but no realized harm, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Baidu's robotaxi unit plans Europe expansion

2025-05-14
NBC 6 South Florida
Why's our monitor labelling this an incident or hazard?
The article describes the use of fully driverless robotaxis, which are AI systems operating autonomously in public spaces. Although no harm or incident is reported, the deployment of such AI systems in new regions could plausibly lead to incidents involving injury, disruption, or other harms. Since the article focuses on the planned expansion and not on any realized harm or incident, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

China's Baidu is planning to launch self-driving robotaxis in Europe

2025-05-14
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article primarily covers plans for deployment and market growth of AI-powered autonomous vehicles (robotaxis) by Baidu and others, which involves AI systems. However, it does not describe any actual harm or incident caused by these AI systems. The mention of a past crash involving a different company's assisted-driving system is background context, not a new incident. The article also discusses potential security concerns and regulatory scrutiny, which are relevant but do not constitute a direct or indirect harm event. Therefore, the event is best classified as Complementary Information, providing context and updates on AI system deployment and related governance issues, without reporting a new AI Incident or AI Hazard.

Baidu Plans European Robotaxi Launch, Starting with Switzerland

2025-05-15
Technology Org
Why's our monitor labelling this an incident or hazard?
The article describes Baidu's intention to deploy an AI-powered autonomous vehicle system (Apollo Go) in Europe. Autonomous vehicles are AI systems that make real-time decisions affecting physical environments and human safety. Although no incidents or harms have occurred yet, the deployment of such systems inherently carries plausible risks of harm (e.g., accidents, operational failures). Since the event concerns planned deployment and testing, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future.

Baidu plans self-driving taxi tests in Europe this year

2025-05-14
KTBS
Why's our monitor labelling this an incident or hazard?
The article describes the planned deployment and testing of AI-powered autonomous taxis by Baidu in Europe and other regions. Autonomous driving systems are AI systems that make real-time decisions affecting physical environments and safety. Although no harm or malfunction is reported, the nature of autonomous vehicles means there is a credible risk of future incidents involving injury, disruption, or other harms. Since the event concerns planned testing with plausible future risks but no realized harm, it fits the definition of an AI Hazard.

Baidu plans self-driving taxi tests in Europe this year

2025-05-14
The Anniston Star
Why's our monitor labelling this an incident or hazard?
The article describes Baidu's intention to test self-driving taxis, which are AI systems capable of autonomous navigation and decision-making. Although no incident or harm has occurred yet, the introduction of autonomous vehicles in new regions carries plausible risks of accidents or other harms due to AI system failures or errors. Since the event concerns a planned deployment with potential for harm but no realized harm, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Baidu plans self-driving taxi tests in Europe this year

2025-05-15
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving technology) and their planned use (testing and deployment of robotaxis). While the deployment of self-driving taxis carries potential risks (e.g., accidents, safety hazards), the article only discusses future plans and ongoing preparations without any reported incidents or harms. Therefore, this qualifies as an AI Hazard because the use of these AI systems could plausibly lead to harm in the future, but no harm has yet occurred or been reported. It is not an AI Incident, as no harm has materialized, nor is it Complementary Information or Unrelated, since the focus is on the potential risks of AI deployment in autonomous vehicles.

Baidu plans self-driving taxi tests in Europe this year

2025-05-14
KULR-8 Local News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (self-driving taxis) and their deployment, but there is no indication of any realized harm or incident caused by these AI systems. The article focuses on the expansion and testing plans, which is informative about the AI ecosystem but does not describe any harm or plausible harm occurring or imminent. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI system deployment without reporting an incident or hazard.

China's tech giant Baidu eyes Europe for Robotaxi launch

2025-05-14
Today.Az
Why's our monitor labelling this an incident or hazard?
Baidu's Apollo Go is an AI system for autonomous driving, so its development and intended use involve AI. The article focuses on plans and early discussions for expansion into Europe, with no current deployment or harm. Since no harm has occurred yet, but the deployment of autonomous vehicles could plausibly lead to incidents in the future, this qualifies as an AI Hazard. There is no indication of realized harm or incident, nor is the article primarily about responses or governance measures, so it is not Complementary Information. Therefore, the event is best classified as an AI Hazard.

Baidu speeds up global expansion with driverless taxi trials in Europe and Turkey

2025-05-14
るなてち
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles with AI for navigation and ride-hailing) and their planned use, but no harm or incident has occurred yet. The article highlights regulatory challenges and the need for approvals, which indicates potential future risks, but these are not described as imminent or realized harms. Therefore, the event is best classified as an AI Hazard because the deployment of autonomous vehicles could plausibly lead to incidents in the future, but no incident has yet occurred. It is not Complementary Information because it is not an update or response to a past incident, nor is it unrelated since it clearly involves AI systems and their deployment.

China's Baidu plans to launch driverless taxis in Europe

2025-05-14
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving AI in robotaxis) and their use (deployment of driverless taxis). However, no actual harm or incident is reported in the article. The deployment is planned and ongoing in China, with expansion to Europe forthcoming. While autonomous taxis have inherent risks, the article does not describe any realized harm or direct incident caused by Baidu's AI system. Therefore, this is a plausible future risk scenario but without specific evidence of harm or incident at this time. Given the imminent deployment and the known risks of autonomous vehicles, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Baidu's European robotaxi ambitions start with Switzerland and Turkey

2025-05-14
ArenaEV.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (autonomous driving technology and robotaxis) and their use, but it does not report any realized harm or incident. It also does not highlight any specific credible risk or near-miss event that would qualify as an AI Hazard. Instead, it provides updates on the deployment plans and partnerships in the autonomous vehicle sector, which fits the definition of Complementary Information as it enhances understanding of AI developments and ecosystem evolution without describing a new incident or hazard.

Baidu to test Apollo Go robotaxis in Europe as global expansion continues

2025-05-14
Dimsum Daily
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (autonomous driving and robotaxi service) and discusses its planned use and testing in new regions. However, it does not report any actual harm, malfunction, or misuse resulting from the AI system. The mention of regulatory challenges and safety trials indicates awareness of potential risks but does not describe any realized or imminent harm. Thus, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual and strategic information about AI deployment and market expansion, fitting the definition of Complementary Information.

Baidu Accelerates Robotaxi Expansion and Opens the European Door for NASDAQ:BIDU by ActivTrades

2025-05-16
TradingView
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous vehicle AI for robotaxis, but it primarily reports on planned expansion and market competition without any indication of harm or malfunction. There is no evidence of injury, rights violations, infrastructure disruption, or other harms caused or plausibly caused by the AI systems. The focus is on business strategy and investment context, which fits the definition of Complementary Information as it enhances understanding of AI ecosystem developments without describing a new incident or hazard.

Chinese firm brings autonomous taxis to Europe, first tests in Switzerland

2025-05-14
m.winfuture.de
Why's our monitor labelling this an incident or hazard?
The article describes the planned deployment and testing of an AI system—autonomous taxis—in a new market. While the system is AI-based and involves autonomous operation, the article does not report any realized harm or incidents resulting from the AI system's use. It discusses regulatory compliance and operational plans but no accidents, malfunctions, or rights violations. Therefore, this event represents a plausible future risk scenario (AI Hazard) rather than an incident or complementary information about a past event.

China's IT giant: Baidu plans European expansion of its robotaxis

2025-05-14
Focus
Why's our monitor labelling this an incident or hazard?
The article describes Baidu's intention to deploy autonomous robotaxis in Europe, which are AI systems capable of autonomous navigation and decision-making. Although no incident or harm has occurred yet, the expansion of such AI systems into new territories plausibly could lead to AI incidents such as accidents or safety issues. Since the event concerns a planned deployment with potential for harm but no realized harm, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Baidu wants to bring its Apollo Go robotaxi service to Europe

2025-05-14
heise online
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous driving software) and its deployment in new regions, which could plausibly lead to harm if safety issues arise. However, since no harm or malfunction has occurred yet, and the article focuses on plans and regulatory considerations rather than an incident or realized harm, this qualifies as an AI Hazard. The potential for future harm exists due to the nature of autonomous vehicles, but no direct or indirect harm has been reported so far.

Chinese firm wants to displace Swiss taxis - without a driver!

2025-05-15
Blick.ch
Why's our monitor labelling this an incident or hazard?
The event involves the use and planned deployment of an AI system—autonomous taxis powered by AI for navigation and control. However, the article only describes intentions, plans, and regulatory context without reporting any actual harm or incidents caused by these AI systems. There is no indication that any injury, rights violation, or other harm has occurred or that a specific hazard event (such as a near miss or credible risk materializing) has taken place. The concerns about data privacy and safety are noted as considerations but not as realized harms or imminent hazards. Therefore, this is best classified as Complementary Information, providing context and updates on AI system deployment and governance without describing an AI Incident or AI Hazard.

Chinese firm brings autonomous taxis to Europe, first tests in Switzerland

2025-05-14
WinFuture.de
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving AI) and its planned use (deployment of autonomous taxis). However, the article does not describe any realized harm or incident caused by the AI system, nor does it indicate a plausible risk of harm occurring imminently. It mainly provides information about the expansion plans, regulatory environment, and operational details, which fits the definition of Complementary Information. There is no direct or indirect harm reported, nor a credible warning of potential harm that would qualify as an AI Hazard.

Baidu launches self-driving taxis in Europe: a new step in mobility

2025-05-14
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous taxis using Baidu's Apollo AI platform) being deployed in a new region. However, there is no mention of any accidents, malfunctions, or harms caused by the AI system. The article focuses on the introduction and testing phase, regulatory considerations, and potential future developments. Since no harm has occurred yet but there is a plausible risk inherent in deploying autonomous vehicles, this qualifies as an AI Hazard rather than an Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems.

Baidu deals Tesla another blow: it will launch its driverless taxis in Europe first

2025-05-14
Cinco Días
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving technology) and their use (deployment and testing of robotaxis). However, the article does not describe any direct or indirect harm caused by these AI systems, nor does it indicate any plausible imminent harm or hazards. It is primarily an update on the development and expansion of AI-driven autonomous vehicle services, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments without reporting new incidents or hazards.

Baidu gets ahead of Tesla: it plans to launch its driverless robotaxi service in Europe before the end of the year

2025-05-14
El Español
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous robotaxis) and their planned use, which could plausibly lead to harm (e.g., accidents, injuries) given the nature of autonomous driving technology. However, the article does not report any actual harm or incidents resulting from Baidu's robotaxis in Europe or other new locations. The mention of Amazon's past accident is background context. Therefore, this is a potential risk scenario rather than a realized harm. The article primarily provides information about upcoming deployments and industry developments, which fits the definition of an AI Hazard or Complementary Information. Since the article focuses on plans and expansions without describing a specific incident or harm, and includes broader context about the AI ecosystem and regulatory environment, it is best classified as Complementary Information rather than a direct hazard or incident.

Tesla is banking on the robotaxi as its next billion-dollar business. China is already in talks to get ahead of it in Europe

2025-05-16
Xataka
Why's our monitor labelling this an incident or hazard?
The article centers on the development and planned deployment of AI-powered autonomous vehicles (robotaxis), which are AI systems by definition. However, it does not describe any actual harm or incident resulting from their use or malfunction. Instead, it reports on ongoing trials, business strategies, regulatory environments, and market competition. There is no mention of accidents, injuries, rights violations, or other harms caused by these AI systems. Therefore, the event is best classified as Complementary Information, as it provides context and updates on AI system development and deployment without reporting a new AI Incident or AI Hazard.

Baidu prepares the launch of a driverless taxi in Europe

2025-05-15
El Economista
Why's our monitor labelling this an incident or hazard?
Baidu's Apollo Go is an AI system for autonomous taxis. The article describes plans to test and deploy this system in Europe, which involves the use of AI in real-world transportation. Although no harm has been reported, the nature of autonomous vehicles means there is a credible risk that the AI system could lead to incidents causing injury or disruption. Since the harm is potential and not yet realized, this qualifies as an AI Hazard rather than an AI Incident. The other parts of the article unrelated to Baidu's autonomous taxis do not affect this classification.

Driverless taxis arrive in Europe

2025-05-16
Cadena 3 Argentina
Why's our monitor labelling this an incident or hazard?
The article describes the deployment and planned use of autonomous taxis (robotaxis) which are AI systems performing complex real-time decision-making for navigation and passenger transport. However, the article does not report any realized harm or incidents caused by these AI systems. It highlights regulatory and safety challenges as potential hurdles but does not describe any accidents, injuries, rights violations, or other harms that have occurred. Therefore, the event is best classified as an AI Hazard because the autonomous vehicles' use could plausibly lead to incidents or harms in the future, given the regulatory and safety concerns, but no harm has yet materialized. It is not Complementary Information because the focus is not on responses or updates to past incidents, nor is it Unrelated since AI systems are central to the event.

Baidu's self-driving cars target the European market, testing set to begin in Switzerland by year-end

2025-05-14
早报
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (autonomous driving technology) and discusses its planned use and testing. However, there is no indication of any harm, malfunction, or violation caused by the AI system at this stage. The event is about the potential future deployment of an AI system that could plausibly lead to incidents if problems arise, but no such harm or incident is reported or implied as imminent. Therefore, it qualifies as an AI Hazard because the deployment of autonomous vehicles carries plausible risks of harm in the future, even though no harm has yet occurred.

China plans to launch robotaxis in Europe

2025-05-14
英国金融时报中文版
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of an AI system (autonomous driving robotaxis), but no harm has occurred or is reported to have occurred. The article focuses on the expansion and testing plans, which could plausibly lead to future AI incidents if problems arise, but currently, it is only a plan without realized harm. Therefore, it qualifies as an AI Hazard because the deployment of autonomous vehicles could plausibly lead to incidents in the future, but no incident has yet occurred. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated since it involves AI systems.

Baidu's robotaxi unit plans to expand into the European market

2025-05-14
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems in fully autonomous taxis (Apollo Go) currently operating in China and planning expansion to Europe. Although no harm or incident is described, the deployment of autonomous vehicles inherently carries risks that could plausibly lead to incidents involving injury or disruption. Since the event concerns the planned expansion and operation of AI-driven autonomous taxis, it fits the definition of an AI Hazard due to the credible potential for harm in the future. There is no indication of realized harm or incident, so it is not an AI Incident. The article is not merely complementary information as it focuses on the expansion plan with implications for future risk, nor is it unrelated.

US stock movers | Baidu (BIDU.US) up over 2% on reports its Apollo Go robotaxi business plans to enter Europe this year

2025-05-14
新浪财经
Why's our monitor labelling this an incident or hazard?
The article describes the use and planned expansion of an AI system (Apollo Go autonomous driving) that currently operates fully driverless taxi services in China and plans to expand to Europe and other regions. No actual harm or incident is reported, but the deployment of autonomous vehicles inherently carries potential risks of accidents or other harms. Since the event concerns the planned use of AI systems that could plausibly lead to harm, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Driverless taxis to be tested in Türkiye

2025-05-15
Milliyet
Why's our monitor labelling this an incident or hazard?
The event describes the planned deployment and testing of an AI system (autonomous taxi service) in Turkey and Europe. However, it does not report any harm or incident resulting from the AI system's development or use, nor does it indicate any realized or imminent harm. The article is about the upcoming testing and expansion of AI technology, which is a development update without mention of harm or risk leading to harm. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides context and updates on AI system deployment and development.

Striking claim from a US newspaper: Chinese firm to trial driverless taxi service in Türkiye! - Sözcü Gazetesi

2025-05-14
Sözcü Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous driving technology) in a real-world setting (robotaxi service). However, the article only discusses plans and preparations for testing and deployment, with no indication that any harm or incident has occurred or that there is an imminent risk of harm. Therefore, it describes a potential future use of AI but does not report any realized harm or credible immediate risk of harm. This fits the definition of an AI Hazard, as the deployment of autonomous vehicles could plausibly lead to incidents in the future, but no incident has yet occurred.

Bad news for yellow taxis: driverless taxis are coming to Türkiye!

2025-05-14
Cumhuriyet
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving AI) and their planned use, which could plausibly lead to harm such as accidents or safety issues in the future. However, since no harm or incident has yet occurred, and the article mainly discusses the upcoming deployment and company background, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems with potential safety implications.