Rimac to launch Verne autonomous robo-taxi in Europe by 2026

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Croatian electric carmaker Rimac, led by CEO Mate Rimac, is developing Verne, a fully autonomous two-seat electric robo-taxi built on a dedicated platform that pairs Mobileye Drive's AI with lidar and cameras. Scheduled to debut in Zagreb and in German cities from 2026, the service carries the safety hazards inherent to driverless vehicles.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as the autonomous driving software and sensor suite enabling the robotaxi to operate without a human driver. The article focuses on the development and planned use of this AI system in public transport. No actual harm or incident is reported; the article discusses potential challenges and the ambitious timeline for deployment. Given the nature of autonomous vehicles and their potential to cause injury or disruption if malfunctioning, this qualifies as an AI Hazard—an event where AI use could plausibly lead to harm in the future. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated since the AI system and its potential impact are central to the report.[AI generated]
AI principles
Safety, Robustness & digital security, Accountability, Transparency & explainability, Privacy & data governance, Human wellbeing, Respect of human rights

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers, General public

Harm types
Physical (injury), Physical (death)

Severity
AI hazard

Business function
Manufacturing, Research and development, Monitoring and quality control, Citizen/customer service

AI system task
Recognition/object detection, Forecasting/prediction, Reasoning with knowledge structures/planning, Goal-driven organisation, Event/anomaly detection


Articles about this incident or hazard

Rimac devises an autonomous shuttle

2024-06-26
Frankfurter Allgemeine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the autonomous driving software and sensor suite enabling the robotaxi to operate without a human driver. The article focuses on the development and planned use of this AI system in public transport. No actual harm or incident is reported; the article discusses potential challenges and the ambitious timeline for deployment. Given the nature of autonomous vehicles and their potential to cause injury or disruption if malfunctioning, this qualifies as an AI Hazard—an event where AI use could plausibly lead to harm in the future. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated since the AI system and its potential impact are central to the report.

Robotaxi Rimac Verne: An autonomous early starter

2024-06-26
heise online
Why's our monitor labelling this an incident or hazard?
The Rimac Verne is an AI system (an autonomous vehicle) planned for deployment as a robotaxi. While the article discusses the potential for autonomous driving in complex urban environments, it does not report any actual harm, malfunction, or violation of rights caused by the AI system. The deployment is prospective, and the article emphasizes the preparation and expected rollout rather than any realized incident or hazard. Therefore, this event is best classified as Complementary Information, providing context and updates on AI system development and deployment without describing an AI Incident or AI Hazard.

How the "Croatian Musk" wants to equip Europe with robotaxis

2024-06-27
der Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-driven autonomous vehicle technology for robotaxis, which qualifies as an AI system. The event concerns the development and planned use of these AI systems for public transportation starting in 2026. No actual harm or malfunction has occurred yet; the robotaxi even failed to appear during a demonstration, indicating developmental challenges but no harm. Given the nature of autonomous vehicles, there is a plausible risk that their deployment could lead to incidents causing injury, disruption, or other harms in the future. Hence, this is best classified as an AI Hazard, reflecting the credible potential for harm once the system is in use.

Robotaxi Verne: Driverless experience without a steering wheel launches in Europe

2024-06-26
Neue Zürcher Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: an autonomous vehicle using AI for self-driving. The article focuses on the development and planned deployment of this AI system as a robotaxi service. There is no mention of any injury, disruption, rights violation, or other harm caused by the AI system so far. The article discusses the vision and near-future plans, implying that the AI system could plausibly lead to incidents or harms in the future (e.g., accidents, privacy issues), but none have materialized yet. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its deployment are central to the story.

Rimac plans a fleet of robotaxis

2024-06-26
HNA
Why's our monitor labelling this an incident or hazard?
The Rimac Verne robotaxi is an AI system involving autonomous driving capabilities. The article discusses its upcoming use in multiple cities, implying real-world operation. Although no incident or harm has yet occurred, the autonomous nature of the vehicle and its intended public deployment mean it could plausibly lead to injury, property damage, or other harms if the AI malfunctions or fails. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Rimac plans a fleet of robotaxis

2024-06-26
Freie Presse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (autopilot with sensors) used for autonomous taxis. No harm or incident is reported; the vehicles are under development or planned deployment. Given the nature of autonomous vehicles, there is a credible risk that their operation could lead to injury or other harms in the future. Hence, this is an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI is central to the system described.

Rimac robotaxi: Autonomous ride service Verne slated to launch in 2026

2024-06-27
ecomento.de
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: autonomous driving software with multiple sensors and AI-based decision-making. The article focuses on the introduction and planned launch of the robotaxi service, with no mention of any harm, malfunction, or misuse. Since the service is not yet operational and no harm has occurred, it fits the definition of an AI Hazard, as the autonomous system could plausibly lead to harm in the future (e.g., accidents, safety issues). It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated since AI is central to the system described.

Also for Germany: Rimac plans a fleet of robotaxis

2024-06-26
Trierischer Volksfreund. Die Zeitung für die Region Trier/Mosel
Why's our monitor labelling this an incident or hazard?
The described robotaxi is an AI system as it uses sensors and an autopilot to operate without a driver. The event concerns the planned deployment of these AI-powered autonomous vehicles, which could plausibly lead to harm such as accidents or injuries if the AI malfunctions or fails. Since no actual harm or incident is reported yet, but the potential for harm is credible, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Mate Rimac unveils autonomous luxury taxi Verne

2024-06-26
Elektroauto-News.net
Why's our monitor labelling this an incident or hazard?
The Rimac Verne is an AI system as it is a fully autonomous vehicle relying on AI technology (Mobileye Drive) for navigation and operation. The article discusses the planned use of this AI system in public urban environments starting in 2026. Although no incident or harm has been reported yet, the deployment of autonomous taxis inherently carries plausible risks of harm (e.g., accidents, safety failures). Since the event concerns the unveiling and planned deployment of an AI system that could plausibly lead to harm in the future, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI and potential harm.

New momentum for the robo-taxi: Rimac's electric two-seater aims to conquer the streets

2024-06-29
24auto.de
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving software and sensor fusion) in the development and intended use of robo-taxis. However, there is no indication of any harm, malfunction, or violation caused by the AI system at this stage. The article focuses on the launch and vision of the service, which could plausibly lead to future AI-related impacts but does not describe any current incident or hazard. Therefore, it is best classified as Complementary Information, providing context and updates on AI deployment in autonomous vehicles without reporting harm or credible risk of harm yet.

Bugatti maker presents autonomous taxi "more spacious than a Rolls-Royce"

2024-06-27
Gestión
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (autonomous driving technology) but does not describe any realized harm or incident resulting from its use. The article discusses the potential benefits and deployment plans but does not mention any accident, malfunction, or violation of rights. Therefore, it represents a plausible future use of AI with potential risks but no current harm. This fits the definition of an AI Hazard, as the autonomous taxi could plausibly lead to incidents in the future once deployed, but no incident has yet occurred.

Rimac Automobili to launch its electric, autonomous robotaxi system in Croatia in 2026

2024-06-26
infobae
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving technology) in development and planned use, but no harm or malfunction has occurred yet. The article discusses the intended launch and capabilities of the robotaxi service, which could plausibly lead to future AI-related incidents or hazards once deployed, but currently, it is a planned project without realized harm. Therefore, it qualifies as an AI Hazard due to the plausible future risks associated with autonomous vehicles, but not an AI Incident or Complementary Information.

What the incredible autonomous robot taxis that pay homage to Jules Verne, set to begin operating in Croatia in 2026, will be like

2024-06-27
infobae
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving AI for robotaxis. However, it does not describe any harm or incident resulting from the AI system's development or use. Instead, it presents a future deployment plan and design details, which could plausibly lead to future AI incidents or hazards but currently do not constitute an incident or hazard. The article is primarily informative about the AI ecosystem and upcoming service, fitting the definition of Complementary Information rather than an Incident or Hazard.

Google, Amazon, Tesla... and now Bugatti: the big firms bet on the "robotaxi"

2024-06-29
20 minutos
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving AI for robotaxis. It describes the development and planned use of these AI systems in real-world urban environments. Although no actual harm or incident is reported, the nature of these AI systems and their deployment in public spaces plausibly could lead to AI incidents such as accidents or safety issues. Hence, this qualifies as an AI Hazard due to the credible risk of future harm from the use of these autonomous AI systems. There is no indication of realized harm or incident, nor is the article primarily about responses or updates to past incidents, so it is not an AI Incident or Complementary Information. It is not unrelated as it clearly concerns AI systems and their societal impact.

Rimac Automobili to debut its autonomous robotaxis in Croatia in...

2024-06-26
europa press
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems (autonomous vehicles) that could plausibly lead to future harms such as accidents or disruptions, but no actual harm or incident has occurred yet. Therefore, it qualifies as an AI Hazard due to the plausible future risks associated with autonomous robotaxis, but not an AI Incident or Complementary Information.

Croatian startup presents "Verne", its autonomous robotaxi, ready to operate in 2026

2024-06-26
Forbes México
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving software) in development and planned use, but no realized harm or incident is described. The article discusses future deployment and contracts but does not report any accident, malfunction, or rights violation. Therefore, it fits the definition of an AI Hazard, as the autonomous robotaxi could plausibly lead to harm once operational, but no incident has yet occurred.

Rimac Verne, the new autonomous transport ecosystem

2024-06-30
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a level 4 autonomous vehicle platform with AI perception and decision-making capabilities. The system is not yet deployed, with robotaxis expected to start operating in 2026. No harm or malfunction is reported, nor any incident of misuse or failure. Given the nature of autonomous vehicles, there is a plausible risk of future harm (e.g., accidents, safety issues) once deployed. However, since no harm has yet occurred, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the announcement and description of the system, not on responses or updates to prior incidents, so it is not Complementary Information. It is clearly related to AI systems, so it is not Unrelated.

Bugatti-Rimac, one of the most cutting-edge brands, announces "Verne", its robotaxi service to compete with Tesla

2024-06-29
Híbridos y Eléctricos
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as it concerns autonomous vehicles (robotaxis) that rely on AI for navigation and operation. The announcement of a new autonomous taxi service indicates the development and intended use of AI systems. However, there is no indication of any harm or malfunction caused by these AI systems at this stage. The article is primarily about the launch and future plans, which fits the definition of Complementary Information as it provides context and updates on AI developments without describing an incident or hazard. Therefore, the classification is Complementary Information.

A Croatian startup readies the self-driving taxi Verne for its launch in Zagreb

2024-06-26
MarketScreener
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving software) in development and planned use, but no actual harm or incident has occurred. The article focuses on the launch and deployment plans, funding, and partnerships, without mentioning any realized harm or malfunction. Therefore, it fits the definition of an AI Hazard, as the autonomous taxi system could plausibly lead to harm in the future, but no incident has yet occurred.