Security Flaw in Chinese-Made Electric Buses Raises Remote Control Risks in Norway


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Oslo's public transport operator Ruter discovered that Chinese-made electric buses have a software vulnerability that could allow remote control by the manufacturer or by hackers. The flaw, which stems from SIM-card-enabled remote software updates, poses a risk of unauthorized access. Norwegian authorities are investigating, and Ruter is developing digital firewalls to mitigate the threat.[AI generated]
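The summary mentions SIM-card-enabled remote updates and planned "digital firewalls" but the articles do not describe Ruter's actual design. As a minimal sketch of the "review updates before they reach the vehicle" idea, an operator-side gateway could compare each incoming firmware image's cryptographic digest against an allowlist of images the operator has already inspected. All names and values below are hypothetical.

```python
import hashlib

def is_update_approved(update_blob: bytes, approved_digests: set[str]) -> bool:
    """Allow an OTA update onto the vehicle network only if its SHA-256
    digest matches one the operator has already reviewed and approved."""
    return hashlib.sha256(update_blob).hexdigest() in approved_digests

# Hypothetical review flow: the operator inspects a firmware image offline,
# records its digest, and the depot gateway enforces the allowlist.
reviewed_firmware = b"example firmware image v1.2"
allowlist = {hashlib.sha256(reviewed_firmware).hexdigest()}

print(is_update_approved(reviewed_firmware, allowlist))      # True
print(is_update_approved(b"unreviewed payload", allowlist))  # False
```

A production scheme would more likely verify a digital signature from the operator's own key rather than a raw digest, but the gating logic is the same: nothing unreviewed reaches the bus.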

Why's our monitor labelling this an incident or hazard?

The buses are AI-enabled systems, or at least software-controlled vehicles with network connectivity allowing remote software updates and control, from which the presence of AI or advanced automated control systems can reasonably be inferred. The vulnerability could plausibly lead to harm such as injury to people or disruption of critical infrastructure if exploited. Although no harm has yet occurred, the credible risk of remote hijacking of public transport vehicles constitutes an AI Hazard under the framework, as the event highlights a plausible future harm scenario stemming from the AI system's use and potential malfunction or exploitation.[AI generated]
Industries
Mobility and autonomous vehicles
Digital security

Severity
AI hazard


Articles about this incident or hazard


Norway reveals security flaw in Chinese-made electric buses; remote control feared

2025-10-29
Yahoo News
Why's our monitor labelling this an incident or hazard?
The buses are AI-enabled systems, or at least software-controlled vehicles with network connectivity allowing remote software updates and control, from which the presence of AI or advanced automated control systems can reasonably be inferred. The vulnerability could plausibly lead to harm such as injury to people or disruption of critical infrastructure if exploited. Although no harm has yet occurred, the credible risk of remote hijacking of public transport vehicles constitutes an AI Hazard under the framework, as the event highlights a plausible future harm scenario stemming from the AI system's use and potential malfunction or exploitation.

Chinese-made electric buses hit by security flaw "allowing remote control"; Norwegian government to investigate

2025-10-29
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The buses' software enabling remote control is an AI system or automated system that can influence physical environments. The vulnerability allows potential malicious remote control, which could plausibly lead to harm to people or disruption of critical infrastructure. No actual harm has been reported yet, but the risk is credible and significant, meeting the definition of an AI Hazard. The event does not describe realized harm, so it is not an AI Incident. It is more than complementary information because it reports a concrete security vulnerability with plausible future harm.

Norway reveals security flaw in Chinese-made electric buses; remote control feared

2025-10-29
Central News Agency
Why's our monitor labelling this an incident or hazard?
The buses are equipped with software that enables remote control, which is an AI-related system feature involving networked software updates and control. The vulnerability could plausibly lead to an AI Incident if exploited, causing harm to people or disruption of critical infrastructure. Since no harm has yet occurred but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident. The article also mentions ongoing efforts to develop digital firewalls and government assessment, which are responses but do not change the classification.

Norway reveals security flaw in Chinese-made electric buses that could allow remote control

2025-10-29
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The buses' software enabling remote control constitutes an AI system or automated system managing vehicle operations. The identified security flaw could plausibly lead to an AI Incident by allowing malicious remote control, which could cause harm to people or disrupt critical infrastructure. Since no actual harm has been reported yet, but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential risk and ongoing mitigation efforts, not on realized harm.

With a card from Romania: Norwegians suddenly discover that China can remotely control and stop 850 of their electric buses

2025-11-01
Yahoo!
Why's our monitor labelling this an incident or hazard?
The buses are AI systems or at least AI-enabled systems with remote control and diagnostic capabilities. The event reveals that the manufacturer can remotely control and stop the buses, which directly risks passenger safety and disrupts critical infrastructure (public transport). This is a direct harm caused by the use and potential misuse of the AI system's remote control features. Therefore, this qualifies as an AI Incident due to direct harm and disruption caused by the AI system's use and vulnerabilities.

Norwegians suddenly discover that China can remotely control and stop 850 of their electric buses

2025-11-01
Focus
Why's our monitor labelling this an incident or hazard?
The buses are equipped with AI systems that allow remote monitoring and control, which has been demonstrated by a secret test revealing the Chinese manufacturer can fully control and stop the buses remotely. This constitutes a direct AI Incident because the AI system's use (remote control) has directly led to a significant safety and security risk, potentially causing harm to people or disruption of critical infrastructure. The event is not merely a potential hazard but a realized incident of AI system misuse or vulnerability.

Norwegians buy electric buses, then notice that China can control them remotely

2025-11-03
www.Bluewin.ch
Why's our monitor labelling this an incident or hazard?
The buses are equipped with AI or automated remote control systems that allow remote operation, which is a clear AI system involvement. The event stems from the use and potential misuse of these AI systems. While no actual harm has been reported, the ability to remotely stop or lock buses presents a credible risk of injury or disruption, meeting the criteria for an AI Hazard. The event does not describe realized harm, so it is not an AI Incident. It is more than complementary information because it reveals a significant security vulnerability with potential for harm.

Norwegians appalled: Chinese manufacturer has remote control over electric buses

2025-11-03
WinFuture.de
Why's our monitor labelling this an incident or hazard?
The buses' software and control systems involve AI or AI-like autonomous control capabilities, as inferred from the ability to remotely control, update, or disable the buses. The event involves the use and potential misuse of these AI systems leading to a direct risk of disruption to critical infrastructure (public transport). The harm is not hypothetical but demonstrated by the tests revealing the vulnerability, and the potential for harm is significant. The event is not merely a warning or potential risk (hazard), but a confirmed security incident involving AI systems. Hence, it meets the criteria for an AI Incident.

Norwegian transport operator warns of security risk from Chinese electric buses

2025-11-03
rnd.de
Why's our monitor labelling this an incident or hazard?
The buses' OTA update feature implies the use of AI systems for diagnostics and control, which can be remotely accessed. The warning from the transit operator about the potential for remote control misuse indicates a credible risk of disruption to critical infrastructure (public transport). Since no actual harm has been reported yet, but the risk is plausible and significant, this event qualifies as an AI Hazard rather than an AI Incident. The operator's plan to implement update review mechanisms is a response to this hazard but does not change the classification of the event itself.

Norwegian electric buses can be remotely controlled from China

2025-11-03
Nau
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the buses can be remotely controlled or disabled due to security flaws, which directly threatens the safety and operation of public transportation, a critical infrastructure. The buses' control systems and diagnostics likely involve AI or AI-enabled components, given their complexity and remote management features. The harm is direct and significant, involving potential injury or disruption. Hence, this is an AI Incident rather than a mere hazard or complementary information. The event is not unrelated because it concerns AI-enabled systems and their security vulnerabilities leading to harm.

Norwegian company Ruter's electric bus test crisis

2025-11-02
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The electric buses' remote control and software update capabilities imply the use of AI or advanced automated systems managing vehicle functions. The concern about these systems being exploited to disable vehicles remotely indicates a credible risk of harm to passengers, operators, or public safety. Since no actual disabling incident or harm has been reported yet, but the potential for such harm is clearly articulated and plausible, the event fits the definition of an AI Hazard rather than an AI Incident. The article also discusses security measures and regulatory responses, but the main focus is on the potential risk rather than a realized harm.

Norway's Ruter tested its Yutong buses

2025-11-02
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI or automated software systems in electric buses that can be remotely controlled and updated. Experts warn that malicious exploitation of these capabilities could disable the buses, posing a risk to public safety and transportation infrastructure. No actual harm has been reported yet, but the plausible future harm from such a security vulnerability is credible and significant. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Norway shocked: China can remotely control 850 electric buses deployed in the country

2025-11-03
newstime.joyn.de
Why's our monitor labelling this an incident or hazard?
The buses are equipped with remote control capabilities, which likely involve AI or automated systems for navigation or operation. The remote control from China over buses operating in Norway introduces a credible risk of harm to public safety and infrastructure management if the system is misused or malfunctions. Since no actual harm is reported yet, but the potential for significant harm is clear, this event qualifies as an AI Hazard rather than an AI Incident.

The "Chinese bus" crisis in Europe: "They can be controlled remotely"

2025-11-02
F5Haber
Why's our monitor labelling this an incident or hazard?
The event involves AI systems embedded in electric buses that can be remotely controlled and updated, which is a clear AI system involvement. Although no direct harm has been reported, experts warn that malicious use of these remote control capabilities could disable buses, posing risks to public safety and critical infrastructure. This potential for harm qualifies the event as an AI Hazard rather than an Incident, as the harm is plausible but has not yet materialized. The article also discusses governance and security responses, but the main focus is on the potential risk from AI system vulnerabilities.

Danish authorities in rush to close security loophole in Chinese electric buses

2025-11-05
The Guardian
Why's our monitor labelling this an incident or hazard?
The event involves AI systems or AI-enabled software controlling electric buses, which can be remotely accessed and manipulated. While no direct harm has occurred yet, the potential for remote deactivation of buses represents a plausible risk of disruption to critical infrastructure (public transport). Therefore, this qualifies as an AI Hazard because the AI system's use or malfunction could plausibly lead to an AI Incident involving disruption of critical infrastructure. The article focuses on the investigation and risk mitigation rather than an actual incident, so it is not an AI Incident. It is more than complementary information because it highlights a credible security risk with potential harm.

Norway transport firm steps up controls after tests show Chinese-made buses can be halted remotely

2025-11-05
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The buses' control systems involve AI or AI-like capabilities for remote software updates and diagnostics, which can be reasonably inferred as AI systems managing vehicle functions. The test results show that the manufacturer has direct digital access, which could be exploited maliciously. No actual harm or incident has been reported yet, but the potential for harm (e.g., remote halting causing accidents or service disruption) is credible. Hence, this is an AI Hazard rather than an AI Incident. The event is not merely general AI news or complementary information, as it highlights a specific plausible risk from AI system use in public transport vehicles.

Norway Transport Firm Steps up Controls After Tests Show Chinese-Made Buses Can Be Halted Remotely

2025-11-05
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The buses' control systems allow remote software updates and diagnostics, which involve AI or advanced automated systems managing vehicle operations. The ability for the manufacturer to remotely stop buses represents a direct control capability that, if misused or hacked, could lead to harm such as injury to passengers or disruption of critical public transport infrastructure. Although no incident has occurred yet, the described scenario plausibly could lead to an AI Incident. Therefore, this event fits the definition of an AI Hazard, as it highlights a credible risk of harm stemming from the AI system's use and potential malfunction or misuse.

Norway transport firm tightens security after Chinese buses found hackable

2025-11-05
Business Standard
Why's our monitor labelling this an incident or hazard?
The buses' control systems involve AI or advanced automated software capable of remote updates and diagnostics, which is a form of AI system involvement. The event does not report any actual harm or incident caused by hacking or misuse but highlights a credible risk that the manufacturer or a malicious actor could remotely disable buses, potentially causing harm or disruption. The transport operator's response to tighten security and implement firewalls confirms the recognition of this plausible risk. Since no realized harm has occurred yet, but the risk is credible and significant, the event is best classified as an AI Hazard.
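Several rationales above mention the operator "tightening security and implementing firewalls". None of the articles specify the mechanism, but a common framing of such a digital firewall is a deny-by-default egress filter: the bus's SIM gateway may only open connections to endpoints the operator has explicitly allowlisted. The sketch below is illustrative only; every hostname is invented.

```python
# Deny-by-default egress policy for a vehicle's cellular gateway.
# Hypothetical endpoints: only the operator's own OTA proxy and
# telemetry collector are reachable; everything else is dropped.
ALLOWED_ENDPOINTS = {
    ("ota.operator.example", 443),
    ("telemetry.operator.example", 443),
}

def egress_allowed(host: str, port: int) -> bool:
    """Return True only for operator-approved destinations."""
    return (host, port) in ALLOWED_ENDPOINTS

print(egress_allowed("ota.operator.example", 443))    # True
print(egress_allowed("oem-backend.example", 443))     # False
```

Routing manufacturer traffic through an operator-controlled proxy in this way is what lets the operator inspect or block remote commands without severing the diagnostic link entirely.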

Norway transport firm steps up controls after tests show Chinese-made buses can be halted remotely

2025-11-05
Market Beat
Why's our monitor labelling this an incident or hazard?
The buses have AI-related systems that allow remote software updates and diagnostics, which is a form of AI system involvement. The manufacturer’s ability to remotely stop the buses represents a potential risk that could lead to harm such as injury or disruption of critical infrastructure (public transport). However, the article does not report any actual harm or incident occurring yet, only the plausible risk and the operator's response to mitigate it. Hence, this is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Chinese electric buses in Norway: remotely controlled via a SIM card?

2025-11-04
taz.de
Why's our monitor labelling this an incident or hazard?
The buses include a SIM card box enabling remote digital access, which can be reasonably inferred to involve AI or automated control systems for software updates and potentially other functions. The theoretical ability to stop buses remotely during operation presents a credible risk of harm to passengers and public safety. No actual incident of harm has been reported, so it is not an AI Incident. The article focuses on the potential risks and calls for stricter cybersecurity measures and regulations, fitting the definition of an AI Hazard. It is not merely complementary information because the main focus is on the plausible future harm from the AI system's capabilities.

Norway transport firm steps up controls after tests show Chinese-made buses can be halted remotely

2025-11-06
Newsday
Why's our monitor labelling this an incident or hazard?
The buses' control systems involve AI-enabled software updates and diagnostics accessible remotely by the manufacturer, which qualifies as an AI system. The event concerns the use and potential misuse of this AI system's remote access capabilities. While no incidents of buses being maliciously stopped have occurred, the theoretical possibility of such exploitation poses a credible risk to public safety and transport operations. The operator's response to strengthen cybersecurity confirms recognition of this plausible future harm. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Electric buses: alarming result in Norwegian test

2025-11-04
Express.de
Why's our monitor labelling this an incident or hazard?
The buses are controlled remotely via electronic SIM cards and can be manipulated by the manufacturer, indicating the presence of AI or automated control systems. The event involves the use and potential misuse of these AI-enabled systems, which could directly lead to harm such as injury to passengers or disruption of public transport infrastructure. Since the test revealed these vulnerabilities but no actual harm has yet occurred, this qualifies as an AI Hazard due to the plausible risk of harm from remote control capabilities.

Denmark moves to close security gap in Chinese-made electric buses

2025-11-05
Tribune Online
Why's our monitor labelling this an incident or hazard?
The buses have remote access capabilities for software updates and diagnostics, which implies the use of AI or advanced automated systems managing vehicle operations. The security loophole allowing remote deactivation represents a plausible risk of disruption to critical infrastructure (public transport) and potential harm to passengers. Since no actual harm has been reported yet, but credible risks exist and authorities are investigating and taking measures, this event fits the definition of an AI Hazard rather than an AI Incident. The involvement of AI systems is reasonably inferred from the description of software control and diagnostics capabilities.

Norway transport firm steps up controls after tests show Chinese-made buses can be halted remotely

2025-11-05
WHAS 11 Louisville
Why's our monitor labelling this an incident or hazard?
The buses' control systems use AI or advanced software for remote diagnostics and updates, which can be inferred as AI systems due to their autonomous decision-making and control capabilities. The manufacturer’s ability to remotely stop or disable buses presents a credible risk of harm, including disruption of critical infrastructure and potential safety hazards. No actual harm or incident has been reported yet, but the plausible risk of such harm justifies classification as an AI Hazard. The article focuses on the potential risks and the operator's response to mitigate these risks, not on an actual incident of harm caused by AI malfunction or misuse.

Denmark races to fix security flaw after discovering China can remotely override electric buses

2025-11-05
GameReactor
Why's our monitor labelling this an incident or hazard?
The buses' control systems involve AI or advanced algorithmic components for remote diagnostics and software updates, which can be remotely accessed and potentially exploited. While no actual harm or incident has been reported, the possibility of remote deactivation or interference with buses in motion poses a credible risk of disruption to critical infrastructure (public transport). This fits the definition of an AI Hazard, as the event involves a plausible future harm stemming from the AI system's use or malfunction. The event does not describe realized harm, so it is not an AI Incident. It is more than complementary information because it highlights a credible security risk with potential for harm.

Norway transport firm steps up controls after tests show Chinese-made buses can be halted remotely

2025-11-05
Winnipeg Sun
Why's our monitor labelling this an incident or hazard?
The buses are operated by humans but have control systems accessible remotely, which implies the presence of AI or automated systems managing battery and power supply. The ability for the manufacturer to remotely stop or disable buses via mobile networks presents a plausible risk of disruption to critical infrastructure (public transport). No actual harm or incident has occurred yet, but the potential for such harm is credible. The company's response to strengthen security and implement firewalls confirms awareness of this risk. Hence, this qualifies as an AI Hazard rather than an Incident or Complementary Information.

Electric buses: alarming result in Norwegian test

2025-11-05
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The buses are equipped with AI or automated control systems that allow remote monitoring and control, which is clear AI system involvement. The event reports that the manufacturer can remotely shut down buses and control doors, which directly threatens passenger safety and public transport operation. The event reveals an actual vulnerability with a credible risk of injury or disruption to people and infrastructure, but since no actual harm has been reported yet, it qualifies as an AI Hazard rather than an AI Incident.

Norway uncovers threatening remote control capability

2025-11-04
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The article explicitly reports that the buses can be remotely controlled due to cybersecurity vulnerabilities, which directly threatens the safety of passengers and the public. The buses' control systems likely involve AI components for autonomous or semi-autonomous operation, making this an AI system issue. The harm is realized or imminent, as unauthorized remote control can cause injury or disruption. Therefore, this is an AI Incident rather than a hazard or complementary information. The event is not unrelated because it clearly involves AI-enabled systems and their malfunction or misuse leading to harm.

Norway transport firm steps up controls after tests show Chinese-made buses can be halted remotely

2025-11-05
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The buses' control systems, which include software updates and diagnostics, imply the use of AI or automated systems for vehicle management. The manufacturer's ability to remotely turn off buses represents a potential security vulnerability that could lead to disruption of critical infrastructure (public transport) or safety risks. Since the event describes a plausible risk without actual harm yet, it fits the definition of an AI Hazard rather than an AI Incident. The transport operator's planned security measures further support the recognition of this as a hazard scenario.

Norway transport firm steps up controls after tests show Chinese-made buses can be halted remotely

2025-11-05
2 News Nevada
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred here because the buses have software systems capable of over-the-air updates and remote diagnostics, which typically involve AI or advanced algorithmic control for vehicle management. The manufacturer's ability to remotely access and potentially stop the buses indicates a control system that could be AI-enabled or at least automated with sophisticated software. However, no actual harm or incident has occurred; the concern is about the potential for misuse or hacking leading to harm. Therefore, this qualifies as an AI Hazard, as the event plausibly could lead to an AI Incident if the remote control capability were exploited maliciously or malfunctioned, causing injury or disruption to critical infrastructure (public transport).

Security flaws found in Chinese-made electric buses; Norway and Denmark launch investigations

2025-11-07
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly describes security vulnerabilities in the software and network-connected subsystems of Chinese-made electric buses, which can be remotely accessed and potentially manipulated. These buses use advanced software systems that likely incorporate AI for vehicle control, diagnostics, and sensor data processing. The vulnerabilities could plausibly lead to harm by disrupting public transportation operations or causing safety incidents. Since no actual harm has been reported yet, but the risk is credible and recognized by authorities, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the identified security risks and ongoing investigations, not on responses or broader ecosystem context. It is not unrelated because the event involves AI-enabled systems with potential safety implications.

Chinese buses can be stopped remotely in Norway; more controls

2025-11-06
Euronews Español
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the manufacturer can remotely access the buses' control systems for software updates and diagnostics, which involves AI or automated software systems managing vehicle functions. While no actual incident of harm or bus stoppage has occurred, the theoretical possibility that the manufacturer or an attacker could remotely disable buses constitutes a credible risk of harm to public transportation operations and passenger safety. The operator's response to impose stricter security measures confirms recognition of this plausible threat. Since no realized harm is reported, this event is best classified as an AI Hazard rather than an AI Incident.

Norwegian transport company finds security vulnerabilities in buses made in China

2025-11-05
Chicago Tribune
Why's our monitor labelling this an incident or hazard?
The buses incorporate AI systems for remote software updates and diagnostics, which the manufacturer can access digitally. The potential for remote disabling of buses represents a plausible risk of harm to public safety and transportation infrastructure. No actual harm or incident has been reported yet, but the credible risk of malicious or accidental misuse of this AI-enabled remote control capability justifies classification as an AI Hazard. The event involves the use of AI systems, concerns cybersecurity vulnerabilities, and the operator's response to mitigate risks, but no realized harm has occurred.

Chinese-made electric buses can be remotely controlled; Norway and Denmark investigate

2025-11-07
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
An AI system can be reasonably inferred here because the buses have remote software update and diagnostic systems that likely include AI components for vehicle maintenance, optimization, and operation. The ability to remotely control or influence bus operations via software updates implies AI involvement in decision-making or system management. Although no direct harm has occurred, the potential for unauthorized remote control leading to disruption of critical infrastructure (public transportation) is a credible risk. Therefore, this event qualifies as an AI Hazard, as it plausibly could lead to an AI Incident if exploited.

Serious security flaws in Chinese electric buses; Norway and Denmark sound the alarm

2025-11-08
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the buses have SIM cards enabling remote software updates and control over critical systems like battery and power management, which implies AI or AI-like systems managing these functions. The ability to remotely control or disrupt bus operations constitutes a direct safety hazard. The reported mechanical failures and a documented crash caused by unresponsive steering further demonstrate realized harm linked to these AI-enabled systems or their components. The security vulnerabilities and actual accidents fulfill the criteria for an AI Incident, as the AI system's use and malfunction have directly led to harm to persons and property. The article does not merely warn of potential harm but reports actual incidents and ongoing risks, prioritizing the classification as an AI Incident over a hazard or complementary information.

Norway tightens controls after tests reveal risks in Chinese buses

2025-11-05
Cadena 3 Argentina
Why's our monitor labelling this an incident or hazard?
The buses incorporate AI-related systems for remote diagnostics and software updates, which can influence vehicle operation. The manufacturer's ability to remotely stop buses presents a credible risk of disruption or harm if exploited maliciously or accidentally. However, the article states that no hacking or remote control incidents have occurred, and the buses are not autonomous. The focus is on potential risks and preventive measures, fitting the definition of an AI Hazard. The event does not describe actual harm or rights violations, so it is not an AI Incident. It is also not merely complementary information, as the main focus is on the risk revealed by the tests and the security implications.

Norway and Denmark flag security concerns over Chinese electric buses, which can be remotely shut down

2025-11-07
公共電視
Why's our monitor labelling this an incident or hazard?
The buses incorporate AI systems for remote software updates and diagnostics, which can be remotely accessed by the manufacturer. The event involves the use of AI systems and highlights a credible risk that these systems could be misused to stop buses remotely, potentially disrupting critical infrastructure and endangering public safety. Since no actual harm or incident has occurred yet, but the risk is credible and plausible, this fits the definition of an AI Hazard. The article focuses on the potential for harm and the security concerns rather than a realized incident.

Unusual alert raised in Europe: buses made in China can be stopped or disabled remotely; what would happen in Bogotá?

2025-11-07
Noticias Principales de Colombia Radio Santa Fe 1070 am
Why's our monitor labelling this an incident or hazard?
The buses incorporate AI or automated control systems with remote update capabilities, which can be reasonably inferred as AI systems managing vehicle functions. The reported vulnerabilities allow the manufacturer or potentially malicious actors to remotely control or disable buses, posing a credible risk of harm or disruption. Since no actual incident has been reported yet, but the risk is plausible and significant, this fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the potential risks and ongoing security evaluations, not on realized harm.

Norway Reveals 'Chinese Electric Buses Can Be Remotely Controlled'; Denmark, Australia and Other Countries Worry about Cybersecurity Risks - 民視新聞網

2025-11-07
民視新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions remote access to vehicle control systems by the manufacturer, which implies the presence of AI or AI-enabled systems managing vehicle operations. The ability to remotely shut down buses could plausibly lead to harm to passengers or disruption of critical infrastructure, meeting the criteria for an AI Hazard. There is no direct report of harm occurring yet, only concerns and investigations, so it does not qualify as an AI Incident. The event is more than general AI-related news, as it highlights a credible risk of harm due to AI system use in critical public transport vehicles.

After Norway and Denmark, the UK Also Launches an Investigation into Chinese-Made Electric Buses - International - 自由時報電子報

2025-11-09
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves AI-related systems because the buses have software capable of remote updates and diagnostics, which implies AI or advanced algorithmic control for vehicle management. The investigations are prompted by the discovery of security vulnerabilities that could allow remote manipulation of the buses, posing a plausible risk of harm such as disruption of critical infrastructure (public transport) or safety hazards. However, no actual harm or incident has occurred yet; the article focuses on the potential risk and ongoing investigations. Thus, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

UK Government Is Examining Whether Chinese-Made Electric Buses Can Be Remotely Controlled | am730

2025-11-10
am730
Why's our monitor labelling this an incident or hazard?
The buses' remote control capability implies the presence of an AI or automated control system that can influence physical environments (vehicle operation). The UK government's investigation, alongside the National Cyber Security Centre, indicates concern about potential misuse or malfunction that could disrupt critical infrastructure. Since no harm has yet occurred in the UK but there is a credible risk of disruption, this event qualifies as an AI Hazard rather than an AI Incident. The event does not report realized harm but highlights a plausible future risk from the AI system's use or malfunction.

Chinese-Made Buses Can Be Stopped with 'One Click'? Tests Spark Debate over Electric Vehicle Security

2025-11-06
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The buses' remote software update and diagnostic capabilities imply the use of AI or advanced automated systems managing vehicle operations. The potential for remote disabling represents a credible risk of harm (e.g., disruption of critical infrastructure like public transport) if misused or hacked, fitting the definition of an AI Hazard. Since no incident of actual harm or unauthorized control has occurred, and the article focuses on the test results and mitigation efforts, this is not an AI Incident or Complementary Information but an AI Hazard.

Norway's New Electric Buses Can Be Remotely Controlled from China - International - 自由時報電子報

2025-11-06
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions remote access to control systems of electric buses, which likely involve AI or advanced software for diagnostics and control. The potential for remote disabling or loss of operation capability poses a credible risk of harm to public transport infrastructure and passenger safety. Since no actual harm has occurred yet but the risk is plausible and significant, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update but highlights a credible future risk from AI system use.

Chinese-Made Electric Buses Can Be Remotely Controlled; Norway Tightens Security Controls | Ruter | 大紀元

2025-11-06
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The buses are AI-enabled systems with remote connectivity allowing software updates and diagnostics, which implies AI system involvement in vehicle control and maintenance. The ability to remotely control or stop buses could plausibly lead to harm such as disruption of critical infrastructure or injury if misused or hacked. Since no actual harm has occurred yet but a credible risk exists, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential security risks and planned mitigations rather than reporting an actual incident of harm.

Chinese-Made Electric Buses Raise Cybersecurity Concerns; Norwegian Transport Operator Strengthens Protective Measures | International | 中央社 CNA

2025-11-06
Central News Agency
Why's our monitor labelling this an incident or hazard?
The buses' control systems are AI-enabled or at least involve sophisticated automated control and remote software updates, which fits the definition of an AI system. The event describes the use of these AI systems and the potential for malicious remote access that could disrupt bus operations or cause safety issues. Since no actual harm has occurred yet but there is a credible risk of harm due to cybersecurity vulnerabilities, this is best classified as an AI Hazard. The article focuses on the potential risks and the transport operator's plans to strengthen cybersecurity measures, indicating a plausible future harm scenario rather than a realized incident.

Chinese-Made Electric Buses Can Be Remotely Controlled; Norwegian Company Tightens Procurement Standards

2025-11-07
on.cc東網
Why's our monitor labelling this an incident or hazard?
The buses' remote control capability involves an AI system or at least an advanced automated control system that can remotely update software and diagnose faults, which can directly impact the operation and safety of the buses. The potential for forced stopping or disabling the buses constitutes a direct risk to public safety and critical infrastructure operation. Although no harm has yet occurred, the event reveals a plausible risk of harm due to the AI system's use and control capabilities. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (e.g., disruption of critical infrastructure or injury).

Chinese-Made Electric Buses Raise Cybersecurity Concerns; Norwegian Transport Operator Strengthens Protective Measures

2025-11-06
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The buses' control systems likely involve AI or advanced automated software for diagnostics, updates, and operational control; given their complexity and remote management capabilities, they can reasonably be treated as AI systems. Although no harm has occurred, the potential for remote interference with bus operation poses a credible risk of harm to public safety and transport infrastructure. Therefore, this situation qualifies as an AI Hazard: the AI system's use could plausibly lead to an incident involving disruption or harm, but no incident has yet materialized.

Norway: Chinese Buses Can Be Blocked Remotely; Checks Are Under Way

2025-11-06
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the bus manufacturer has remote digital access to vehicle control systems for software updates and diagnostics, which could theoretically be exploited to intervene on the buses, including stopping them remotely. While no actual incident of harm has occurred, the potential for such misuse poses a credible risk to public safety and critical infrastructure (public transport). The buses are equipped with advanced electronic systems that likely include AI components for diagnostics and control. The event is thus best classified as an AI Hazard, as it plausibly could lead to an AI Incident if malicious remote control or cyberattacks occur. The article also notes that measures are being taken to strengthen cybersecurity, indicating awareness of this potential hazard. There is no indication of realized harm yet, so it is not an AI Incident. It is not merely complementary information because the main focus is on the potential risk revealed by the tests, not on responses or broader ecosystem context. It is not unrelated because the event involves AI-enabled systems and plausible harm.

Norway Has Discovered That Buses Made in China Can Be Stopped Remotely

2025-11-07
Money.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the ability to remotely control and stop buses, which are AI-enabled electric vehicles relying on software updates and diagnostics. The remote access capability implies AI or advanced automated systems managing vehicle functions. While no actual harm has been reported, the potential for malicious or unintended remote control leading to disruption of critical public transport infrastructure is a credible risk. The event is thus best classified as an AI Hazard, reflecting plausible future harm from AI system use or malfunction. It is not an AI Incident because no realized harm has occurred yet, and it is not Complementary Information or Unrelated because the focus is on a specific AI-related risk with potential significant impact.

Confirmation That Europe Must Reduce Its Technological Dependence on China Comes from an Electric Bus

2025-11-06
Wired
Why's our monitor labelling this an incident or hazard?
The buses' control systems, including remote software updates and diagnostics, involve AI or AI-like systems managing vehicle operations. The ability to remotely disable buses could disrupt critical public transport infrastructure, posing risks to public safety and operational continuity. Although no incidents of harm have been reported, the vulnerability and potential for misuse constitute a credible risk of harm. Therefore, this event is best classified as an AI Hazard, as it plausibly could lead to an AI Incident involving disruption of critical infrastructure or harm to people if exploited.

Chinese Buses in Use in Norway Can Be Stopped Remotely by the Manufacturer

2025-11-07
Aduc
Why's our monitor labelling this an incident or hazard?
The buses' control systems involve AI or AI-like capabilities for remote diagnostics and updates, which can influence vehicle operation. The manufacturer's ability to remotely stop or disable buses, even if not currently exploited maliciously, represents a credible risk of harm to public transportation infrastructure and passenger safety. The event does not report actual harm or incidents but highlights a plausible future risk of harm due to the AI system's capabilities and vulnerabilities. Therefore, it fits the definition of an AI Hazard rather than an AI Incident. The article also discusses responses and security measures, but the main focus is on the potential risk rather than a realized incident.

Alarm over Chinese Electric Buses: Hundreds of Vehicles Can Be Switched Off Remotely. Denmark Scrambles to Respond

2025-11-06
Greenmove
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions remote access to control systems of electric buses, which likely involve AI-based software for vehicle management and diagnostics. The possibility of remote disabling during operation poses a credible risk of harm to passengers and public safety, meeting the criteria for an AI Hazard. Since no actual harm or incident has occurred yet, and the authorities are investigating and taking preventive measures, this event does not qualify as an AI Incident. It is not merely complementary information because the main focus is on the potential risk and vulnerability, not on responses to a past incident. Therefore, the classification is AI Hazard.

A Serious Problem Has Been Found in the Chinese Buses: They Can Be Stopped Remotely, Even While in Motion

2025-11-05
Portfolio.hu
Why's our monitor labelling this an incident or hazard?
The buses' control systems involve remote software access, which likely includes AI or advanced automated control components for diagnostics and updates. The ability to remotely shut down buses while in operation poses a credible risk of harm to people and disruption of critical infrastructure. However, the article states that no known incidents of remote shutdown causing harm have occurred so far, and the authorities are investigating and planning mitigation measures. Hence, the event is best classified as an AI Hazard due to the plausible future harm from the AI-related vulnerability.

Big Trouble: These Chinese Buses Can Be Controlled Remotely, Even While in Motion

2025-11-06
divany.hu
Why's our monitor labelling this an incident or hazard?
The buses' remote control capability implies the presence of AI or AI-enabled control systems managing vehicle operation. A security vulnerability that allows remote manipulation of these systems while a bus is in motion directly threatens passenger safety and public health, fitting the definition of harm to people. The event involves the use and malfunction (security breach) of AI systems that leads, or plausibly could lead, to harm. The authorities' response and ongoing investigations confirm the seriousness of the issue. This entry is therefore classified as an AI Incident rather than a hazard or complementary information, as the risk of harm is treated as realized and is under active scrutiny.

The Chinese Bus Scandal Is Swelling in Europe: Can the Manufacturer Shut Down the Vehicles Remotely at Any Time? - Pénzcentrum

2025-11-05
Pénzcentrum
Why's our monitor labelling this an incident or hazard?
The article details a security flaw in AI-enabled electric buses that could allow remote control and stopping of vehicles, posing a credible risk to passenger safety and public transport operations. While no incident of harm has yet occurred, the potential for such harm is clear and under active investigation. The AI system's involvement is inferred from the description of remote diagnostics, software updates, and sensor-equipped subsystems, which are typical AI applications in modern vehicles. Since the harm is plausible but not realized, this is best classified as an AI Hazard rather than an AI Incident.

The Chinese Manufacturer Can Remotely Shut Down Its Electric Buses in Europe; Denmark Is Working Flat Out on a Solution | szmo.hu

2025-11-06
szeretlekmagyarorszag.hu
Why's our monitor labelling this an incident or hazard?
The buses' control systems involve software that likely includes AI components for diagnostics and remote management. The ability to remotely shut down buses via software access represents a malfunction or misuse risk of these AI systems. Since no actual harm or shutdown incidents have been reported, but the vulnerability could plausibly lead to significant harm (disruption of public transport, safety risks), this fits the definition of an AI Hazard. The event is not merely general AI news or complementary information because it highlights a credible security risk with potential for harm. It is not an AI Incident because harm has not yet occurred.

Chinese-Made Electric Buses Can Be Remotely Controlled; Norway Tightens Security Controls | Ruter | 大纪元

2025-11-06
The Epoch Times
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred because the buses have remote connectivity for software updates and diagnostics, which likely involves AI or automated systems managing vehicle functions. The event involves the use of AI-enabled remote control systems in the buses. While no direct harm has occurred, the possibility that unauthorized remote control could disrupt bus operations or cause safety issues constitutes a plausible future harm. Therefore, this event qualifies as an AI Hazard. The article focuses on the potential risks and the operator's planned security responses rather than reporting an actual incident of harm, so it is not an AI Incident or Complementary Information. It is not unrelated because the remote control system is AI-related and relevant to safety risks.

UK Investigates Whether Chinese-Made Electric Buses Can Be Remotely Stopped - FT中文网

2025-11-10
英国金融时报中文版
Why's our monitor labelling this an incident or hazard?
The buses' control systems likely incorporate AI or advanced automated systems enabling remote software updates and diagnostics, from which the presence of AI systems can reasonably be inferred. The concern is about potential misuse or unauthorized remote control leading to disruption of critical infrastructure (public transportation). Since no actual harm has occurred yet, but credible investigations and concerns exist about the possibility of remote disabling, this fits the definition of an AI Hazard. The event does not describe a realized incident but a plausible future risk stemming from the AI capabilities built into the buses.

UK's Fevered Investigation of Chinese Electric Buses: Groundless Accusations Stir Up Fresh Waves

2025-11-10
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves an AI system or AI-enabled system (electric buses with software capable of remote updates and diagnostics, implying AI or advanced software control). The UK government and cybersecurity center are investigating the potential for remote control that could lead to harm (e.g., remote shutdown causing accidents or disruption). Since no actual harm or incident has been reported, but there is a plausible risk of harm if the buses can be remotely controlled maliciously, this qualifies as an AI Hazard. The article does not report any realized harm or incident, only the potential risk and ongoing investigation.

Chinese-Made Electric Buses Can Be Stopped with 'One Click'? The Question Sparks a European Debate on Technology Security - cnBeta.COM 移动版

2025-11-10
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems embedded in electric buses that enable remote software updates and diagnostics, from which AI systems managing vehicle control and maintenance can reasonably be inferred. Although no actual harm or misuse has occurred, the test demonstrates a plausible risk that these AI capabilities could be exploited to cause harm, such as stopping buses remotely and potentially endangering passengers. This therefore qualifies as an AI Hazard, because it plausibly could lead to an AI Incident if the remote control capability were misused or hacked. The article focuses on the potential risk and ongoing mitigation efforts rather than on an actual incident, so it is not an AI Incident. It is more than complementary information because it reports a credible security risk revealed by testing, not just a response or update.