US to Propose Rules Restricting Chinese Vehicle Software


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US Commerce Department will issue proposed export controls next month to limit key connected-vehicle software components from China and other adversary nations. Export controls chief Alan Estevez said critical driver-management and data-handling systems must be produced in allied nations to mitigate the national security risks posed by malicious or faulty remote access.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves AI systems insofar as connected vehicles rely on sophisticated software managing driving systems and data, which can be reasonably inferred to include AI components for autonomous or semi-autonomous functions and data processing. The U.S. government's concern about national security risks and the possibility of software being disabled leading to catastrophic outcomes indicates a plausible future harm scenario. Since no actual harm has yet occurred but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident. The focus is on potential harm from the use or misuse of AI-enabled vehicle software, not on a realized incident or harm.[AI generated]
AI principles
Robustness & digital security; Safety; Privacy & data governance; Respect of human rights; Accountability

Industries
Mobility and autonomous vehicles; Digital security; Government, security, and defence; Robots, sensors, and IT hardware

Harm types
Physical (injury); Physical (death); Human or fundamental rights; Public interest; Economic/Property; Reputational

Severity
AI hazard


Articles about this incident or hazard


US to issue proposed rules limiting Chinese vehicle software in August

2024-07-17
Reuters

US to issue rules to limit China vehicle software

2024-07-17
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in connected vehicles, specifically software managing vehicle functions and data. The US government is proposing export controls to limit software from China due to national security risks, including the possibility of software being disabled or compromised, which could lead to harm. No actual harm has been reported yet, but the credible risk of future harm from malicious or faulty AI software in vehicles justifies classification as an AI Hazard. The article does not describe an incident with realized harm, nor is it merely complementary information or unrelated news.

US expected to propose barring Chinese software in autonomous vehicles

2024-08-04
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article discusses a planned regulatory proposal that would bar certain AI-enabled software from Chinese sources in autonomous vehicles, which could plausibly lead to preventing AI-related harms in critical infrastructure (transportation). Since no actual harm or incident has occurred yet, and the focus is on a future regulatory measure to address potential risks, this qualifies as an AI Hazard. It is not an AI Incident because no harm has been realized, nor is it Complementary Information since it is not an update or response to a past incident. It is not unrelated because it clearly involves AI systems in autonomous vehicles and their regulation.

US expected to propose barring Chinese software in autonomous vehicles

2024-08-05
ETAuto.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly, as it discusses autonomous vehicles with Level 3 automation, which rely on AI for driving tasks. The event concerns the use and development of AI software in these vehicles and the national security risks posed by software from certain foreign entities. However, no actual harm or incident has occurred yet; the article describes a proposed rule to prevent potential future harms related to cybersecurity, data privacy, and control of autonomous vehicles. Therefore, this is an AI Hazard, as the development and use of AI systems in autonomous vehicles could plausibly lead to harms such as breaches of security or control, but these harms have not yet materialized.

US expected to propose barring Chinese software in autonomous vehicles

2024-08-05
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems through the mention of Level 3 autonomous vehicles and connected vehicle software, which rely on AI for automation and communication. The U.S. government's proposed ban is motivated by concerns over national security risks, which implies potential future harms if such software were allowed. However, no actual harm or incident has occurred yet, and the article centers on the planned regulatory action and international discussions addressing these risks. This fits the definition of Complementary Information, as it details governance responses and risk management efforts related to AI systems in autonomous vehicles, rather than reporting a direct or indirect AI Incident or an AI Hazard where harm is plausible but not yet addressed.

US wants to take its Chinese software ban to this industry now

2024-08-06
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of connected and autonomous vehicle software, which are AI systems by definition due to their advanced automation capabilities and decision-making functions. The US government's proposed ban is motivated by concerns that such AI systems could be exploited to cause harm, such as unauthorized data collection or control interference, which could lead to injury, disruption, or violations of rights. Since the harm is potential and the event is about a regulatory proposal to prevent such harm, this qualifies as an AI Hazard rather than an Incident or Complementary Information. The article does not report any realized harm or incident, nor is it a response or update to a past event, so it is not Complementary Information. It is not unrelated because it clearly involves AI systems and their risks.

US expected to propose barring Chinese software in AVs

2024-08-06
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article discusses a forthcoming proposal to restrict software from certain foreign entities in autonomous vehicles, which are AI systems. While this reflects concerns about potential risks, no actual harm or incident has occurred yet. Therefore, this is best classified as an AI Hazard, as the proposal addresses plausible future risks related to AI system use in vehicles, rather than an AI Incident or Complementary Information.

US Moves to Ban China Software in Autonomous Cars, Reuters Says

2024-08-05
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous vehicles (Level 3 automation and above implies AI-driven decision-making). The US government's proposed ban is motivated by concerns that the use of Chinese AI software could lead to data security breaches, which could plausibly lead to harm (e.g., violations of privacy, national security risks). However, the article does not report any realized harm or incident, only a potential risk leading to a regulatory proposal. Therefore, this qualifies as an AI Hazard, as the development or use of AI systems could plausibly lead to harm but no incident has occurred yet.

US expected to ban Chinese software in self-driving vehicles

2024-08-05
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in autonomous vehicles and highlights concerns about potential misuse or security risks that could plausibly lead to harm, such as unauthorized surveillance or vehicle control. Since no actual harm or incident has occurred yet, and the focus is on a proposed ban to mitigate future risks, this qualifies as an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated, as the event directly concerns AI system risks and regulatory responses.

US Reportedly Will Ban Key Chinese Software in Autonomous Vehicles

2024-08-05
Investopedia
Why's our monitor labelling this an incident or hazard?
The article explicitly references software enabling Level 3 autonomous driving, which involves AI systems capable of controlling vehicles with limited human intervention. The U.S. government's planned ban is a response to potential national security risks, indicating a credible concern that these AI-enabled systems could lead to harm if allowed. Since no actual harm or incident has occurred yet, but the risk is credible and recognized by authorities, the event fits the definition of an AI Hazard. It is not Complementary Information because the main focus is on the potential risk and regulatory response, not on updates or responses to past incidents. It is not Unrelated because AI systems are central to the autonomous vehicle software in question.

US to Call for Limits on Chinese Vehicle Software Over Data Security Concerns

2024-08-06
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article discusses a policy proposal aimed at limiting the use and testing of Chinese autonomous vehicle software in the US due to security concerns. The software involved is AI-based, given its role in autonomous and connected vehicles. No actual harm has been reported yet, but the concerns imply a plausible risk of harm if such software were used unchecked. Therefore, this event is best classified as an AI Hazard, reflecting a credible potential for harm related to AI system use in vehicle software.

US Moves to Ban China Software in Autonomous Cars, Reuters Says

2024-08-05
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article discusses a forthcoming proposal to ban certain AI software in autonomous vehicles due to security concerns, which implies a plausible risk of harm if such software were allowed. Since the ban is not yet in effect and no harm has been reported, this constitutes an AI Hazard rather than an Incident. The involvement of AI is clear because Level 3 automation in vehicles requires AI systems for autonomous driving functions.

US expected to propose barring Chinese software in autonomous vehicles

2024-08-06
VnExpress International
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous vehicles and connected vehicle software developed by Chinese companies. The U.S. administration's planned rule is motivated by concerns that these AI systems could be exploited to cause significant harms, including unauthorized data collection, surveillance, or control of vehicles, which constitute national security risks. Since the rule is a preventive action addressing credible risks of harm from the use of these AI systems, this qualifies as an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The event is more than general AI news or policy discussion, as it focuses on a specific regulatory response to plausible AI-related risks.

U.S. Moves to Ban Chinese Software in Autonomous Vehicles

2024-08-04
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article discusses a planned regulatory measure targeting AI-enabled autonomous vehicle software from Chinese manufacturers, which could plausibly lead to impacts on the deployment and use of such AI systems in the U.S. This is a potential future risk scenario related to AI system use and governance, but no actual harm or incident has occurred yet. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

US may propose barring Chinese software in autonomous vehicles: Report

2024-08-05
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous vehicles and connected vehicle software, which are AI systems by definition. The article focuses on the planned regulatory response to potential national security and privacy harms that could arise from the use of Chinese-developed AI software in these vehicles. Since no actual harm or incident has yet occurred, but there is a credible and plausible risk of significant harm (e.g., unauthorized data collection, surveillance, vehicle control risks), this qualifies as an AI Hazard. The article does not describe a realized AI Incident or a complementary information update about a past incident, but rather a credible potential future harm prompting regulatory action.

US Expected to Propose Barring Chinese Software in Autonomous Vehicles

2024-08-04
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous vehicles and connected vehicle software, which are explicitly mentioned. The U.S. government is proposing regulatory action due to national security risks associated with these AI-enabled systems, including potential unauthorized monitoring or control of vehicles. No actual harm or incident is reported yet, but the concerns and proposed ban indicate a credible risk of future harm. Hence, this is an AI Hazard rather than an AI Incident. It is not Complementary Information because the article focuses on the proposed rule and associated risks, not on responses to past incidents. It is not Unrelated because AI systems are central to the issue.

US Reported to Mull Banning Chinese Software in Self-Driving and Connected Vehicles

2024-08-07
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous and connected vehicles, specifically software enabling Level 3 automation and advanced wireless communications, which are AI systems by definition. The U.S. government's planned ban is motivated by concerns over national security risks, implying plausible future harms related to the use of these AI systems. Since the article does not report any actual harm or incident caused by these AI systems but rather a regulatory response to potential risks, it fits the definition of an AI Hazard. The event is not Complementary Information because it is not an update or response to a past incident but a new proposed regulatory measure addressing potential future risks. It is not unrelated because it clearly involves AI systems and their potential harms.

US Prepares To Ban Chinese EV Software With Level 3 Automation On All Roadways, Citing National Security Risks

2024-08-05
ZeroHedge
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of Level 3 autonomous vehicle software and connected vehicle technologies. The event concerns the use and potential misuse of such AI systems, specifically software developed by Chinese entities, which the US government believes could pose national security risks. Since no harm has yet occurred but the risk is credible and the government is proposing a ban to prevent such harm, this fits the definition of an AI Hazard. It is not an AI Incident because no realized harm or incident is reported. It is not Complementary Information because the article is not about responses to a past incident but about a new proposed regulation addressing potential future harm. It is not Unrelated because the event clearly involves AI systems and their potential risks.

Biden Administration Targets Chinese Tech in Autonomous Cars with Proposed Ban

2024-08-04
NASDAQ Stock Market
Why's our monitor labelling this an incident or hazard?
The article discusses a forthcoming ban on Chinese software in autonomous vehicles due to national security concerns, which relates to the use of AI systems in these vehicles. However, there is no mention of any actual harm, malfunction, or incident caused by these AI systems. The event is about a proposed rule aiming to mitigate potential risks, thus it fits the category of Complementary Information as it provides governance context and response to AI-related concerns without reporting a specific AI Incident or AI Hazard.

US to ban Chinese software in self-driving cars amid national security fears

2024-08-06
The Star
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicle software) and concerns about their use and data handling, which could plausibly lead to harm related to national security (a form of harm to communities and critical infrastructure). Since the article focuses on the potential risk and regulatory measures to prevent harm rather than describing an actual harm event, it fits the definition of an AI Hazard. There is no indication of realized harm or incident, so it is not an AI Incident. It is not merely complementary information because the main focus is on the potential threat and regulatory proposal, not on updates or responses to past incidents.

U.S. to ban Chinese software in self-driving cars amid national security fears

2024-08-05
Fortune
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous vehicles (Level 3 and above) that use advanced software for self-driving and data collection. The U.S. government's proposed ban is motivated by concerns that the use of Chinese AI software could lead to national security harms through data misuse. Since no actual harm or incident has been reported, but a credible risk is identified and regulatory action is being prepared to mitigate it, this fits the definition of an AI Hazard. The article does not describe a realized AI Incident or complementary information about a past incident, nor is it unrelated to AI. Hence, AI Hazard is the appropriate classification.

The U.S. recently held a meeting with major allies to discuss the national security risks of connected vehicles

2024-08-06
Carscoops
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of Level 3 and above self-driving vehicle software, which are AI systems by definition. The event centers on the potential risks these AI systems could pose, such as unauthorized data collection, vehicle control, and national security threats. However, no direct or indirect harm has occurred yet; the discussion is about preventing possible future harms. Therefore, this qualifies as an AI Hazard because the development and use of these AI systems could plausibly lead to significant harms if not regulated properly. It is not an AI Incident since no harm has materialized, nor is it Complementary Information or Unrelated, as the focus is on the potential risks and regulatory responses to AI systems in connected vehicles.

US expected to propose barring Chinese software in autonomous vehicles

2024-08-05
Times LIVE
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous vehicle software with level 3 automation and connected vehicle technologies. The US government's planned regulatory proposal is a response to credible national security concerns that such AI systems could be exploited or pose risks, including data privacy and control of vehicles. Since the harms are not yet realized but are plausible and significant, this event qualifies as an AI Hazard. There is no indication of an actual AI Incident or realized harm, nor is the article primarily about complementary information or unrelated news.

Chinese self-driving and connected vehicle software facing U.S. ban

2024-08-06
TESLARATI
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in autonomous and connected vehicles, which are AI systems by definition. The U.S. Commerce Department's proposed ban is motivated by concerns over national security risks associated with these AI-enabled systems. Since the ban is not yet in effect and no harm has been reported as having occurred, the event does not meet the criteria for an AI Incident. Instead, it reflects a credible potential for harm (national security risks) if such software were used unchecked, fitting the definition of an AI Hazard. The article does not focus on responses to past incidents or broader ecosystem updates, so it is not Complementary Information. It is not unrelated because it clearly concerns AI systems and their regulation.

U.S. Commerce Department to Propose Ban on Chinese Software in Self-Driving Cars

2024-08-04
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by AI systems but rather a proposed rule to prevent potential national security risks from Chinese software in autonomous vehicles. The focus is on future risk mitigation and regulatory governance, which fits the definition of Complementary Information as it provides context on societal and governance responses to AI-related risks. There is no direct or indirect harm reported, nor a plausible immediate hazard event described, only a policy proposal addressing potential future risks.

U.S. Weighs Ban on Chinese Software in Autonomous Vehicles

2024-08-04
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article focuses on a proposed rule that aims to prevent potential national security risks from the use of Chinese AI software in autonomous vehicles. There is no indication that any harm has yet occurred, only that the government is acting to mitigate plausible future risks. Therefore, this event qualifies as an AI Hazard because it involves the plausible future risk of harm related to AI systems in autonomous vehicles, but no actual incident or harm has been reported.

U.S. to Prohibit Chinese Software in Autonomous Vehicles

2024-08-05
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm caused by the Chinese software in autonomous vehicles but highlights credible national security risks and potential vulnerabilities that could lead to harm if exploited. The focus is on a proposed ban to mitigate these risks before any incident occurs. Therefore, this event represents a plausible future risk (AI Hazard) rather than an actual incident. It is not merely general AI news or product launch, but a governance response to a credible AI-related security threat, so it is not Complementary Information either.

US expected to propose barring Chinese software in autonomous vehicles

2024-08-05
The Business Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in autonomous vehicles and connected vehicle software, which are AI systems due to their autonomous and connected functionalities. The US government's planned rule aims to prevent potential national security risks and data privacy harms that could arise from the use of Chinese-developed AI software in these vehicles. Since no actual harm has occurred yet but the risk is credible and significant, the event fits the definition of an AI Hazard. It is not an AI Incident because no realized harm is reported, nor is it Complementary Information or Unrelated, as the focus is on a regulatory proposal addressing plausible AI-related risks.

US Moves to Ban China Software in Autonomous Cars, Reuters Says

2024-08-05
Financial Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions autonomous vehicle software developed by Chinese entities, which involves AI systems for vehicle autonomy and connectivity. The US government's concern is about potential data collection and transmission risks, which could lead to violations of privacy and national security harms. Since no actual harm has been reported yet, but the risk is credible and significant, this event fits the definition of an AI Hazard rather than an AI Incident. The focus is on preventing potential future harm from the use of these AI systems.

US weighs restricting Chinese technologies in autonomous cars

2024-08-07
ynetnews
Why's our monitor labelling this an incident or hazard?
The article discusses the potential security risks associated with AI-enabled connected vehicles and the governmental response to these concerns. There is no indication of an actual incident or harm caused by AI systems, only a plausible risk being addressed through policy discussions. Therefore, this qualifies as an AI Hazard, as the development and use of AI in autonomous vehicles could plausibly lead to security-related harms in the future.

U.S. expected to propose barring Chinese software in autonomous vehicles

2024-08-05
Automotive News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in autonomous vehicles and connected vehicle software, which are AI systems by definition. The U.S. government's proposed rule aims to prevent potential national security risks associated with these AI systems developed by Chinese entities. Since no actual harm or incident has occurred yet, but the risks are credible and plausible, this constitutes an AI Hazard. The event is not a Complementary Information piece because it is not an update or response to a past incident but a new proposed regulatory action addressing potential future harm. It is not an AI Incident because no harm has been realized or reported.

Understanding the impact of autonomous vehicle regulation on China's automated and autonomous vehicle industry

2024-08-06
autotechinsight.ihsmarkit.com
Why's our monitor labelling this an incident or hazard?
The article describes the development and use of AI systems in autonomous vehicles and related infrastructure, but it does not report any realized harm or incidents caused by these AI systems. Instead, it focuses on the proactive regulatory environment and investments to promote autonomous vehicle technology. There is no indication of direct or indirect harm, nor a specific event where AI malfunction or misuse has led to harm. The content is primarily about the ecosystem and governance context, which fits the definition of Complementary Information rather than an Incident or Hazard.

US prepares to ban Chinese tech in self-driving cars

2024-08-05
TRT World
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in autonomous vehicles and connected vehicle software, which are AI systems by definition. The US government's proposed ban is a preventive regulatory measure addressing potential risks associated with foreign AI software in critical infrastructure (vehicles). Since no actual harm or incident has occurred yet, but there is a plausible risk that the banned software could lead to harm, this qualifies as an AI Hazard. It is not Complementary Information because the main focus is the proposed ban itself, not a response to a past incident. It is not an AI Incident because no harm has materialized.

US expected to propose barring Chinese software in autonomous vehicles

2024-08-05
The Standard
Why's our monitor labelling this an incident or hazard?
The event involves AI systems implicitly, as autonomous vehicles with Level 3 automation rely on AI software for driving and connected vehicle functions. The U.S. government's proposed ban is motivated by concerns that Chinese-developed AI software could be used maliciously or could lead to breaches of data privacy and national security, which are forms of harm to communities and potentially critical infrastructure. Since the harm is not yet realized but is plausible and credible, this event qualifies as an AI Hazard. It is not an AI Incident because no actual harm has been reported yet, and it is not Complementary Information or Unrelated because the focus is on a specific regulatory response to a credible AI-related risk.

US eyes Chinese software ban for autonomous cars

2024-08-05
Taipei Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in autonomous vehicles and connected vehicle software, which are AI-based systems. The US government is proposing a ban due to national security concerns related to data privacy, surveillance, and control risks associated with these AI systems. No actual harm or incident has been reported yet; the focus is on preventing potential future harms. Hence, the event fits the definition of an AI Hazard, as it plausibly could lead to harms such as violations of privacy, unauthorized control, or disruption of critical infrastructure if the AI systems were used maliciously or malfunctioned. The event is not Complementary Information because it is not an update or response to a past incident but a new proposed regulatory action based on risk assessment. It is not Unrelated because it clearly involves AI systems and their potential risks.

US eyes barring China software in vehicles

2024-08-05
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in autonomous and connected vehicles, particularly software enabling Level 3 automation and advanced wireless communications. The US government is proposing to bar such software from Chinese entities due to national security risks, including potential unauthorized data collection and vehicle control. No actual harm is reported yet, but the credible risk of harm to national security and privacy from these AI systems' use justifies classification as an AI Hazard. The event is not an AI Incident because no realized harm has occurred, nor is it Complementary Information or Unrelated, as it directly concerns AI system risks and regulatory responses.

US expected to propose barring Chinese software in autonomous vehicles

2024-08-05
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article discusses a forthcoming regulatory proposal targeting AI systems in autonomous vehicles, specifically banning Chinese-developed software. While no harm has yet occurred, the proposal is motivated by concerns about potential risks to critical infrastructure and national security. Therefore, this event represents an AI Hazard, as the development and use of such AI systems could plausibly lead to harm if not regulated. There is no indication of realized harm or incident at this stage, so it is not an AI Incident. It is more than general AI news or policy commentary, so it is not Complementary Information or Unrelated.

US expected to propose barring Chinese software in autonomous vehicles

2024-08-05
New Delhi Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous vehicles (Level 3 automation and above) and connected vehicle software, which are AI systems by definition. The U.S. government's planned rule is a response to credible national security risks, including potential misuse of AI-enabled vehicle systems for surveillance or control, which could lead to harms such as violations of privacy and national security breaches. Since the article discusses a proposed rule to prevent these risks before harm occurs, it fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or complementary information but a concrete policy response to plausible future harms from AI systems in autonomous vehicles.

US expected to ban Chinese software in autonomous cars

2024-08-05
The Standard
Why's our monitor labelling this an incident or hazard?
The article involves AI systems as it concerns software in autonomous vehicles with Level 3 automation and above, which inherently involves AI for vehicle control and connectivity. However, the event is about a proposed regulatory ban to prevent potential national security risks, not about an actual incident or harm caused by AI systems. Therefore, it is a governance response to a potential AI hazard rather than an incident or hazard itself. It fits the definition of Complementary Information as it provides context on societal and governance responses to AI-related risks.

US Reportedly Moves to Ban China Software in Autonomous Cars

2024-08-05
Transport Topics
Why's our monitor labelling this an incident or hazard?
The article discusses a potential regulatory measure aimed at mitigating national security risks from AI-enabled autonomous vehicle software developed in China. While no incident or harm has been reported, the concern is about plausible future harm from data collection and transmission by AI systems in connected vehicles. This fits the definition of an AI Hazard, as the development and use of AI systems in autonomous vehicles could plausibly lead to harms related to security and privacy. The event is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated, as it directly concerns AI system risks and regulatory responses.

Potential US ban on Chinese vehicle tech to breed longer-term effects on tech rift

2024-08-08
DIGITIMES
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous vehicles (Level 3 and above) and concerns about their software's origin and security. However, no actual harm or incident has occurred yet; the article centers on a proposed regulation aimed at preventing potential national security risks. This fits the definition of an AI Hazard, as the development and use of AI systems in vehicles could plausibly lead to harms related to cybersecurity and geopolitical tensions. There is no indication of an AI Incident or Complementary Information, and the event is clearly related to AI systems, so it is not Unrelated.

US to propose banning Chinese software in autonomous vehicles

2024-08-05
Verdict
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through autonomous vehicle software at Level 3 automation and above, which rely on AI for operation. The US government's proposed ban is motivated by concerns over national security risks, particularly data collection and sensitive information handling by Chinese companies. No actual harm or incident has occurred yet; the event is about a regulatory proposal to mitigate potential future risks. Hence, it fits the definition of an AI Hazard, where the development or use of AI systems could plausibly lead to harm, but no direct or indirect harm has yet materialized.

US Set to Ban Chinese Software in Autonomous Vehicles Over Security Concerns

2024-08-07
News9live
Why's our monitor labelling this an incident or hazard?
The article discusses a regulatory action intended to prevent potential future harms related to the use of AI systems (autonomous vehicle software) from certain foreign companies deemed security risks. No actual harm or incident has been reported; rather, the regulation is a preventive measure addressing plausible future risks. The AI system involvement is clear (autonomous vehicle software with Level 3 automation and above), and the focus is on mitigating national security risks that could arise from such software. Therefore, this event qualifies as an AI Hazard because it concerns plausible future harm from AI system use, not an AI Incident or Complementary Information.

Biden Administration Targets Chinese Tech in Autonomous Cars with Proposed Ban

2024-08-04
quiverquant.com
Why's our monitor labelling this an incident or hazard?
The article discusses a forthcoming rule intended to prevent potential national security risks from the use of Chinese software in autonomous vehicles. While autonomous vehicles with Level 3 automation and above involve AI systems, the event centers on a proposed ban to mitigate plausible future threats rather than an incident where harm has already occurred. Therefore, this qualifies as an AI Hazard because it concerns credible potential harm from AI system use in connected vehicles, but no realized harm or incident is reported.

US expected to propose barring Chinese software in autonomous vehicles

2024-08-05
BusinessWorld
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of advanced autonomous vehicle software and connected vehicle technologies. The event concerns the development and use of such AI systems and the potential for these systems to be exploited or to cause harm related to national security and privacy. Since no actual harm has occurred yet but there is a credible risk that the use of such software could lead to harms such as unauthorized data collection, vehicle control risks, or espionage, this qualifies as an AI Hazard. The event is about a proposed rule to mitigate these plausible future harms, not about an incident where harm has already occurred. Therefore, the classification is AI Hazard.

Lam Sai-hung: Eight autonomous vehicles undergoing road tests; self-driving shuttle bus performs well in trial ride

2024-08-10
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving technology) in active use and testing. However, there is no indication of any injury, disruption, rights violation, property or community harm, or other significant harm caused or occurring. The article focuses on the development and testing progress and the positive performance of the autonomous shuttle bus, without any reported malfunction or harm. Therefore, this is not an AI Incident or AI Hazard but rather a general update on AI system deployment and testing, which fits the definition of Complementary Information.

Hong Kong's driverless technology reaches Level 4; eight autonomous vehicles undergoing road tests

2024-08-10
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving technology at Level 4 automation, which qualifies as AI systems under the definitions. However, it does not describe any realized harm or incident resulting from the use or malfunction of these AI systems. Neither does it highlight any credible risk or hazard that could plausibly lead to harm. Instead, it focuses on progress, testing, and government support, which fits the description of Complementary Information as it provides contextual and developmental updates without reporting an incident or hazard.

Fake license plates running rampant: "AI patrol" uncovers 151 cases in three months

2024-08-10
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used for patrol and automatic recognition of fake license plates. The AI system's use has directly led to the detection and apprehension of individuals using fake plates, a violation of law that poses risks to public safety. Therefore, this event involves the use of an AI system that directly enabled enforcement against violations of applicable law affecting community safety, qualifying it as an AI Incident.

Taoyuan police AI system detects altered license plates, catching 82 violations since May

2024-08-10
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as being developed and deployed by the police to detect forged license plates. The AI system's use has directly led to the identification and apprehension of offenders involved in license plate forgery, which constitutes a violation of law and harms property rights and lawful vehicle owners. Therefore, this qualifies as an AI Incident because the AI system's use directly enabled the detection of unlawful activity and enforcement actions against those violations.

Cracking down on fake license plate crime: Taoyuan police strike precisely with AI patrol system

2024-08-10
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used for automatic recognition and alerting of vehicles with fake or altered license plates. The system's deployment has directly led to increased detection and apprehension of offenders, thereby reducing risks to public safety on the roads. This constitutes an AI Incident because the AI system's use has directly contributed to preventing harm to people by enabling law enforcement to act against dangerous illegal driving practices. The harm addressed relates to injury or harm to persons (road users) and public safety, fitting the definition of an AI Incident.

Mercedes-Benz conducts Level 4 autonomous driving assistance tests in Beijing, the first international automaker approved

2024-08-10
ETtoday車雲
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically Level 4 autonomous driving technology, which fits the definition of an AI system. However, there is no indication that the development or use of these AI systems has led to any injury, disruption, rights violations, or other harms. Nor does the article suggest a plausible risk of harm occurring imminently from these tests. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and updates on AI development and testing activities in the automotive sector without reporting harm or credible risk of harm.

Driverless vehicles: Eight autonomous vehicles on trial at five sites including Fairview Park, Yuen Long; Lam Sai-hung describes performance as good after a trial ride

2024-08-10
TOPick 新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles with level 4 automation) in development and testing phases. However, there is no indication of any realized harm, malfunction, or misuse leading to injury, rights violations, or other harms. The article is primarily informative about the state of autonomous vehicle testing and government support, without reporting any incident or plausible imminent harm. Therefore, it fits the category of Complementary Information, providing context and updates on AI system deployment and ecosystem development.

Driverless minibus to begin trials at Fairview Park within the month

2024-08-10
Ming Pao News (instant news)
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically an autonomous driving system at level 4 automation, which is capable of high automation without human intervention in certain conditions. However, the article does not report any harm or incident resulting from the use or malfunction of this AI system. Instead, it describes ongoing testing and development, as well as future plans and funding. There is no indication of injury, rights violations, property damage, or other harms. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides context and updates on AI system deployment and government support in the autonomous vehicle domain.

Lam Sai-hung: Eight autonomous vehicles under road testing; Fairview Park residents can ride the autonomous shuttle bus this month

2024-08-10
Ming Pao News (instant news)
Why's our monitor labelling this an incident or hazard?
The article discusses the development and testing of AI-based autonomous vehicles and related projects, which involve AI systems. However, there is no indication of any harm, malfunction, or misuse occurring or having occurred. The content is primarily about ongoing development, testing, and government support, which provides context and updates on the AI ecosystem rather than reporting an incident or hazard. Therefore, it fits the definition of Complementary Information.

Lam Sai-hung: Eight autonomous vehicles undergoing road tests; self-driving shuttle bus performs well in trial ride

2024-08-10
news.rthk.hk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous driving technology) in real-world testing, but there is no indication of any injury, rights violation, disruption, or other harm caused or likely to be caused by these AI systems. The article focuses on the development and testing progress, with positive performance noted, and no mention of incidents or hazards. Therefore, this is best classified as Complementary Information, as it provides contextual and developmental updates on AI systems without reporting any harm or risk.

Driverless technology reaches Level 4; eight autonomous vehicles undergoing road tests

2024-08-10
on.cc東網
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically Level 4 autonomous driving technology, which qualifies as AI systems under the definition. However, the event described is the testing and development phase without any reported injury, disruption, rights violation, or other harm. There is also no indication of plausible future harm or risk arising from these tests as described. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual information about AI development and deployment progress, which fits the definition of Complementary Information.

Hozon New Energy Auto sets up Hong Kong R&D centre to advance intelligent driving and smart in-vehicle development

2024-08-09
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The article describes the development and intended use of AI systems in intelligent driving and smart vehicle technologies but does not report any realized harm or incidents resulting from these AI systems. It focuses on the company's plans, technological ambitions, and industry forecasts without mentioning any direct or indirect harm caused by AI system development or use. Therefore, this is a general AI-related news item about AI system development and deployment plans, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Hong Kong testing eight autonomous vehicles; Lam Sai-hung rides self-driving shuttle bus: "performance is good"

2024-08-10
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles with Level 4 automation) in active testing. However, there is no indication of any realized harm or incident caused by these AI systems. The article does not describe any direct or indirect harm, nor does it suggest plausible future harm beyond normal testing activities. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is primarily an update on AI system development and testing, which fits the category of Complementary Information.

Tesla's robotaxi and humanoid robot remain far off, and Musk's grand AI dream is failing to win back investor confidence

2024-08-11
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (autonomous driving and humanoid robots) and their development delays, which are relevant to AI. However, there is no indication that these AI systems have caused any harm or that there is a plausible risk of harm occurring imminently. The focus is on financial performance, investor confidence, and product timeline delays, which are important contextual information but do not constitute an incident or hazard. Hence, the article fits the definition of Complementary Information as it provides supporting data and context about AI system development and market impact without describing a new AI Incident or AI Hazard.

Autonomous vehicle development has never stopped

2024-08-10
news.gov.hk 香港政府新聞網
Why's our monitor labelling this an incident or hazard?
The article discusses the development and testing of AI systems (autonomous vehicles) and their potential benefits but does not describe any realized harm, malfunction, or incidents caused by these AI systems. There is no indication of direct or indirect harm, nor any plausible imminent harm described. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual information about AI system deployment and governance efforts, which fits the definition of Complementary Information.

US plans to ban Chinese software in autonomous vehicles

2024-08-09
seattlechinesepost.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous vehicles and connected car software, specifically software developed by Chinese entities. The U.S. government is acting to prevent potential harms related to national security, privacy, and control over vehicles, which are critical infrastructure. Since the article does not report any realized harm but focuses on preventing plausible future harms, it fits the definition of an AI Hazard. The involvement is in the use and potential misuse of AI systems in autonomous vehicles, and the regulatory response is aimed at mitigating these risks.

US reportedly plans to ban Chinese software in autonomous vehicles

2024-08-05
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (autonomous driving software) and discusses regulatory measures to restrict their use due to concerns about foreign software in critical infrastructure. No actual harm or incident is reported, so it is not an AI Incident. The focus is on preventing potential future harm, making it an AI Hazard. It is not Complementary Information because it is not an update or response to a past incident, nor is it unrelated as it clearly involves AI systems and their governance.

US Commerce Department reportedly to propose banning Chinese software in autonomous vehicles

2024-08-05
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as autonomous vehicles rely on AI software for operation. The proposal is motivated by concerns over national security risks that could plausibly lead to harm if such software were used. However, since the proposal is not yet implemented and no harm has occurred, this constitutes a potential risk rather than an actual incident. Therefore, it fits the definition of an AI Hazard, as it concerns plausible future harm from AI system use in autonomous vehicles.

Restricting Chinese connected-vehicle software: Reuters reports US Commerce Department to announce proposed new rules

2024-08-05
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous and connected vehicles, which are explicitly mentioned. The U.S. government is proposing rules to restrict the use of certain AI software from China due to national security risks, indicating a plausible risk of harm if such software were used. Since no actual harm or incident has occurred yet, and the article focuses on a forthcoming regulatory proposal to mitigate potential risks, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

US crackdown on Chinese-made cars extends further with plan to ban Chinese software in self-driving vehicles

2024-08-05
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving software) and concerns about their use leading to potential security risks (data leakage). Since no harm has yet occurred but there is a credible risk that the use of Chinese AI software in autonomous vehicles could lead to harm (e.g., data breaches affecting citizens and infrastructure), this qualifies as an AI Hazard. The article focuses on a proposed regulatory measure to mitigate this plausible future harm, not on an incident where harm has already occurred.

US plans to ban mainland Chinese software in self-driving cars; China calls for fair competition

2024-08-05
UDN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving software at Level 3 and above) and concerns their use and development. The U.S. government's proposed ban is motivated by national security concerns, indicating a plausible risk that these AI systems could lead to harms if allowed unrestricted use. However, no actual harm or incident has been reported; the article discusses potential risks and regulatory responses. Thus, it fits the definition of an AI Hazard, as it plausibly could lead to harms related to security or operational risks in autonomous vehicles, but no incident has yet occurred.

Reuters: US to propose banning mainland Chinese software in self-driving cars

2024-08-05
UDN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of autonomous driving software and connected vehicle technology, which are AI-enabled systems. However, the article does not describe any realized harm or incident caused by these AI systems. Instead, it focuses on a proposed regulatory measure to prevent potential national security risks, which constitutes a plausible future risk scenario. Therefore, this event fits the definition of an AI Hazard, as it concerns the plausible future harm that could arise from the use of certain AI systems in vehicles, prompting preventive regulatory action.

US Commerce Department reportedly to propose banning Chinese software in autonomous vehicles

2024-08-05
UDN Money (聯合理財網)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of autonomous driving and connected vehicle software, which are AI-enabled technologies. However, the article describes a proposed regulatory measure to mitigate national security risks before any harm has occurred. There is no report of an AI system malfunction, misuse, or harm caused by these systems at present. Therefore, this is best classified as an AI Hazard, as the proposal addresses plausible future harms related to AI systems in vehicles, rather than an AI Incident or Complementary Information.

US plans to ban Chinese software in autonomous vehicles

2024-08-05
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving software and connected vehicle modules) and concerns about their use leading to national security risks, which can be considered harm to critical infrastructure and violation of rights. However, the article discusses a proposed ban and regulatory measures before any actual harm has occurred. Therefore, this is an AI Hazard, as the development and use of these AI systems could plausibly lead to significant harms if unregulated, but no incident has yet materialized.

US proposes new rules for connected vehicles restricting use of Chinese software

2024-08-05
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The article explicitly references autonomous driving software and smart vehicle systems, which are AI systems by definition. The concerns raised relate to potential privacy violations and national security risks from the use of such AI systems, which could plausibly lead to harms such as data breaches or espionage. Since the article focuses on proposed regulations to prevent these risks before harm occurs, it describes a credible potential for harm rather than an actual realized harm. Thus, it fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to incidents involving privacy and security harms in the future.

US plans to ban Chinese software in autonomous vehicles and the testing of Chinese autonomous vehicles in the US

2024-08-05
RFI
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving and connected vehicle software) whose use is proposed to be restricted due to credible national security risks, including data privacy and vehicle control concerns. The article does not report any actual harm or incident caused by these AI systems but focuses on the plausible future harm that could arise if these systems are used unchecked. Hence, it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks and regulatory proposals, not on responses to past incidents or general AI ecosystem updates. It is not unrelated because it clearly involves AI systems and their potential risks.

US plans to ban Chinese software in autonomous vehicles over national security risks

2024-08-05
Hong Kong Economic Times (hket.com)
Why's our monitor labelling this an incident or hazard?
The article discusses a planned regulatory proposal to prohibit Chinese software in advanced autonomous and connected vehicles in the U.S. due to national security risks. The AI systems involved are autonomous driving and connected vehicle software, which qualify as AI systems. No actual harm or incident has occurred yet; the focus is on preventing potential security risks. This fits the definition of an AI Hazard, as it plausibly could lead to harm related to national security if such software were used. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated since it concerns a specific AI-related risk and regulatory response.

US, fearing national security risks, plans to ban Chinese software in autonomous driving

2024-08-05
Ming Pao News (instant news)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions autonomous driving and connected vehicle software developed by Chinese entities, which are AI systems. The US government is proposing a ban due to concerns about national security risks, including surveillance and control capabilities of these AI systems. Since no actual harm or incident has occurred yet, but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risk and proposed regulatory response, not on a past incident or ongoing harm.

US plans to ban Chinese software in autonomous vehicles

2024-08-05
Vision Times (看中国)
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (advanced autonomous driving software) and discusses the U.S. government's intention to restrict their use due to national security concerns, which implies plausible future harm from these AI systems. There is no report of actual harm or incidents caused by these AI systems yet, only the potential for harm. Therefore, this qualifies as an AI Hazard, as the event concerns plausible future risks from the development and use of AI systems in autonomous vehicles, prompting regulatory action to mitigate these risks.

US mulls banning Chinese software in autonomous and connected vehicles

2024-08-05
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous and connected vehicles, specifically software that likely includes AI components for driving and connectivity functions. However, the article describes a proposed ban and regulatory measures to prevent potential risks, not an actual incident or harm caused by AI. Therefore, this is an AI Hazard, as it concerns plausible future harm from AI systems in vehicles using software from certain foreign entities. It is not an AI Incident because no harm has occurred yet, nor is it Complementary Information or Unrelated.