Misuse and Malfunction of Driver Assistance AI Systems Cause Traffic Accidents in Taiwan and China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple incidents in Taiwan and China involved the misuse or malfunction of AI-based driver assistance systems (such as AEB and autopilot), leading to traffic accidents with fatalities, injuries, and property damage. Courts held drivers responsible for misunderstanding system limitations, highlighting the risks of overreliance on in-vehicle AI.[AI generated]

Why's our monitor labelling this an incident or hazard?

The assisted driving system qualifies as an AI system because it provides automated driving assistance. The accident was directly linked to the use of this AI system, as the driver did not pay sufficient attention due to reliance on the system, causing a collision. Although no physical injuries occurred, the collision caused harm to property, which fits the definition of an AI Incident. Therefore, this event is classified as an AI Incident.[AI generated]
AI principles
Safety, Transparency & explainability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers, General public

Harm types
Physical (death), Physical (injury), Economic/Property

Severity
AI incident

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

Video: Crash in the Hsuehshan Tunnel! Driver using assisted driving fails to notice construction vehicle ahead; passenger van hits crash-attenuator truck | yam News

2026-03-04
蕃新聞
Why's our monitor labelling this an incident or hazard?
The assisted driving system qualifies as an AI system because it provides automated driving assistance. The accident was directly linked to the use of this AI system, as the driver did not pay sufficient attention due to reliance on the system, causing a collision. Although no physical injuries occurred, the collision caused harm to property, which fits the definition of an AI Incident. Therefore, this event is classified as an AI Incident.
Bus fitted with ADAS still cannot prevent sudden events... Woman suspected of crossing the lane after alighting is run over and killed | 聯合新聞網

2026-03-02
UDN
Why's our monitor labelling this an incident or hazard?
The bus's ADAS is an AI system designed to assist in driving and collision prevention. The system's failure to detect and react in time to the pedestrian crossing directly contributed to the fatal accident, constituting harm to a person. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction (delayed detection and reaction) directly led to injury and death.
Bus accident: does ADAS have technological blind spots? Public Transportation Office: under "these conditions" the system may be unable to react | 聯合新聞網

2026-03-02
UDN
Why's our monitor labelling this an incident or hazard?
An ADAS is an AI system designed to assist driving by detecting obstacles and preventing collisions. The article indicates that the system may have failed to detect the pedestrian due to blind spots or rapid pedestrian movement, which are limitations of the AI system's sensing and response capabilities. Although the investigation is ongoing, the AI system's malfunction or limitation plausibly contributed to the harm (death of a pedestrian). This constitutes an AI Incident because the AI system's use and potential malfunction have directly or indirectly led to injury or harm to a person.
Buses face many hazards when pulling out; collision-avoidance systems have blind spots | 聯合新聞網

2026-03-02
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ADAS and collision avoidance systems) in a real-world scenario where a harm (death of a pedestrian) has occurred. The AI system's limitations and inability to prevent the accident are directly relevant. Since the AI system's malfunction or limitations have directly contributed to the harm, this qualifies as an AI Incident under the definition of an event where AI use has directly or indirectly led to injury or harm to a person.
"Car vandalized outside the front door" turns out to be an AI forgery; netizens stunned by the tactic, police shake their heads: it already breaks the law

2026-03-02
中時新聞網
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake image (AI-generated forged image) that falsely depicted damage to a vehicle. This use of AI directly led to a legal incident involving false evidence and misinformation, which is a violation of law and thus a breach of obligations intended to protect rights and social order. The event involves the use of AI in a harmful way that has materialized into a legal incident, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Fatal run-over reported at a bus stop; Taipei City adds technology-based enforcement at bus stops | 自由時報電子報

2026-03-02
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of ADAS on buses and technology enforcement systems for traffic violations, which are AI systems used to improve safety. The fatality is a harm event but is not directly caused by malfunction or misuse of these AI systems; rather, the AI systems are part of the response to improve safety. There is no indication that the AI systems caused or contributed to the harm. The article focuses on the implementation of AI-based enforcement and safety measures following the incident, which fits the definition of Complementary Information as it details governance and technical responses to a prior harm event involving traffic safety. There is no new AI Incident or AI Hazard described.
A deadly blind spot? Woman walks around to the front of the bus after alighting and is run over and killed | 自由時報電子報

2026-03-02
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the presence of an AI system (ADAS) designed to detect blind spots and prevent collisions. The system's failure to activate or alert the driver directly contributed to the fatal accident. The harm (death of a person) has occurred and is linked to the malfunction or non-performance of the AI system. Hence, this meets the criteria for an AI Incident because the AI system's malfunction directly led to injury and death.
Bus was already fitted with a blind-spot warning system; whether it functioned remains to be clarified | 自由時報電子報

2026-03-02
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The bus is equipped with an AI-based ADAS that detects the vehicle's surroundings and warns the driver, which qualifies as an AI system. The incident involves the use of this AI system during a fatal accident, where the system's role in preventing the harm is uncertain but relevant. Since the AI system's involvement is linked to a direct harm (death of a person) and the investigation concerns whether the system functioned properly, this qualifies as an AI Incident due to the direct or indirect contribution of the AI system to the harm.
Assisted driving causes trouble again! Crash-attenuator truck struck late at night in the Hsuehshan Tunnel on National Freeway 5; losses exceed NT$1 million and damages may be sought from the at-fault driver | 自由時報電子報

2026-03-04
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an assisted driving system, which is an AI system designed to aid driving tasks. The accident occurred because the driver, relying on this system, did not notice the construction vehicle, leading to a collision and significant property damage. Although no injuries occurred, the damage to the construction vehicle and the car is severe, with costs estimated at over NT$1 million. The AI system's involvement in the use phase and its failure to prevent the accident (or the driver's overreliance on it) directly led to harm to property, fitting the definition of an AI Incident.
DEKRA becomes Taiwan's first testing body accredited by the Ministry of Transportation under VSTD 96 and 97, helping carmakers prepare for the 2028 cybersecurity rules | 產業動態 | 商情 | 經濟日報

2026-03-04
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article discusses the accreditation of a testing body for vehicle cybersecurity and software update management standards, which are related to AI-enabled vehicle systems (e.g., OTA updates, cybersecurity management). However, it does not describe any realized harm or incident caused by AI systems, nor does it describe a specific event that could plausibly lead to harm. Instead, it focuses on regulatory compliance, testing capabilities, and industry preparedness, which are governance and ecosystem developments. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Beware of assisted driving turning into a road killer

2026-03-03
big5.cctv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ADAS with AI-based functions like automatic emergency braking and adaptive cruise control) whose use and misuse have directly led to traffic accidents causing death, injuries, and property damage. The harm is realized and significant, fulfilling the criteria for an AI Incident. The article also discusses the legal and social implications of misunderstanding AI system capabilities, reinforcing the direct link between AI system use and harm. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Car caught between two large trucks, accident while on autopilot: three consecutive crashes and one death at the Kaohsiung end of National Freeway 1 | TVBS新聞網

2026-03-04
TVBS
Why's our monitor labelling this an incident or hazard?
The automatic driving assistance system qualifies as an AI system because it assists vehicle control autonomously. Its malfunction or misuse led to a collision, causing harm to property and potential risk to persons. This meets the criteria for an AI Incident since the AI system's use directly led to harm (collision and vehicle damage). Although the article mentions multiple accidents, only the one involving the AI system is relevant for classification. Therefore, this event is classified as an AI Incident.
SUV on automatic driver assistance "slams into crash-attenuator truck" in the Hsuehshan Tunnel on National Freeway 5; aftermath revealed | TVBS新聞網

2026-03-04
TVBS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an automatic driving assistance system (an AI system) by the driver, which failed to detect or respond appropriately to the construction vehicle ahead, resulting in a collision. Although no injuries occurred, the damage to the vehicle qualifies as harm to property. Therefore, this event meets the criteria for an AI Incident due to the AI system's malfunction or misuse directly causing harm.
Ford CEO says software is the biggest challenge facing cars today

2026-03-03
Gamereactor China
Why's our monitor labelling this an incident or hazard?
While the article mentions autonomous driving systems, which likely involve AI, it does not report any incident or hazard related to AI causing or potentially causing harm. The content is more about industry perspective and strategic challenges rather than a specific AI Incident or AI Hazard. Therefore, it is best classified as Complementary Information, providing context on AI's role in automotive software development and future challenges.
Fatal accidents keep happening! First-person view reveals large vehicles' "deadly blind spots" | 生活 | 三立新聞網 SETN.COM

2026-03-02
三立新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-related driver assistance systems (360-degree cameras, blind spot detection, advanced driver assistance systems) installed in large vehicles, which qualify as AI systems. However, the article does not describe a particular event where these AI systems directly or indirectly caused harm, nor does it describe a plausible future harm scenario beyond the known limitations of these systems. The focus is on explaining the existing problem of blind spots and the need for caution by both drivers and pedestrians. Therefore, this is best classified as Complementary Information, providing context and understanding about AI system use and limitations in real-world settings, rather than reporting a new AI Incident or AI Hazard.
Musk really is the "Trump of the auto industry"! He flatly predicts carmakers that do not follow Tesla's lead will eventually be eliminated | 地球黃金線

2026-03-03
地球黃金線
Why's our monitor labelling this an incident or hazard?
The article mentions AI and autonomous driving as part of Tesla's future focus, implying the use of AI systems. However, it does not report any realized harm, incident, or credible risk of harm caused by AI systems. It is an opinion piece and industry analysis rather than a report of an AI Incident or AI Hazard. Therefore, it fits best as Complementary Information, providing context and insight into AI's role in the automotive sector's evolution.
Cutting labor needs! Japanese firms develop self-driving logistics technology to tackle worker shortages - 民視新聞網

2026-03-02
民視新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-enabled autonomous driving technology used in logistics vehicles, which qualifies as AI systems. The event is about development and testing (use) of these AI systems, with no current harm reported but clear potential for future harm (e.g., accidents, disruption). Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm in the future. There is no indication of realized harm or incident yet, so it is not an AI Incident. It is more than general AI news or complementary information because it focuses on the plausible future risks of deploying these systems.
Driver using assisted driving fails to watch the road ahead; another collision with a crash-attenuator truck on National Freeway 3! | 聯合新聞網

2026-03-05
UDN
Why's our monitor labelling this an incident or hazard?
The assisted driving system qualifies as an AI system because it provides automated driving assistance. The accident resulted in actual physical harm to a person and damage to property, fulfilling the criteria for harm. The AI system's involvement was indirect, as the driver's reliance on the system and lack of attention caused the incident. Therefore, this event meets the definition of an AI Incident due to the realized harm linked to the AI system's use.
Assisted driving causes trouble! Late-night rear-end collision with a construction crash-attenuator truck in the Hsuehshan Tunnel; replacement cost may exceed NT$1 million | 聯合新聞網

2026-03-04
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an 'assisted driving system' which is an AI system designed to aid driving but not fully autonomous. The driver relied on this system and failed to notice the stationary construction vehicle, leading to a collision causing property damage. This fits the definition of an AI Incident because the AI system's use indirectly led to harm (property damage). There is no indication that this is merely a potential risk or a response to a past incident, so it is not an AI Hazard or Complementary Information. Therefore, the event is classified as an AI Incident.
Driver on assisted driving fails to watch ahead; commercial van hits construction crash-attenuator truck on the freeway | 自由時報電子報

2026-03-05
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an assisted driving system, which is an AI system designed to aid driving tasks. The accident caused injury to the driver and vehicle damage, fulfilling the harm criteria. The police investigation considers the use of the assisted driving system and driver distraction as contributing factors. Hence, the AI system's involvement in the use phase led to realized harm, classifying this as an AI Incident.
EV tax incentives spark controversy; experts warn Chinese cars pose security risks | 大紀元 The Epoch Times

2026-03-05
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems embedded in Chinese electric vehicles' software that could be used maliciously to remotely control or surveil the vehicles, posing a credible security risk. Since no actual harm has been reported but a credible risk of harm is identified, this fits the definition of an AI Hazard. The article does not describe any realized injury, rights violation, or disruption caused by the AI systems, only a warning about potential future harm.
[Middle East conflict] Chinese firm Neolix (新石器) suspends driverless vehicle operations in Abu Dhabi, UAE

2026-03-05
ET Net
Why's our monitor labelling this an incident or hazard?
The autonomous vehicles are AI systems involved in delivery operations. The suspension is due to regional conflict and safety concerns, indicating a credible risk that continued operation could lead to harm (e.g., injury, asset damage). Since no harm has yet occurred but the risk is credible and the suspension is a preventive measure, this fits the definition of an AI Hazard. There is no indication of an actual incident or realized harm, nor is the article primarily about governance or research responses, so it is not Complementary Information. It is directly related to AI systems and plausible future harm, so it is not Unrelated.
Exclusive video: Former employee alleges "the data was made up with GPT"; UBike company issues four-point statement denying fabrication | TVBS新聞網

2026-03-05
TVBS
Why's our monitor labelling this an incident or hazard?
An AI system (GPT) is explicitly mentioned as having allegedly been used to fabricate data. The claimed misuse of AI to generate false personnel attendance and work-hour records, submitted under a government environmental subsidy program, would constitute a violation of obligations under applicable law and could harm public trust and the enforcement of environmental policy. Although the company disputes that vehicle data was fabricated and states the data is still under review, the allegation that AI was used to generate false personnel records describes a direct misuse of AI leading to harm. This fits the definition of an AI Incident because the AI system's misuse has directly led to a breach of obligations and potential harm to the integrity of the environmental program.