Fatal Crash Involving NIO ES8's Assisted Driving System in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Chinese entrepreneur Lin Wenqin died in a traffic accident while driving a NIO ES8 with the Navigate on Pilot (NOP) assisted driving system active on a Fujian highway. The incident has raised concerns about the safety of AI-assisted driving features, with investigations ongoing and NIO stating the system is not fully autonomous.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that the driver was using the NIO Pilot autonomous driving system when the fatal crash occurred. Autonomous driving systems are AI systems as they infer from sensor inputs to generate driving decisions. The accident caused direct harm (death) to a person, fulfilling the criteria for an AI Incident. Although the investigation is ongoing, the AI system's involvement in the use phase and the resulting harm are clear. Hence, this event is classified as an AI Incident.[AI generated]
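The classification criteria described above amount to a simple decision rule: an AI system must be involved, and the label depends on whether harm is realized or only plausible. A minimal sketch of that rule follows; the field names and schema are hypothetical illustrations, not the monitor's actual implementation.

```python
# Hypothetical sketch of the monitor's classification rule.
# Field names ("ai_system_involved", "harm_realised", "harm_plausible")
# are assumptions for illustration, not the real schema.

def classify(event: dict) -> str:
    """Label a monitored event per the stated criteria."""
    if not event.get("ai_system_involved"):
        return "Complementary Information"
    if event.get("harm_realised"):       # e.g. injury or death has occurred
        return "AI Incident"
    if event.get("harm_plausible"):      # only potential future harm
        return "AI Hazard"
    return "Complementary Information"

# The NIO ES8 crash: an AI system (NOP) was in use and harm (death) occurred.
print(classify({"ai_system_involved": True, "harm_realised": True}))
# AI Incident
```

Note how a regulatory-response article with no new harm event falls through to "Complementary Information", matching several classifications further down this page.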
AI principles
Safety
Robustness & digital security
Accountability
Transparency & explainability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers

Harm types
Physical (death)

Severity
AI incident

Business function:
Other

AI system task:
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard

Did China's NIO self-driving system cause the accident? 31-year-old restaurant-industry rising star killed at a gruesome scene

2021-08-15
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the driver was using the NIO Pilot autonomous driving system when the fatal crash occurred. Autonomous driving systems are AI systems as they infer from sensor inputs to generate driving decisions. The accident caused direct harm (death) to a person, fulfilling the criteria for an AI Incident. Although the investigation is ongoing, the AI system's involvement in the use phase and the resulting harm are clear. Hence, this event is classified as an AI Incident.
Two NIO crashes within 15 days; 31-year-old restaurant-industry newcomer dies

2021-08-15
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system, the NIO Pilot autonomous driving system, which was activated at the time of the fatal accident. The death of the driver and severe vehicle damage constitute direct harm to persons and property. Although investigations are ongoing, the AI system's involvement is central to the incident, fulfilling the criteria for an AI Incident. The article does not merely speculate about potential harm but reports actual accidents with realized harm linked to the AI system's use.
Another fatal accident reported for mainland China's NIO ES8; autonomous driving regulation comes to the fore

2021-08-15
UDN Money (聯合理財網)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-based assisted driving system (NOP) during the fatal accident. Assisted driving systems like NOP rely on AI-based perception and decision-making, running on hardware such as Intel's Mobileye EyeQ4 chip. The death of the driver is a direct harm linked to the use of this AI system. Although the investigation is ongoing, the event meets the criteria for an AI Incident because the AI system's use has directly led to harm (death).
31-year-old entrepreneur dies in rear-end collision while driving a NIO electric vehicle; the autonomous driving feature had been activated before the accident

2021-08-15
香港01
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an autonomous driving assistance system (NOP) that was active during the accident. The accident caused fatal injury to a person, which fits the definition of harm to health. The AI system's use is directly linked to the incident, as the driver was using the AI-assisted driving feature when the collision occurred. Although the manufacturer states that NOP is not a fully autonomous driving system, it is an AI system providing automated driving assistance, and its malfunction or limitations plausibly contributed to the crash. Therefore, this event meets the criteria for an AI Incident.
Autonomous driving claims two lives in two weeks in mainland China; 31-year-old entrepreneur dies in crash with Navigate on Pilot engaged

2021-08-15
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NIO Pilot assisted driving system) that was active at the time of a fatal traffic accident, leading to the death of the driver. This meets the criteria for an AI Incident because the AI system's use directly led to injury or harm to a person. The article explicitly mentions the activation of the assisted driving feature and the resulting fatality, which is a clear harm. The ongoing investigation does not negate the fact that harm occurred with the AI system's involvement. Hence, the classification as AI Incident is appropriate.
NIO car with Navigate on Pilot in fatal accident; 31-year-old mainland restaurant-industry up-and-comer killed

2021-08-15
香港經濟日報 hket.com
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI system providing driver assistance. The accident occurred while the system was active, leading to a fatality, which is direct harm to a person. This meets the criteria for an AI Incident because the AI system's use directly led to injury or death. Although the system is not fully autonomous, its malfunction or limitations contributed to the harm. The article also references previous similar incidents involving NIO vehicles, reinforcing the pattern of harm linked to AI-assisted driving systems. Hence, the event is classified as an AI Incident.
Young talent dies in rear-end collision while driving a NIO electric vehicle; the self-driving feature had been switched on before the accident

2021-08-15
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an advanced driver-assistance system with autonomous navigation features. The fatal accident occurred while this system was active, indicating the AI system's use is directly connected to the harm (death of the driver). Although the exact cause is under investigation, the AI system's involvement in the incident is clear and the harm has materialized. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to injury and death.

2021-08-15
hkcd.com
Why's our monitor labelling this an incident or hazard?
The involvement of an AI system (the autonomous driving system) is explicit, and its use directly led to a fatal injury, fulfilling the criteria for an AI Incident. The regulatory guidance mentioned is a response to such incidents and does not itself constitute a new incident or hazard. Therefore, the primary classification is AI Incident due to the realized harm caused by the AI system's use in autonomous driving.
Fatal crash during NIO automatic assisted driving; official response: the accident occurred on a flat, relatively straight stretch of road

2021-08-18
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system explicitly mentioned as the automatic assisted driving feature of the NIO vehicle. The use of this AI system directly led to a fatal injury, fulfilling the criteria for an AI Incident due to harm to a person caused by the AI system's use or malfunction. Therefore, this qualifies as an AI Incident.
Fatal crash during NIO automatic assisted driving! Official response: the accident occurred on a flat, relatively straight stretch of road

2021-08-18
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NIO's automatic assisted driving) whose use directly led to a fatal accident, fulfilling the criteria for an AI Incident. The harm (death) is realized and directly linked to the AI system's operation during the crash. Although the investigation is ongoing, the AI system's involvement in the fatal outcome is clear from the description.
Meiyihao founder Lin Wenqin dies in an accident while driving a NIO ES8

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The NOP feature is an AI system providing automated navigation and driving assistance. The accident occurred while this AI system was active, and it directly resulted in the driver's death. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person. The report does not describe a plausible future harm or a hazard scenario but an actual, realized harm. Therefore, the event is classified as an AI Incident.
Automakers must not exaggerate "assisted driving" into "autonomous driving"

2021-08-15
china.org.cn / china.com.cn (中国网)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the assisted driving system, NOP) that was active during a fatal accident, indicating AI system use. The harm (death) has occurred, and the AI system's involvement is indirect but pivotal, as the system's limitations and misleading marketing contributed to driver overreliance and the accident. The article explicitly discusses the risks and consequences of misrepresenting assisted driving as autonomous driving, which has led to real harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
31-year-old entrepreneur dies in NIO crash; driving data disclosed

2021-08-16
人民网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's autonomous driving features (Level 2/3 driver assistance) during the fatal crash. The AI system was active and controlling the vehicle to some extent, which directly relates to the harm caused (death of the driver). The event involves the use and possible malfunction or limitation of the AI system, leading to a fatal injury, fitting the definition of an AI Incident. The article also discusses the need for further data to analyze the cause, but the harm has already occurred and is linked to the AI system's operation.
EV startups should drop the hype from their consumer marketing

2021-08-18
人民网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of advanced driver-assistance systems (NOP by NIO, NOA by Tesla) that are involved in fatal traffic accidents. These systems are AI-based and their malfunction or misuse has directly or indirectly led to injury and death, fulfilling the criteria for an AI Incident. The discussion about overhyped marketing and consumer misunderstanding further supports the link between AI system use and harm. The article also references regulatory responses aimed at mitigating these harms, but the primary focus is on the realized harm from AI system use, not just potential future harm or complementary information.
From Tesla to NIO: controllable autonomous driving or an unbridled wild horse?

2021-08-16
中关村在线
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (NOP and Autopilot) used in vehicles that have been involved in accidents causing injury and death. The AI systems' malfunction or limitations (e.g., inability to stop when a vehicle ahead is stationary) directly contributed to these harms. The involvement of AI in the development and use phases, and the resulting fatalities and accidents, meet the criteria for an AI Incident. The article also highlights the need for regulatory responses but focuses primarily on realized harms rather than potential future risks, confirming the classification as an AI Incident rather than a hazard or complementary information.
Owner dies in crash; NIO's "assisted driving" becomes the focus

2021-08-16
中关村在线
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system explicitly described as an 'automatic driving assistance' feature that was active during the accident. The failure of this AI system to detect and respond appropriately to a highway maintenance vehicle directly led to a fatal crash, causing injury and death. This meets the definition of an AI Incident because the AI system's malfunction directly caused harm to a person. The event is not merely a potential hazard or complementary information but a realized harm involving AI.
US regulators step up scrutiny of crashes involving driver-assistance systems

2021-08-16
The Wall Street Journal - China
Why's our monitor labelling this an incident or hazard?
The article does not report a specific AI Incident (no particular accident or harm event is detailed), nor does it describe a plausible future harm event directly. Instead, it reports on regulatory actions responding to existing concerns and incidents involving AI driving systems. This fits the definition of Complementary Information, as it provides governance and societal response context to AI-related safety issues without describing a new incident or hazard itself.
Li Auto CEO calls for unified autonomous driving terminology to avoid misunderstandings caused by exaggerated marketing

2021-08-18
chinaz.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (L2 driver assistance) actively used during a fatal accident, leading to the death of a person, which is a clear harm to health. The AI system's malfunction or limitations contributed directly or indirectly to the harm. The article also highlights the confusion around AI system capabilities, which is relevant but secondary to the incident itself. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
NIO staff summoned for accessing the accident vehicle without authorization: they touched the vehicle involved without consent

2021-08-18
chinaz.com
Why's our monitor labelling this an incident or hazard?
The autonomous driving feature of the NIO vehicle is an AI system involved in the accident that resulted in a fatality, fulfilling the criteria for harm to a person. The unauthorized access and possible tampering with vehicle data by NIO technical staff further implicate the AI system's role in the incident and legal responsibilities. The event describes realized harm and direct involvement of an AI system, making it an AI Incident rather than a hazard or complementary information.
Founder of mainland restaurant chain dies at 31 in crash while driving a NIO with autonomous driving engaged

2021-08-15
UDN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the autonomous driving feature (NOP) of the NIO ES8 vehicle. The use of this AI system directly led to a fatal traffic accident, causing harm to a person, which fits the definition of an AI Incident. The article discusses the accident's circumstances, the company's response, and the ongoing investigations, confirming the AI system's role in the harm. Therefore, this event is classified as an AI Incident.
Three accidents in eight months: has the Shenhai Expressway become NIO's "curse"?

2021-08-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's NOP assisted driving AI system during the accidents, including a fatal crash where the system failed to detect a maintenance vehicle, leading to death. The AI system's malfunction or limitations are directly linked to the harm (injury and death). Multiple accidents on the same highway involving the same AI system reinforce the causal link. The harms are realized and significant (fatality and serious injuries). This meets the definition of an AI Incident, as the AI system's use and malfunction have directly led to injury and death (harm to persons). The ongoing investigation does not negate the current evidence of harm and AI involvement. Hence, the classification is AI Incident.
Driver-assistance system implicated in two traffic accidents; NIO shares drop nearly 6%

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI system providing driver assistance. The accidents caused injury and death, which are direct harms linked to the use of this AI system. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use and malfunction.
Behind NIO's fatal wreck: automakers ignore the limits of assisted driving

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an assisted driving system (NOP) that was in use at the time of a fatal accident. The harm (death of a person) directly resulted from the use and misunderstanding of this AI system's capabilities. The article details how the AI system's limitations were not adequately communicated, leading to misuse and overreliance, which is a direct causal factor in the harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to injury and death.
Li Auto founder calls for unified Chinese terminology for autonomous driving

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The assisted driving systems mentioned (L2 and NOP) are AI systems that infer from sensor inputs to control or assist vehicle navigation. The misuse (lying down while driving) and the fatal accident linked to the use of such systems indicate direct harm caused by the AI system's use or malfunction. Therefore, this qualifies as an AI Incident due to injury and harm to persons resulting from the AI system's use and misuse.
After NIO's fatal accident, may the "L2.5" labels die with it

2021-08-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's NOP driver assistance system) that malfunctioned by failing to detect an obstacle, directly causing a fatal collision. The system is an AI-based driver assistance system (L2 level) that requires driver supervision, but overreliance and misleading marketing contributed to the harm. The harm is realized (death), and the AI system's malfunction is a direct contributing factor. The article also discusses regulatory and responsibility issues but the core is the AI system's failure causing injury and death, fitting the definition of an AI Incident.
Well-known brand founder dies while driving a NIO ES8; Li Bin expresses condolences

2021-08-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The NIO Pilot system is an AI-based automated driving assistance system that was in use during the fatal crash. The system's involvement in the vehicle's operation directly relates to the incident causing death, which is a harm to a person. Although the system is described as an assistance rather than full autonomy, the AI system's outputs and operation contributed to the circumstances of the accident. The article also notes concerns about user misunderstanding of the system's capabilities, which is relevant to the AI system's role in the harm. Therefore, this event meets the criteria for an AI Incident as the AI system's use directly led to injury and death.
NIO in trouble again! Who ultimately bears the blame for assisted driving?

2021-08-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—NIO's assisted driving (NOP) feature—which is an AI system designed to assist driving but not fully autonomous. It discusses a recent accident linked to the use of this system, indicating realized harm related to road safety. The AI system's use directly contributed to the incident, fulfilling the criteria for an AI Incident. The article also discusses broader implications and responsibility issues but the core event is a realized harm caused by the AI system's use. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Fatal ES8 crash during automatic assisted driving; NIO: NOP is not autonomous driving, investigation underway

2021-08-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI system providing driver assistance but not full autonomy. The fatal crash occurred while the system was active, indicating the AI system's involvement in the event. The harm (death and severe injury) has occurred, fulfilling the criteria for an AI Incident. The manufacturer's clarification that NOP is not autonomous and requires driver attention does not negate the AI system's role in the incident. Therefore, this event qualifies as an AI Incident due to the direct or indirect role of the AI system in causing harm.
NIO owners issue joint statement on their understanding of the NOP system; more than 500 owners have signed

2021-08-18
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (NP/NOP driver-assistance systems) and their use, but it does not report any realized harm or plausible future harm caused by these systems. Instead, it focuses on users' understanding and public communication to correct misconceptions. This fits the definition of Complementary Information, as it provides contextual and societal response information related to AI systems without describing a new incident or hazard.
The ongoing standoff between the victim's side and NIO is damaging NIO's image as a user-oriented company

2021-08-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves a fatal car crash caused while using NIO's assisted driving system, which is an AI system by definition. The harm (death of a person) has occurred and is directly linked to the AI system's use. The article also discusses the company's response and user reactions, but the core event is the accident caused by the AI system's operation. This fits the definition of an AI Incident because the AI system's use directly led to injury or harm to a person.
On and off stage of NIO's "autonomous driving death": both the automaker and owners were too optimistic about an immature, fatally tempting technology

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI system as it uses sensor fusion and AI algorithms for assisted driving. The accident directly resulted in a fatality, fulfilling the harm criterion. The AI system's limitations in obstacle detection and the activation of the system at the time of the crash indicate the AI system's involvement in the harm. The event also discusses the misuse or overreliance on the system by the driver and the manufacturer's role in communication and safety warnings. Therefore, this is an AI Incident due to direct harm caused by the AI system's use and limitations.
NIO ES8 automatic assisted driving crash: have netizens forgotten the overturned maintenance vehicle at the scene?

2021-08-18
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system, specifically an assisted driving feature (NOP intelligent pilot system) in the NIO ES8 vehicle. The system is described as a driver assistance tool, not full autonomous driving, but it was active at the time of a fatal crash. The crash caused injury and death (harm to a person), fulfilling the harm criteria for an AI Incident. The AI system's involvement is direct as it was engaged during the accident, and the article discusses the system's limitations and the need for driver oversight. Although other factors like the highway maintenance vehicle's behavior contributed, the AI system's use is a contributing factor to the harm. Hence, this event meets the definition of an AI Incident.
Fatal NIO automatic assisted driving crash; former PR director sharply mocks Li Xiang

2021-08-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (assisted driving/autonomous driving technology) that has directly led to a fatal accident, which constitutes injury or harm to a person. The article explicitly references the crash caused by NIO's assisted driving system, thus meeting the criteria for an AI Incident. The discussion about terminology standardization and public reactions are complementary but the core event is the fatal crash caused by the AI system's use.
NIO's biggest crisis: two fatal accidents in quick succession, and multiple owners say they crashed while using automatic assistance

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as automatic driving assistance features (NOP and NP) that influence vehicle control and navigation. The accidents, including a fatality, occurred while these AI systems were active, and the system's failure to detect obstacles or the drivers' overreliance on the system contributed to the crashes. This directly led to harm to persons, fulfilling the criteria for an AI Incident. The article also discusses the broader implications for safety, insurance, and public trust, reinforcing the significance of the harm caused by the AI system's use and malfunction.
Fatal NIO automatic assisted driving accident; "steering-wheel counterweight rings" still on sale online

2021-08-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The intelligent driver-assistance systems qualify as AI systems because they perform autonomous driving tasks such as automatic following, lane keeping, and lane changing, relying on sensor data and AI algorithms. The event involves the misuse of these AI systems by using counterweight rings to deceive the system into believing the driver is holding the wheel, enabling dangerous hands-free driving. This misuse has directly led to fatal accidents, fulfilling the criteria for harm to persons. The event also highlights ongoing sales of these devices despite legal prohibitions and safety risks, emphasizing the direct and significant harm caused by the AI system's misuse. Hence, the event is classified as an AI Incident.
Hands-free driving is still far off; beware of autonomous driving becoming a "road killer"

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of automated driving assistance (e.g., NOP, Autopilot) that were active during fatal or serious accidents. The harms include death and injury, fulfilling the criteria of harm to persons. The AI systems' malfunction or misuse is a direct contributing factor to these harms. The article also discusses systemic issues such as overmarketing and lack of user education, which contribute indirectly to harm. Hence, the event is an AI Incident rather than a hazard or complementary information.
Frequent "autonomous driving" accidents leave NIO mired in the same predicament as Tesla

2021-08-18
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions accidents caused by autonomous driving AI systems, including failures to detect obstacles and loss of control, leading to crashes and injuries. It also discusses the misleading marketing that causes users to over-rely on these systems, increasing risk. These factors demonstrate direct and indirect harm to human health and safety caused by AI system use and malfunction. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Lawyer representing the Lin crash case: police summoned NIO staff for accessing the accident vehicle without authorization

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving an AI system (NIO's autonomous driving feature). The AI system was in use at the time of the accident, directly leading to harm (death). The unauthorized access by NIO personnel to the vehicle and potential data tampering further implicates the AI system's role in the incident and legal responsibility. This fits the definition of an AI Incident because the AI system's use directly led to injury or harm to a person, and the investigation concerns the AI system's data integrity and liability.
Behind NIO's fatal L2 accident: why can't these systems avoid stationary vehicles?

2021-08-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (NIO Pilot/NOP, an L2 autonomous driving assistance system) in the accident. The system's failure to detect a slow or stationary vehicle directly caused the collision and fatal injury, which is a clear harm to a person. The AI system's malfunction (inability to recognize the obstacle) is central to the incident. Therefore, this event meets the definition of an AI Incident because the AI system's use and malfunction directly led to harm.
NIO ES8 with autonomous driving engaged crashes; Meiyihao founder Lin Wenqin dies

2021-08-14
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically the automated driving feature of the NIO ES8. The use of this AI system directly led to a fatal accident, causing injury and death, which fits the definition of an AI Incident. The article describes realized harm (death) linked to the AI system's use, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Who is responsible for the 31-year-old entrepreneur's death in the crash? Owner's friend: we hope to obtain the driving data soon

2021-08-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system: NOP, the advanced driver-assistance feature of NIO Pilot. The system was active during the accident, which resulted in a fatality, thus causing harm to a person. The incident is directly linked to the use of the AI system, including concerns about its capabilities, limitations, and the driver's reliance on it. The family's demand for data to determine whether the AI system or driver actions contributed to the crash further confirms the AI system's involvement. Hence, this is an AI Incident, as the AI system's use has directly led to harm.
Automakers must not exaggerate "assisted driving" into "autonomous driving"

2021-08-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of assisted driving technologies (Level 2 driver assistance systems) that are being misrepresented as fully autonomous driving systems. This misrepresentation leads to indirect harm by causing drivers to overtrust the system and reduce vigilance, which can result in accidents and harm to life and property. Although no specific accident is detailed, the article references ongoing investigations and the potential for harm due to misuse and misunderstanding of AI capabilities. Therefore, this situation constitutes an AI Incident because the AI system's use and the associated misunderstanding have directly or indirectly led to safety risks and potential harm. The article also discusses regulatory responses, but the primary focus is on the harm and risk caused by the AI system's use and misrepresentation.
Lin Wenqin's relatives: appraisal and data extraction of the NIO accident vehicle will begin

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The autonomous driving system qualifies as an AI system as it infers from input to generate driving decisions. The accident involving the NIO ES8 and the subsequent investigation involving data extraction from the AI system's event and autonomous driving data recorders directly relates to harm (a car accident causing injury or death). The article focuses on the use of AI system data to clarify the incident and assign responsibility, indicating the AI system's role in the harm. Therefore, this is an AI Incident.
Unpacking the NIO crash: assisted driving should make driving easier, not make drivers let their guard down

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NIO Pilot's NOP assisted driving system, which is a Level 2 automated driving aid relying on AI for navigation and lane changes. The fatal accident was caused by the system's failure to detect a slow-moving road vehicle and the driver's overreliance on the system, leading to a rear-end collision and death. This constitutes direct harm to a person caused by the use and malfunction of an AI system. The article also discusses the broader context of AI-assisted driving risks and regulatory responses, but the core event is a realized harm directly linked to AI system use and malfunction, fitting the definition of an AI Incident.
Shen Hui on autonomous driving: at L2 responsibility lies with the driver; at L4 and above it lies with the automaker

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article references an AI system (autonomous driving functions at L2 and L4 levels) and a past incident involving an L2-level system. However, it does not report a new incident or hazard but rather discusses responsibility and safety considerations. This fits the definition of Complementary Information, as it provides context and updates related to AI incidents and hazards without describing a new harm or plausible future harm event.
NIO responds on technicians accessing the vehicle involved: no data was deleted or altered, and employees were not summoned by police

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
Although the vehicle involved likely contains AI systems (e.g., battery management, autonomous or assisted driving features), the article focuses on the investigation and data handling process without indicating that AI system malfunction or misuse caused the accident or harm. There is no indication of AI-related harm or risk beyond the accident itself, which is under investigation. The company's statement about no data tampering and cooperation with authorities is an update on the ongoing situation, not a new incident or hazard. Therefore, this is Complementary Information providing context and updates on an existing situation involving AI systems, but not reporting a new AI Incident or AI Hazard.
Putian traffic police issue notice on NIO autonomous driving incident: investigation under way, responsibility to be determined according to law - Latest News - cnBeta.COM

2021-08-18
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (NIO autonomous driving) that has been involved in a traffic accident, which is a direct or indirect harm to persons or property. The police investigation and responsibility determination relate to the AI system's use and potential malfunction or misuse. Therefore, this qualifies as an AI Incident because harm has occurred or is being investigated as a result of the AI system's involvement.
A NIO owner rejects the "NIO owners' statement": does not want others speaking for him - People - cnBeta.COM

2021-08-18
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (NIO's autonomous driving feature) during a traffic accident that caused death and injury, which is a direct harm to persons. The AI system's use is explicitly mentioned and is central to the incident. Although the exact cause is still under investigation, the AI system's involvement in the fatal crash meets the criteria for an AI Incident as per the definitions provided. The article also discusses related social reactions but the primary focus is the accident involving AI use leading to harm.
Three questions on "autonomous driving": excessive marketing? Safety boundaries? Owner education? - IT & Transport - cnBeta.COM

2021-08-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (L2 assisted driving systems with AI-based perception and control). The use and misuse of these systems have directly led to harm, including a fatal accident. The article details how misleading marketing and insufficient user education contribute to misuse and overreliance, which are indirect causes of harm. Therefore, this qualifies as an AI Incident because the AI system's use and the surrounding practices have directly and indirectly caused harm to people. The discussion about safety boundaries and user education further supports the classification as an incident rather than a mere hazard or complementary information.
Aiways founder on the NIO accident: even Tesla has not reached autonomous driving; opposes excessive marketing - IT & Transport - cnBeta.COM

2021-08-18
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article references a fatal accident involving a NIO vehicle with assisted driving features, which is an AI-related incident previously known. The founder's comments focus on the industry's response, the current limitations of autonomous driving, and the need for improved safety and regulation. Since the article mainly provides opinions and reflections on the incident and the state of AI driving technology without reporting a new incident or hazard, it fits the definition of Complementary Information. It enhances understanding of the AI ecosystem and responses but does not itself describe a new AI Incident or AI Hazard.
Assisted-driving accident kills owner; a NIO vice president had previously demonstrated eating in the car while it drove - IT & Transport - cnBeta.COM

2021-08-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (automatic driver assistance) whose use directly led to a fatal accident, causing injury and death (harm to a person). The system's malfunction or misuse is central to the incident. The article explicitly connects the AI system to the harm and discusses the risks of overreliance on such systems. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.
Do ride-hailing drivers have only five years left in their "careers"? - IT & Transport - cnBeta.COM

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an advanced driver assistance system (NOP) that controls vehicle navigation and driving functions. The fatal accident occurred while the system was engaged, directly leading to the death of a person, which is a clear harm to health. The article also mentions previous accidents involving the same system, reinforcing the link between the AI system's use and harm. Although the system does not provide fully autonomous driving, it still qualifies as an AI system under the definition because it infers from inputs to generate driving outputs that influence the physical environment. Hence, this is an AI Incident due to direct harm caused by the AI system's malfunction or failure.
Lin Wenqin repeatedly asked about "autonomous driving" before purchase; it was one of his reasons for buying a NIO - People - cnBeta.COM

2021-08-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event describes a fatal accident involving a vehicle equipped with an AI-based driver assistance system. The deceased had inquired about 'automatic driving' before purchase, indicating reliance on the AI system's capabilities. The manufacturer clarifies that the system is not full autonomous driving but a navigation assistance feature. The accident caused death, a direct harm to a person, and the AI system's involvement is central to the incident. Hence, it meets the criteria for an AI Incident due to the direct harm caused linked to the AI system's use.
Media put NIO's AEB automatic assisted driving to the test: the results leave much to be desired - IT & Transport - cnBeta.COM

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The AEB system is an AI system designed to detect obstacles and automatically brake to prevent collisions. The article reports on a fatal accident involving NIO's assisted driving system and test results demonstrating the system's failure to prevent collisions, which directly relates to harm to persons and property. The AI system's malfunction or limitations have directly led to injury and death, fulfilling the criteria for an AI Incident. The discussion of the system's technical limitations and warnings about the need for driver vigilance further support the conclusion that the AI system's use has caused harm, not just a potential hazard or complementary information.
Reflections prompted by the NIO incident: artificial intelligence should not mean "unmanned" intelligence - IT & Transport - cnBeta.COM

2021-08-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving technology) whose use has directly led to a fatal accident, constituting harm to a person. The article explicitly links the AI system's operation (or potential malfunction) to the incident and discusses similar past incidents, indicating realized harm. Therefore, this qualifies as an AI Incident. The article also provides broader reflections and governance considerations, but the primary focus is on the fatal accident caused by AI system use, which meets the criteria for an AI Incident rather than a hazard or complementary information.
NIO highway assisted-driving death: an EC6 owner reveals his own terrifying close call - IT & Transport - cnBeta.COM

2021-08-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an assisted driving AI feature (NOP navigation assist) that malfunctioned by failing to detect a hazard and not slowing down, directly creating a dangerous situation. The driver had to intervene to prevent harm, indicating the AI system's failure to act as intended. Since no actual harm occurred but a credible risk of serious injury or death was narrowly avoided, this qualifies as an AI Hazard under the framework, as the AI system's malfunction could plausibly lead to an AI Incident (fatal accident).
Do not try to test human nature; abandon the fantasy of autonomous driving - IT & Transport - cnBeta.COM

2021-08-17
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems in the form of advanced driver-assistance and semi-autonomous driving features (e.g., NIO's Navigate on Pilot and Tesla's Autopilot) that have been linked to multiple accidents causing physical harm and safety risks. It discusses the systems' technical limitations and human behavioral challenges that contribute to these harms. The involvement of AI in these incidents is clear, and the harms have materialized, meeting the criteria for an AI Incident. The article also mentions regulatory investigations and industry responses, but the primary focus is on the realized harms from AI system use and malfunction.
31-year-old entrepreneur dies in a NIO car: who is to blame? - IT & Transport - cnBeta.COM

2021-08-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly named (NIO Pilot's Navigate on Pilot), which is an AI-based driver assistance system. The accidents described have directly led to fatalities and injuries, fulfilling the harm criteria (a) injury or harm to persons. The AI system's malfunction or limitations in obstacle detection and the insufficient communication of these limitations to users contributed to the incidents. Hence, the AI system's use and malfunction are directly linked to the harm. This meets the definition of an AI Incident rather than a hazard or complementary information, as the harm has already occurred and is clearly attributable to the AI system's role.
Netizens again expose a Tesla owner playing Honor of Kings while on "autonomous driving" on the highway - Warning! - cnBeta.COM

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based driver assistance systems (NIO's Navigate on Pilot and Tesla's Autopilot) and highlights dangerous misuse (playing games while the system controls the vehicle). The reported crash with severe damage and the known history of accidents involving such systems indicate direct or indirect harm caused by the AI system's use or misuse. Therefore, this event meets the criteria for an AI Incident due to realized harm linked to AI system use and malfunction/misuse.
NIO assisted-driving death: the friend who recommended the car speaks out: "extremely guilt-ridden; NIO, please no tricks" - People - cnBeta.COM

2021-08-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article describes a fatal accident involving NIO's driver assistance system, which is an AI system designed to assist driving. The death of a person is a direct harm to health. The ongoing investigation and data extraction issues highlight the AI system's involvement and potential malfunction or failure to prevent the accident. The event meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm (death).
Why do autonomous vehicles lose control? This may be the culprit

2021-08-18
人民网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—the deep neural network-based lane centering assistance system in partially autonomous vehicles. The research shows that adversarial physical attacks can cause the AI system to malfunction, leading to vehicle collisions, which constitute harm to people and property. The harm is realized and direct, not just potential. Hence, this event meets the criteria for an AI Incident as the AI system's malfunction directly led to harm (vehicle collisions and associated safety risks).
NIO responds again to the "Meng Jianke" crash: a professional team has been dispatched to Fujian; Li Bin expresses condolences

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI-based driver assistance feature that was active during the accident. The event involves the use of this AI system leading directly to a fatal traffic accident, which constitutes injury or harm to a person. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (death). The company's response and investigation are ongoing, but the harm has already occurred.
Tesla and NIO both caught in the whirlpool! BYD had the foresight! 08-17

2021-08-17
guba.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems such as Tesla's Autopilot and NIO Pilot, which are advanced driver-assistance systems involving AI technologies like cameras, radar, and navigation integration. Multiple accidents involving these systems have resulted in fatalities and injuries, fulfilling the harm criteria. The regulatory investigation into Tesla's Autopilot system further confirms the AI system's involvement in harm. The article also discusses misleading marketing and inadequate user education, which are indirect factors contributing to harm. Hence, this is a clear AI Incident involving direct and indirect harm caused by AI system use and malfunction.
The truth is finally out! It was NIO's electric car that had the big accident! 8-

2021-08-15
guba.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an automatic driving feature (an AI system) in the NIO ES8 vehicle at the time of the accident, which resulted in a fatality. This directly links the AI system's use to harm to a person, fulfilling the criteria for an AI Incident. The involvement of the AI system is clear, and the harm is realized, not just potential. Hence, it is classified as an AI Incident.
On the evening of August 14, a report concerning "Meiyihao founder Lin Wenqin's car crash

2021-08-15
guba.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (automatic driving feature) in use at the time of a fatal traffic accident causing the death of the driver. The harm (death) is realized and directly linked to the use of the AI system. Although the investigation is ongoing, the AI system's involvement is explicit and central to the incident. Therefore, this event meets the criteria for an AI Incident due to direct harm to a person caused by the use of an AI system.
Problems with "autonomous driving" sent some new-energy-vehicle stocks plunging, which has nothing at all to do with

2021-08-16
guba.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automated driving system (NOP) in a NIO vehicle. The system's use directly led to a fatal accident, causing harm to a person, which meets the criteria for an AI Incident. The article also discusses the broader context of AI system development, marketing, and safety concerns, but the primary focus is on the realized harm caused by the AI system's malfunction or limitations. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Huawei's Su Qing: after high-level autonomous driving is introduced, traffic accidents may become even more severe

2021-08-15
guba.com.cn
Why's our monitor labelling this an incident or hazard?
The content involves AI systems (autonomous driving technology) and discusses the potential for future harm (more severe traffic accidents) due to overreliance on such systems. Since no actual harm or incident has occurred yet, but there is a credible risk of harm in the future, this qualifies as an AI Hazard. The article does not describe a realized incident, nor does it focus on responses or updates to past events, so it is not an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their potential impact.
Meiyihao founder Lin Wenqin dies: crash occurred while driving a NIO ES8 on "autonomous driving"

2021-08-14
金融界网
Why's our monitor labelling this an incident or hazard?
The autonomous driving system is an AI system as it infers from sensor inputs to control the vehicle's navigation and operation. The accident occurred while the autonomous driving feature was enabled, indicating the AI system's involvement in the incident. The harm (death) is a direct consequence of the AI system's use or malfunction, qualifying this event as an AI Incident under the framework.
Just how dangerous is the "autonomous driving" of new-energy carmakers?

2021-08-18
hk.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic driving function (NOP) in a NIO electric vehicle. The system's use directly led to a fatal traffic accident, fulfilling the criteria for an AI Incident as it caused injury and death (harm to a person). The article also discusses the broader context of AI system limitations, misleading marketing, and regulatory challenges, but the core event is the fatal accident caused by the AI system's operation. Hence, the classification is AI Incident.
Traffic police report on the NIO autonomous driving incident: a man surnamed Lin died at the scene after rear-ending a truck

2021-08-18
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of an 'automatic driving' feature (autonomous driving) in a NIO vehicle, which qualifies as an AI system. The fatal crash caused by the vehicle rear-ending a truck while the AI system was active directly caused the death of the driver, fulfilling the harm criterion. The police investigation and official statements confirm the incident's seriousness and the AI system's involvement. Hence, this is an AI Incident due to direct harm caused by the AI system's use.
Entrepreneur dies in crash while driving a NIO! Friends question "autonomous driving"; NIO responds that it is assisted driving

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO ES8's assisted driving system (NOP), which is an AI-based driver assistance technology. The use of this system directly preceded and contributed to a fatal accident, causing harm to a person. The system's role is pivotal as the driver was using the AI-assisted driving mode at the time of the crash, and there is an investigation into possible data tampering by the manufacturer, indicating issues related to the AI system's use and post-incident handling. This meets the criteria for an AI Incident because the AI system's use directly led to injury or harm to a person.
Pre-market briefing: Ministry of Finance to roll out a series of policies supporting new energy vehicles and other sectors

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the Navigate on Pilot (NOP) driving assistance feature, which is an AI-enabled driver assistance system. The use of this system directly led to a fatal traffic accident causing the death of the driver, which is a clear harm to health. The AI system's malfunction or limitations are implicated as a contributing factor. Hence, this is an AI Incident as per the definitions, since the AI system's use directly caused harm.
Another NIO assisted-driving accident: automakers' excessive marketing may be the "culprit"

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the assisted driving system NOP) whose use directly led to a fatal traffic accident, constituting harm to a person. The article explicitly connects the AI system's involvement to the incident and discusses the broader context of AI-assisted driving safety risks and regulatory measures. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to injury and death. The article also includes complementary information about regulatory responses, but the primary focus is the incident and its causes.
NIO responds to "employee summoned for privately accessing the crashed car": no data was altered or deleted

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event describes a fatal accident involving an autonomous vehicle, which is an AI system. The harm (death) has occurred, and the AI system's involvement is central. Although the company denies data tampering or employee misconduct, the accident itself is an AI Incident due to the direct harm caused by the AI system's use or malfunction.
NIO statement: no data altered or deleted / no employees summoned

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred to be involved because the vehicle is a NIO electric car, which typically includes AI systems for autonomous or assisted driving functions. The event involves the use and investigation of data from the vehicle's systems after a fatal accident. However, the article does not report any malfunction or misuse of the AI system causing the accident or harm, nor does it indicate any violation or disruption caused by AI. The statement is a response to rumors and clarifies the company's position, with no new harm or plausible future harm introduced. Therefore, this is Complementary Information providing context and updates on an ongoing investigation involving AI systems in the vehicle, but not reporting a new AI Incident or AI Hazard.
Focus on four controversies! 31-year-old entrepreneur dies in NIO crash. Was "autonomous driving" the cause? Who is responsible?

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NOP assisted driving feature, which is an AI-based driver assistance system. The use of this system is directly linked to a fatal accident causing the death of the driver, which is a clear harm to a person. The investigation into the accident and the role of the AI system in causing or contributing to the crash is ongoing, but the AI system's involvement is central. This meets the definition of an AI Incident because the AI system's use has directly led to injury or harm to a person. The event is not merely a potential hazard or complementary information; it reports a realized harm involving AI.
Well-known 31-year-old entrepreneur dies in crash while driving a NIO; police issue report

2021-08-18
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of an advanced driver assistance system (NOP) that was active during the fatal crash. The harm (death and injury) has materialized, and the AI system's use is directly linked to the incident. Although the system is not a fully autonomous driving system, it is an AI-based assistance feature that influenced vehicle operation. Hence, this is an AI Incident as the AI system's use directly led to harm.
Responsibility for NIO ES8 crash awaits investigation; misuse of "autonomous driving" prompts industry soul-searching

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the assisted driving system NOP) whose use during driving directly led to a fatal accident, causing harm to a person. The article details the system's technical limitations, the risk of misuse or misunderstanding by consumers, and ongoing investigations into responsibility. The AI system's malfunction or limitations contributed to the harm, meeting the criteria for an AI Incident. The article also discusses regulatory responses and industry challenges, but the primary focus is the incident and its implications, not just complementary information or future hazards.
Lawyer in NIO assisted-driving crash case says technicians privately accessed the vehicle involved; NIO responds

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event describes a fatal car accident involving an AI system (NOP, an assisted driving AI system). The AI system's use is directly linked to the incident, and the investigation includes concerns about unauthorized access to vehicle data, which could affect the understanding of the AI system's role. The harm (death) has occurred, and the AI system's involvement is direct and material. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and potential malfunction or misuse have directly led to harm.
Two fatal accidents in 15 days: are NIO cars unsafe, or is autonomous driving taking the blame?

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's NOP automatic driving assistance) whose use during driving directly preceded a fatal accident, indicating the AI system's involvement in causing harm. Although the system is classified as Level 2 (driver-assist, not full autonomy), the accident occurred while the AI system was active, and the article raises questions about the safety and responsibility of such AI systems. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (fatalities).
NIO technicians exposed for privately accessing the vehicle involved: was vehicle data tampered with or destroyed?

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOP, an advanced driver-assistance system) that was active during a fatal accident, which constitutes harm to a person. The unauthorized access and potential tampering with vehicle data by the manufacturer's technical staff further implicate the AI system's role in the incident and raise legal and ethical concerns. The direct link between the AI system's use and the fatal harm, combined with the alleged data interference, meets the criteria for an AI Incident rather than a hazard or complementary information. The event is not merely a potential risk but involves realized harm and ongoing investigation of misconduct related to the AI system.
Fresh Morning Tech | NIO highway assisted-driving death; Shenzhen holds business forum on Amazon "store-closure wave"

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NIO Navigate on Pilot (NOP) assisted driving feature, which is an AI-based driver assistance technology. The use of this AI system directly led to a fatal traffic accident, causing harm to a person. The article clearly states the involvement of the AI system in the incident, fulfilling the criteria for an AI Incident under the OECD framework, as it caused injury or harm to a person through its use.
"Autonomous driving" to blame again? After the NIO crash, Tesla faces a US regulatory investigation! Shares plunge over 4% - 东方财富网

2021-08-17
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of Tesla's Autopilot and NIO's NOP driver-assistance technologies. These systems are AI-based, performing complex real-time driving decisions. The reported crashes, injuries, and death are direct harms linked to the use or misuse of these AI systems. The ongoing investigations by regulatory authorities further confirm the causal link between AI system use and harm. The article also discusses issues of misleading marketing and user misunderstanding, which contribute indirectly to harm. Given the realized physical injuries and fatalities, this qualifies as an AI Incident rather than a hazard or complementary information.
Another accident! 31-year-old NIO owner dies in "autonomous driving" crash; he had founded several well-known brands. Netizens: "assisted driving" is not "autonomous driving"

2021-08-14
东方财富网
Why's our monitor labelling this an incident or hazard?
The NIO ES8's NIO Pilot is an AI-based assisted driving system that supports over 20 driving assistance functions and uses AI chips for autonomous features. The accident occurred while this system was engaged, and the driver died as a result. This clearly meets the definition of an AI Incident because the AI system's use directly led to injury and death. The article also highlights the distinction between assisted driving and full autonomous driving, emphasizing the system's limitations, but the harm has already occurred. Hence, it is not merely a hazard or complementary information but an AI Incident.
Well-known entrepreneur "Meng Jianke" dies in crash; the vehicle was a NIO ES8, possibly due to Navigate on Pilot? Automaker responds: investigating

2021-08-14
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOP driver assistance system) actively used during the fatal accident. The harm is realized (death of the driver), and the AI system's role is pivotal as it was engaged at the time of the crash. The event meets the criteria for an AI Incident because the AI system's use directly led to injury or harm to a person. Although the investigation is ongoing, the presence and use of the AI system in the incident and the resulting fatality justify classification as an AI Incident rather than a hazard or complementary information.
Li Xiang comments on the NIO assisted-driving crash; NIO's former PR director fires back sharply

2021-08-17
东方财富网
Why's our monitor labelling this an incident or hazard?
The NIO ES8's automatic driving feature is an AI system involved in real-time decision-making for vehicle control. The fatal accident directly resulted from the use of this AI system, causing harm to a person, which meets the criteria for an AI Incident. The discussion about terminology and public responses are complementary but do not change the classification. The mercury issue is unrelated to AI and does not affect the classification of the main event.
Pointing at NIO's NOP: well-known brand says its founder died in a crash with assisted driving engaged - 东方财富网

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The NIO NOP system is an AI system providing automated driving assistance, including navigation, speed adjustment, lane changes, and overtaking. The reported fatal accident occurred while the driver had this AI system activated, leading to the driver's death. This constitutes direct harm to a person caused during the use of an AI system, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information, as the harm has already occurred and is linked to the AI system's use.
Tragedy! 31-year-old NIO owner dies in crash after engaging "autonomous driving"! NIO responds; is the autonomous driving concept cooling off?

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO Pilot, an AI-based assisted driving system) whose use directly preceded a fatal car accident, resulting in the death of the driver. This meets the definition of an AI Incident because the AI system's use directly led to harm to a person. The article also discusses regulatory responses and company statements, but the core event is the fatal accident linked to the AI system's use. Hence, it is classified as an AI Incident rather than a hazard or complementary information.
Another fatal NIO accident! 31-year-old owner dies in "autonomous driving" crash; he had founded several brands. NIO responds

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NIO's Navigate on Pilot, an advanced driver-assistance system) whose use directly led to a fatal traffic accident, causing injury and death. The AI system's role is pivotal as it was engaged during the accident, and the harm (death) is realized. The incident fits the definition of an AI Incident because it involves harm to a person caused directly or indirectly by the AI system's use. The company's clarification that NOP is not fully autonomous driving does not negate the AI involvement or the harm caused. Hence, the classification as an AI Incident is appropriate.
Heartbreaking! 31-year-old entrepreneur dies in "autonomous driving" crash; he had founded several well-known brands! NIO responds: investigating - 东方财富网

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NIO Pilot's Navigate on Pilot (NOP) assisted driving system, which is an AI-based driver assistance technology. The fatal accident occurred while this AI system was engaged, directly leading to the death of the driver. This is a clear case where the AI system's use has directly led to harm to a person, meeting the definition of an AI Incident. The article also references multiple other accidents involving NIO vehicles and discusses the safety and regulatory concerns around AI-assisted driving, reinforcing the AI system's pivotal role in the harm. Therefore, the classification as an AI Incident is justified.
31-year-old chain-brand founder dies in crash driving a NIO ES8; assisted driving was engaged beforehand. NIO responds that it is investigating. Will the driverless-vehicle concept take a hit?

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NIO's Navigate on Pilot assisted driving system) that was activated before the fatal crash. The harm (death) directly resulted from the use of this AI system in a real-world scenario. The AI system's involvement is clear and central to the incident, meeting the criteria for an AI Incident due to injury or harm to a person caused directly or indirectly by the AI system's use or malfunction. The article's detailed description of the accident, the system's role, and the ongoing investigation supports this classification.
Entrepreneur dies in NIO crash; staff respond on assisted driving: hands must not leave the wheel

2021-08-17
东方财富网
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI system providing driver assistance. The accident causing the driver's death is a direct harm linked to the use of this AI system. Although the company states the system is not fully autonomous, the AI system's involvement in the accident is clear. Therefore, this qualifies as an AI Incident due to direct harm to a person caused by the use of an AI system in a vehicle.
31-year-old man dies in NIO ES8 crash; NIO responds that it is investigating. Will the driverless sector take a hit?

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system, the NIO Pilot automatic driving assistance, which is an AI-based system developed by NIO. The driver activated this system before the fatal crash, indicating the AI system's involvement in the incident. The harm is realized and severe (death of the driver). Although the investigation is ongoing, the AI system's role in the accident is pivotal and directly linked to the harm. Therefore, this event meets the criteria for an AI Incident.
Amid rampant irregularities, OTA updates are forced under a regulatory "tightening spell"

2021-08-18
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in intelligent electric vehicles, including OTA software updates and autonomous driving features. It reports actual incidents where these AI systems caused or contributed to safety hazards, reduced vehicle performance without consent, and even fatal accidents, fulfilling the criteria for AI Incidents due to direct harm to persons and property. The regulatory measures described are responses to these incidents, not the primary event. Hence, the classification is AI Incident.
Lessons from the NIO ES8 Accident: "Autonomous" Driving Still Needs Manual Control!

2021-08-17
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOP, an AI-based assisted driving system) actively used during a fatal traffic accident causing the death of a person, which constitutes injury or harm to health. The AI system's malfunction or limitations and the user's overreliance on it are central to the incident. The article also highlights the regulatory and safety implications, but the primary focus is the realized harm caused by the AI system's use. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
31-Year-Old Entrepreneur Killed in Crash While Using Assisted Driving; NIO Stresses "NOP Is Not Autonomous Driving"

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI system providing assisted driving functions. The accident happened while the vehicle was in NOP mode, indicating the AI system was in use. The fatal injury to the driver is a direct harm caused during the use of this AI system. The event meets the criteria for an AI Incident as it involves direct harm to a person resulting from the use of an AI system. Although the investigation is ongoing, the involvement of the AI system in the fatal crash is clear and central to the event.
31-Year-Old NIO Owner Dies in Crash; He Had Founded Several Well-Known Brands

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of the NIO Pilot automatic driving system (an AI system) during the accident. The driver died as a result of the crash while the AI-assisted driving feature was active, indicating direct harm caused by the AI system's use or malfunction. This meets the criteria for an AI Incident, as the AI system's use directly led to injury and death. Although the company states that the feature is not fully autonomous driving, the system is an AI-based driver assistance system, and its involvement in the fatal accident is clear. Hence, the event is classified as an AI Incident.
Amid the Autonomous Driving Accident Controversy, Baidu Makes a Splash With a "Robocar" That Has No Steering Wheel or Pedals

2021-08-18
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system: Baidu's L5 autonomous driving vehicle operating without human control. The use of this system on public roads without a driver contravenes existing regulations and poses plausible risks of harm, including accidents or fatalities, as evidenced by recent incidents with other companies' assisted driving systems. However, no actual harm or accident involving Baidu's vehicle is reported in the article. The focus is on the potential for harm due to regulatory non-compliance and the early stage of technology maturity, which fits the definition of an AI Hazard (plausible future harm). It is not Complementary Information because the article is not primarily about responses or updates to past incidents but about the announcement and demonstration of a potentially risky AI system. It is not an AI Incident because no realized harm is described.
Well-Known Company Founder Dies in Crash! Two NIO Drivers Killed in Accidents Within One Month

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's assisted driving AI system (NOP) during the fatal accidents, which directly caused harm to persons (deaths) and property (vehicle and infrastructure damage). The AI system's malfunction or limitations in the assisted driving feature contributed to the incidents. The harm is realized and significant, meeting the criteria for an AI Incident. The article also notes ongoing investigations but does not indicate that harm was averted or only potential, so it is not an AI Hazard or Complementary Information. It is not unrelated because AI system involvement is clear and central to the event.
NIO Repeatedly Embroiled in Safety Controversies; L2 Automated Driving Exposes Recognition and Perception Flaws

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (NIO Pilot L2-level automated driving system with NOP functionality) during the fatal accidents. The system's failure to detect static obstacles and the resulting collisions directly caused harm (fatalities). The accidents are described as involving the AI system's malfunction or limitations, fulfilling the criteria for an AI Incident. The harm is realized (deaths), and the AI system's role is pivotal in the chain of events leading to the harm. Hence, the classification as AI Incident is justified.
NIO Vehicle Reached a Top Speed of 114.6 km/h When the Accident Occurred

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The Nio Pilot and NOP systems are AI systems providing autonomous driving assistance, including speed control, lane keeping, and navigation. The accident occurred while these systems were active, and a fatality resulted, which constitutes injury or harm to a person. The AI system's use and possible malfunction or limitations contributed directly or indirectly to the harm. Therefore, this event qualifies as an AI Incident under the framework.
Two NIO Drivers Killed in Accidents Within One Month: Should "NOP Automatic Assistance" Take the Blame?

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's assisted driving AI system (NOP) at the time of fatal accidents, which directly caused harm (deaths and vehicle damage). The AI system's involvement is clear and central to the incident. The harm is realized, not potential. Although the company states NOP is not fully autonomous driving, the system is an AI-based assisted driving feature whose malfunction or misuse plausibly contributed to the accidents. Hence, this meets the criteria for an AI Incident rather than a hazard or complementary information.
NIO Owner Dies in Crash; He Had Founded Several Well-Known Brands

2021-08-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The autonomous driving system (an AI system) was activated and in use at the time of the accident, which directly led to the fatal injury of the driver. The event involves the use and malfunction or failure of the AI system to prevent harm, fulfilling the criteria for an AI Incident under the definition of injury or harm to a person caused directly or indirectly by an AI system.
From Tesla to NIO: How Can Drivers Escape the "Death Curse" of Autonomous Driving?

2021-08-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of advanced driver-assistance systems (NOP, NOA, NGP) that use AI for navigation and vehicle control. It details multiple accidents where these AI systems were active and contributed to collisions with stationary vehicles, resulting in deaths and injuries. This meets the definition of an AI Incident because the AI system's use directly led to harm to persons. The article also discusses ongoing investigations and regulatory measures, but the primary focus is on the realized harms caused by the AI systems' malfunction or limitations, not just potential future risks or complementary information.
Driving Data from NIO Owner's "Autonomous Driving" Crash Released: Only One Instance of Sudden Acceleration, No Sudden Deceleration

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (NIO Pilot autonomous driving and NOP navigation) during the trip that ended in a fatal accident. The involvement of the AI system in the vehicle's operation and the resulting death constitute direct harm to a person caused by the AI system's use. Therefore, this event meets the criteria for an AI Incident as per the definitions provided.
NIO's "Tesla-Style" Predicament

2021-08-17
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (NIO Pilot, Tesla Autopilot) involved in fatal crashes causing loss of life, which is a direct harm to persons. The AI systems' malfunction or limitations (e.g., inability to detect static obstacles) are central to the incidents. The involvement of AI in these accidents is direct and causal, meeting the definition of an AI Incident. The article does not merely discuss potential risks or future hazards but reports actual incidents with fatalities linked to AI system use. Therefore, the event is classified as an AI Incident.
Tesla and NIO Caught in the Same Vortex: Autonomous Driving Needs to Rethink More Than Its Sales Pitch

2021-08-17
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Tesla Autopilot, NIO Pilot, and similar advanced driver-assistance systems) whose malfunction or limitations have directly led to fatal accidents and injuries, fulfilling the criteria for harm to persons. The investigation by NHTSA and the reported accidents in China confirm realized harm. The discussion of misleading marketing and inadequate user education indicates indirect contributions to harm through misuse or misunderstanding. Therefore, this is an AI Incident rather than a hazard or complementary information. The article's focus is on the harms caused and ongoing investigations, not just potential risks or responses.
In Focus | From Tesla to NIO, Fatal Accidents Pit Autonomous Driving Against Human Weakness

2021-08-16
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of advanced driver-assistance systems (NIO Pilot, Tesla Autopilot) that use AI-based perception and decision-making technologies. The fatal accidents described are directly caused by the use of these AI systems in assisted driving modes, which failed to detect static or irregular obstacles, leading to collisions and deaths. The harm (fatalities) has occurred, and the AI system's malfunction or limitations are a direct contributing factor. Therefore, this event meets the criteria for an AI Incident as defined, involving injury or harm to persons due to the use and malfunction of AI systems.
NIO L2 Fatality: Consumers Should Be Wary of "Autonomous Driving"

2021-08-17
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of L2 automated driving assistance (NIO Pilot/NOP). The accidents resulted in fatalities, which is direct harm to persons. The AI system's role is pivotal as the accidents occurred while the AI driving assistance was active, and there is a plausible failure or limitation in the system's detection capabilities. The article also discusses the misuse or overreliance on the AI system due to misleading marketing, which contributed indirectly to the harm. Hence, the event meets the criteria for an AI Incident due to direct and indirect causation of harm by an AI system.
NIO's Back-to-Back Accidents: Can Autonomous Driving Only Evolve the Hard Way?

2021-08-16
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-based advanced driver-assistance systems (NOP) in NIO vehicles, which are explicitly described as not fully autonomous but still capable of influencing driving decisions. The fatal accidents involving these systems demonstrate direct harm to human life caused or contributed to by the AI system's use or malfunction. The article also discusses the dangers of misleading marketing and user misunderstanding, which can lead to overreliance on the AI system and consequent harm. Given the realized harm (fatalities) linked to the AI system's use, this is classified as an AI Incident rather than a hazard or complementary information.
NIO Faces Its Tribulation as Autonomous Driving Crosses the Line Between Life and Death

2021-08-17
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic driving assistance system (NOP) that was engaged during the fatal accident. The system's limitations and hardware deficiencies are discussed as contributing factors to the accident. The harm (death of the driver) has occurred and is directly linked to the use and malfunction/limitations of the AI system. This fits the definition of an AI Incident, as the AI system's use directly led to injury and death. The article also discusses broader implications and safety concerns but the primary focus is on the incident itself.
Behind the NIO Accident: The "Fatal Curve" Between Assisted Driving and Autonomous Driving

2021-08-17
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an assisted driving system (NOP) used in the NIO ES8 vehicle. The fatal accident occurred while this AI system was active, directly causing harm (death of a person). The article also discusses the broader context of AI system misuse, user overreliance, and misleading marketing, which are factors contributing to the incident. This fits the definition of an AI Incident, as the AI system's use directly led to injury or harm to a person. The detailed discussion of regulatory and responsibility issues further supports the classification as an incident rather than a hazard or complementary information.
Will NIO Fall Into a "Tesla-Style" Crisis?

2021-08-16
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (NIO's NOP assisted driving) during a fatal traffic accident, directly causing harm to a person. The article explicitly states the vehicle was in NOP mode at the time of the crash, linking the AI system's use to the incident. The harm is realized (death of the driver), meeting the criteria for an AI Incident. The discussion of regulatory responses and company statements supports the context but does not change the classification. Hence, this is an AI Incident due to the direct involvement of an AI system in causing injury/death.
31-Year-Old NIO Owner Dies in "Autonomous Driving" Crash; NIO Responds: Investigation Underway

2021-08-16
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NIO's Navigate on Pilot, an AI-based driver assistance system) whose use directly led to a fatal traffic accident, causing the death of the driver. The AI system was active during the crash, and the harm (death) is directly linked to the AI system's operation or limitations. This meets the definition of an AI Incident as the AI system's use directly led to injury or harm to a person. The article also references multiple similar accidents involving AI-assisted driving in NIO vehicles, reinforcing the classification as an AI Incident rather than a hazard or complementary information.
Don't Use Autonomous Driving to Test Human Nature

2021-08-17
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (assisted driving/autonomous driving features) whose use directly led to a fatal accident (harm to a person). The article details multiple such incidents and the systemic risks of assisted driving AI systems, including user overreliance and misunderstanding, which have caused real harm. The involvement of AI in the development, use, and malfunction of these systems is explicit. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to injury and death, fulfilling the criteria for harm to persons.
NIO: From Leading the Pack to Trailing It

2021-08-16
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions two fatal accidents involving NIO vehicles equipped with driver assistance (autopilot) functions, which are AI systems designed to assist or automate driving tasks. These accidents resulted in deaths, fulfilling the harm criterion. The AI system's role is pivotal as the accidents raise doubts about the safety and reliability of NIO's AI-based driving assistance. Hence, this qualifies as an AI Incident due to direct harm caused by the use or malfunction of an AI system.
Just How Dangerous Is the "Autonomous Driving" of New-Energy Carmakers?

2021-08-17
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system—specifically, an automatic driving system (NOP) on a NIO ES8 vehicle. The system was active at the time of a fatal traffic accident, directly leading to harm (death of the driver). The article provides detailed evidence of the AI system's involvement and the resulting harm, fulfilling the criteria for an AI Incident. The discussion of overpromising capabilities and regulatory responses further supports the classification but does not detract from the primary incident classification. Therefore, this is an AI Incident due to the direct causal link between the AI system's use and the fatal harm.
Lessons from the NIO Accident: Overhyped Autonomous Driving and Misled Drivers

2021-08-16
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's automatic driving assistance system (NOP) at the time of fatal accidents, indicating the AI system's involvement in the incidents. The harm is direct and severe—loss of life due to reliance on an imperfect AI driving aid. The article also references similar incidents with Tesla and XPeng, reinforcing the pattern of risk associated with these AI systems. The discussion of regulatory measures further confirms the recognition of these harms. Hence, this is an AI Incident as the AI system's malfunction or limitations have directly led to injury and death.
China Resources Microelectronics: First-Half Net Profit of RMB 1.068 Billion, Up 164.86% Year-on-Year

2021-08-18
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically Tesla's Level 2 autonomous driving system, which is under formal investigation by a regulatory body due to safety concerns. However, the article does not report any realized harm or incident caused by the AI system; rather, it focuses on the investigation and potential solutions. Therefore, this is a case of a plausible risk being assessed and addressed, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Two NIO Drivers Killed in Accidents Within One Month; NIO Had Said: Assisted Driving Does Not Equal Autonomous Driving

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (NIO's assisted driving/autopilot system) during the accidents. The accidents caused fatalities and severe damage, which are direct harms to persons and property. Although NIO states that the system is an assistance feature and not full autonomous driving, the AI system's involvement in the incidents is clear. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the resulting harm.
NIO's Biggest Crisis: Two Fatal Accidents in Quick Succession, With Multiple Owners Reporting Crashes While Using Automatic Assistance

2021-08-16
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of NIO's automated driving assistance (NOP and NP), which are AI systems that control vehicle navigation and driving tasks. The accidents described resulted in fatalities and injuries, directly linked to the AI system's failure to detect obstacles or the driver's overreliance on the system. This meets the definition of an AI Incident as the AI system's use and malfunction directly led to harm to persons. The article also discusses the broader implications and responses but the core event is the fatal accidents caused by AI system malfunction or misuse.
31-Year-Old Entrepreneur Dies Driving a NIO: Are EV Makers' Overblown Marketing Claims Blurring "Assisted Driving" and "Autonomous Driving"?

2021-08-15
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's Navigate on Pilot, a driver assistance AI system) that was active during a fatal crash. The harm (death of the driver) has occurred and is linked to the use and misunderstanding of the AI system's capabilities. The article discusses the confusion between 'assisted driving' and 'autonomous driving' caused by marketing language, which plausibly contributed to the incident. Since the AI system's use directly or indirectly led to harm (fatal injury), this is classified as an AI Incident rather than a hazard or complementary information. The presence of the AI system, the harm caused, and the discussion of misleading promotion all support this classification.
The Full Story of NIO's "Autonomous Driving Fatality": Carmaker and Owners Alike Were Too Optimistic About an Immature yet Fatally Tempting Technology

2021-08-16
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOP assisted driving system, which was active at the time of the fatal crash. The system's failure to detect obstacles and prevent collision directly led to the death of the driver, fulfilling the criteria for an AI Incident due to injury or harm to a person caused by the use and malfunction of an AI system. The article also discusses the broader context of misleading marketing, user misunderstanding, and regulatory gaps, but the core event is a realized harm caused by the AI system's malfunction and use. Therefore, this is classified as an AI Incident.
Unpacking the NIO Crash: Assisted Driving Should Ease the Driver's Load, Not Lower Their Guard

2021-08-16
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an L2 level assisted driving system (NIO Pilot with NOP feature). The system's use directly led to a fatal traffic accident, fulfilling the criteria for an AI Incident as it caused injury and death (harm to a person). The article details how the AI system's limitations and the driver's overreliance on it contributed to the accident. This is a direct harm caused by the AI system's malfunction or misuse, not merely a potential risk or complementary information. Therefore, the classification is AI Incident.
Behind the Death of a 31-Year-Old Entrepreneur in a NIO

2021-08-16
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOP assisted driving system) that was active during a fatal traffic accident, directly linked to the harm (death of a person). The system is an AI-based advanced driver assistance system that influences vehicle control decisions. The article details the accident circumstances, the system's capabilities and limitations, and the legal and ethical implications of its use and marketing. The harm (fatal injury) has occurred, and the AI system's role is pivotal, meeting the criteria for an AI Incident. Although the system is not fully autonomous (Level 2), its use and limitations contributed to the incident. Therefore, this event is classified as an AI Incident.
Employees Summoned for Privately Accessing the Crashed Vehicle? NIO Denies It, While the Family and the Company Tell Conflicting Stories

2021-08-16
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The NIO Pilot assisted driving system is an AI system providing autonomous or semi-autonomous driving assistance. The accident occurred while the system was active, and the driver died in the collision. The report discusses the AI system's potential failure to detect a maintenance vehicle, which may have led to the crash. This constitutes direct or indirect harm caused by the AI system's use. The involvement of the AI system in the fatal accident and the ongoing investigation into its role meet the criteria for an AI Incident, as the AI system's malfunction or use has directly or indirectly led to injury or harm to a person.
Something Fishy at NIO: Deliberately Blurring "Assisted Driving" and Tampering With the Crashed Car's Data?

2021-08-17
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically an advanced assisted driving system that uses AI techniques such as sensor fusion and algorithmic decision-making. The system was active during the accident, which caused a fatality, thus fulfilling the criteria for an AI Incident due to direct harm to a person. Additionally, the unauthorized manipulation of accident data by the manufacturer’s employees raises concerns about transparency and accountability related to the AI system's role in the incident. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
With "Autonomous Driving" Being Misused, How Can a Car Take to the Road Safely?

2021-08-18
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The NIO ES8 vehicle is described as equipped with an automated driving assistance system, which qualifies as an AI system due to its autonomous or semi-autonomous driving capabilities. The reported fatal accidents directly involve this AI system, leading to loss of life, which is a clear harm to persons. Therefore, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm.
"Autonomous" Driving? Stop the Hype

2021-08-18
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-based advanced driver-assistance systems (ADAS) such as NIO's NOP and Tesla's Autopilot, which are AI systems designed to assist driving. The fatal accident occurred while the AI system was engaged, and the article links the harm (death and injuries) directly to the AI system's use and the users' overreliance on it. The article also references multiple prior accidents caused by similar AI systems. The harm is realized and significant (loss of life, injuries), and the AI system's malfunction or misuse is a contributing factor. Therefore, this is an AI Incident rather than a hazard or complementary information.
The Tragedy of the NIO Accident Shatters the Myth of Autonomous Driving

2021-08-18
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NIO ES8's driver assistance system with AI capabilities for perception and control) whose malfunction directly led to a fatal accident, causing injury and death. The system's failure to recognize a maintenance vehicle and prevent a collision is a clear example of an AI malfunction causing harm to a person. The presence of the AI system is explicit, and the harm is realized and severe. Despite the company's claim that the system is not fully autonomous, the AI system was engaged and responsible for the failure. This meets the criteria for an AI Incident as defined by the framework.
Two Fatal Crashes in Half a Month Put NIO's "Future" in the Spotlight

2021-08-17
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The NIO Pilot (NOP) is an AI system providing driver assistance with semi-autonomous features. The article describes two fatal accidents where the AI system was in use or implicated, leading directly to loss of life, which is harm to persons. The discussion of battery fires and system limitations further indicates malfunction or safety issues related to the AI system or its integration. The direct causal link between the AI system's use and the fatalities meets the criteria for an AI Incident. The article also mentions regulatory responses, but the primary focus is on the incidents themselves, not just complementary information or hazards. Hence, the classification is AI Incident.
Lessons from the NIO Accident: Overhyped Autonomous Driving and Misled Drivers

2021-08-16
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-based automatic driving assistance systems (NOP, NOA, NGP) in vehicles that have been involved in fatal accidents, directly causing harm to human life. The systems are still in beta or testing phases, with known limitations and risks, and the overtrust by users influenced by marketing has contributed to these harms. The involvement of AI systems in causing injury or death meets the definition of an AI Incident. The article also discusses regulatory responses, but the primary focus is on the realized harm from AI system use in these accidents, not just potential or complementary information.
Just How Dangerous Is the "Autonomous Driving" of New-Energy Carmakers?

2021-08-17
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (automatic driving system NOP) in a vehicle that directly led to a fatal accident, causing harm to a person. The article explicitly states the system was active during the crash, and the accident is linked to the system's limitations or failures. This meets the definition of an AI Incident, as the AI system's use directly caused injury or death. The article also discusses systemic issues of misleading marketing and safety risks, reinforcing the incident classification rather than a mere hazard or complementary information. Hence, the classification is AI Incident.
31-Year-Old Entrepreneur Dies Driving a NIO: Are EV Makers' Overblown Marketing Claims Blurring "Assisted Driving" and "Autonomous Driving"?

2021-08-15
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NIO Pilot, a driver assistance AI system) whose use directly led to a fatal accident. The harm is realized (death of the driver and damage to property). The confusion around the system's capabilities, partly due to marketing language, likely contributed to misuse or overreliance, which is an indirect causal factor. The system is explicitly described as AI-based driver assistance, and the accident occurred while it was active. This fits the definition of an AI Incident because the AI system's use and the misunderstanding of its capabilities directly led to harm to a person. The article also discusses broader governance and communication issues but the primary event is the fatal accident linked to AI system use.
Behind the NIO Accident: The "Fatal Curve" Between Assisted Driving and Autonomous Driving

2021-08-18
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NIO NOP driver assistance feature, which is an AI-based system for assisted driving. The fatal accident directly resulted in harm to a person (death of the driver) while the AI system was active, fulfilling the criteria for an AI Incident. The article also discusses systemic issues such as misleading advertising and regulatory gaps, but the core event is a realized harm caused or contributed to by the AI system's use or misuse. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Will NIO Fall Into a "Tesla-Style" Crisis?

2021-08-16
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the fatal accident occurred while the NIO vehicle's AI-based NOP driver assistance system was active, indicating direct involvement of an AI system in the incident. Multiple other accidents involving NIO vehicles with AI-assisted driving features are also described, some resulting in fatalities or serious injuries. The discussion about driver overreliance and misleading marketing further supports the AI system's role in causing harm. Given the realized harm to human life and safety directly linked to the AI system's use, this qualifies as an AI Incident under the OECD framework.
Another Fatal NIO Accident: The Full Story of a 31-Year-Old Entrepreneur's "Autonomous Driving" Death | TMTPost Exclusive

2021-08-14
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOP assisted driving feature in a NIO vehicle. The use of this AI system directly led to a fatal traffic accident causing the death of the driver, which is a clear harm to a person. The article provides detailed information about the AI system's role, the accident circumstances, and the resulting harm. This meets the definition of an AI Incident because the AI system's use directly caused injury or harm to a person. The ongoing investigation and manufacturer statements do not negate the fact that the AI system was active and involved at the time of the fatal harm.

Behind the Death of a 31-Year-Old Entrepreneur in a NIO: Are Consumers Test Subjects?

2021-08-17
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOP assisted driving system, which is an AI-based driver assistance technology. The use of this AI system directly led to a fatal accident, causing harm to a person, which fits the definition of an AI Incident. The article details the malfunction or limitations of the AI system in detecting hazards, the resulting death, and the legal and ethical implications. Therefore, it is not merely a hazard or complementary information but a realized harm caused by AI system use.

[TMT Morning Report] Meiyihao Founder Dies in NIO ES8 Crash; Dida Chuxing Responds to Driver Beating Female Passenger; Police Report on "Alibaba Female Employee Assault" Case: Two Suspected of Forcible Molestation

2021-08-15
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The Nio ES8's NOP system is an AI system providing autonomous driving assistance. The accident happened while the vehicle was in NOP mode, indicating the AI system was in control or assisting at the time of the crash. The fatality is a direct harm caused by the AI system's use or malfunction. Therefore, this qualifies as an AI Incident under the definition of an event where AI system use has directly led to injury or harm to a person.

"Automatic" Driving? Stop the Hype

2021-08-18
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of advanced driver-assistance systems (ADAS) used in vehicles like NIO and Tesla. These systems use AI to perform tasks such as lane keeping, adaptive cruise control, and collision warnings. The fatal accident and other reported crashes are directly linked to the use and overreliance on these AI systems, which have limitations and require human supervision. The harm includes death and injuries, fulfilling the criteria for an AI Incident. The article also discusses misleading marketing and regulatory issues, but the primary focus is on realized harm caused by AI system use and malfunction, not just potential harm or complementary information. Hence, the classification is AI Incident.

Don't Use Autonomous Driving to Test Human Nature

2021-08-18
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an assisted driving system (NOP mode) that was active during a fatal car crash, causing the death of the driver. The article details how the AI system's limitations and the driver's reliance on it without sufficient attention led to harm. This fits the definition of an AI Incident because the AI system's use directly contributed to injury and death (harm to a person). The article also references prior similar incidents and regulatory responses, reinforcing the classification. The presence of the AI system, its use, and the resulting harm are clearly established, meeting the criteria for an AI Incident rather than a hazard or complementary information.

How Can NIO Defend Its Future?

2021-08-17
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOP autonomous driving feature in a NIO vehicle. The use of this AI system directly led to a fatal traffic accident causing harm to a person, fulfilling the criteria for an AI Incident. The article details the harm (death) caused and the AI system's involvement in the accident. Although the article also covers broader business and market context, the primary AI-related event is the fatal accident caused by the AI system's use or malfunction. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

[TMT Morning Report] NIO Statement: The Company Has Not Deleted or Altered Any Data, and No Employees Have Been Summoned by Police; Kris Wu Arrested on Suspicion of Rape; SoftBank: Media Misunderstood Masayoshi Son's Remarks, Will Continue Investing in the Chinese Market

2021-08-16
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The NIO vehicle involved is equipped with an AI-based driver assistance system. The article discusses the extraction and investigation of data from this system following a fatal accident, but there is no confirmed AI malfunction or misuse causing harm. The company denies data tampering and states it is cooperating with authorities. The event is an update on an ongoing investigation rather than a report of an AI Incident or Hazard. Other news items in the article are unrelated to AI harms. Hence, the classification is Complementary Information, as it enhances understanding of the AI system's role in the incident without confirming harm or plausible future harm caused by AI.

[Huanan Live Room] Who Is Responsible for Autonomous Driving Safety?

2021-08-18
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an advanced driver assistance system (NOP) with autonomous driving features. The use of this AI system directly led to a fatal car accident, causing harm to a person, which fits the definition of an AI Incident. The discussion of the accident, ongoing investigations, and legal considerations further confirm the AI system's pivotal role in the harm. Although the exact cause is under investigation, the AI system's involvement in the accident and resulting death is clear and direct.

NIO: Tesla's Ailments Without Tesla's Fortune

2021-08-18
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes fatal accidents caused while NIO vehicles were operating with their AI-based automated driving assistance system (NOP). This system is an AI system providing automated driving assistance, and its malfunction or limitations have directly caused deaths, fulfilling the criteria for harm to persons. The article also discusses the misleading marketing of the system as 'automatic driving' when it is only Level 2 assistance, which can lead to user overreliance and accidents. The involvement of the AI system in these fatal incidents is direct and clear, and the harm is realized, not just potential. Hence, this is an AI Incident.

31-Year-Old Entrepreneur Dies in NIO Crash! Friend: He Deeply Trusted NOP

2021-08-17
驱动之家
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI-based driving assistance system that infers from inputs to assist driving decisions. The accident occurred while the system was engaged, and the driver had significant trust in it, indicating reliance on the AI system. The fatal crash is a direct harm caused during the use of this AI system, fulfilling the criteria for an AI Incident involving injury or harm to a person. Although the investigation is ongoing, the direct link between the AI system's use and the fatal outcome is clear from the description.

Fatal NIO Assisted Driving Crash: Li Xiang Weighs In; Former NIO PR Director Offers Sharp Mockery

2021-08-17
驱动之家
Why's our monitor labelling this an incident or hazard?
The NIO vehicle's automatic driving assistance system qualifies as an AI system because it performs autonomous driving tasks. The fatal accident directly resulted from the use of this AI system, constituting injury and harm to a person. Therefore, this is an AI Incident. The additional commentary and calls for terminology standardization are complementary but do not change the classification of the core event.

Fatal NIO Assisted Driving Crash! Li Xiang Proposes New Naming Standards; Zhou Hongyi Responds

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—NIO's automatic driving assistance (NOP navigation) feature—which was active at the time of a fatal accident. The harm (death of the driver) is directly linked to the use of this AI system. The discussion about terminology and marketing practices further highlights issues of user misunderstanding and potential misuse, which are relevant to the incident's context but do not negate the fact that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct causal link between the AI system's use and the fatal harm.

Brand Founder Dies in Crash! Lawyer: NIO Staff Summoned for Accessing the Accident Vehicle Without Authorization

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The accidents directly involve AI systems (NIO's autonomous driving features) in operation at the time of fatal crashes, fulfilling the criteria for AI Incidents as the AI system's use has directly led to harm (death). The unauthorized access and potential tampering with vehicle data by NIO staff further implicate the AI system's role in the incident and legal responsibility. The presence of AI in the vehicles' operation and the resulting fatalities meet the definition of AI Incident under harm to persons. The report is not merely about potential harm or complementary information but describes actual harm caused by AI system use.

Fatal NIO Assisted Driving Accident! "Suicide Devices" That Defeat Safety Sensors Still on Sale Online

2021-08-17
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: the automatic driving assistance (NOP pilot) in the NIO vehicle, which is an AI system that infers from sensor inputs to control vehicle behavior. The fatal accident is directly linked to the use of this AI system, constituting an AI Incident due to harm to a person (death). Furthermore, the article discusses the misuse of the AI system via aftermarket devices that trick safety sensors, increasing the risk of accidents. Since harm has already occurred and the AI system's malfunction or misuse is a contributing factor, this qualifies as an AI Incident rather than a hazard or complementary information.

Fatal NIO Accident! Aiways Founder: Even Tesla Hasn't Achieved Autonomous Driving; Li Xiang's Proposal Is Rudimentary

2021-08-18
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the assisted driving feature of the NIO ES8) which directly led to a fatal accident, thus causing harm to a person. The article explicitly mentions the use of an AI-based assisted driving system and the resulting death, which qualifies as an AI Incident under the definition of harm to a person caused directly or indirectly by the use or malfunction of an AI system. The discussion about the state of the technology and calls for improved safety and regulation further support the classification as an AI Incident rather than a hazard or complementary information.

Authorities Summon NIO Personnel: If Vehicle Data Was Tampered With, NIO Bears Full Responsibility!

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The autonomous driving system qualifies as an AI system because it infers from sensor inputs to control the vehicle autonomously. The accident caused a fatality, which is a direct harm to a person. The investigation into possible data tampering by NIO personnel relates to the development or use of the AI system and its role in the incident. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use and the ongoing investigation into potential misconduct affecting the incident's outcome.

Fatal NIO Highway Assisted Driving Crash! An EC6 Owner Recounts His Own Terrifying Moment

2021-08-17
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic driving assistance system (NOP navigation assist) that failed to detect hazards, leading to a fatal accident and a near-miss incident. The harm (death and risk of collision) is directly linked to the AI system's malfunction or failure to act, fulfilling the criteria for an AI Incident. The involvement is through the use and malfunction of the AI system. Therefore, this event qualifies as an AI Incident.

NIO ES8 Assisted Driving Crash! Have Netizens Forgotten the Overturned Construction Vehicle at the Scene?

2021-08-18
驱动之家
Why's our monitor labelling this an incident or hazard?
The NIO ES8's 'NOP intelligent pilot system' qualifies as an AI system providing automatic driving assistance. The accident occurred while this AI system was active, and the fatal injury to the driver is a direct harm linked to the AI system's use. Although other factors (such as the construction vehicle's behavior) contributed, the AI system's involvement in the fatal crash meets the criteria for an AI Incident because the AI system's use directly led to harm (death). The article does not merely discuss potential or future harm but reports a realized fatal incident involving AI use. Therefore, the classification is AI Incident.

US Opens Investigation into Tesla's Autonomous Driving: Difficulty Recognizing Emergency Vehicles Parked Roadside

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
Tesla's autonomous driving system is an AI system that makes real-time decisions to control the vehicle. The investigation is triggered by the system's failure to properly identify emergency vehicles, which has directly contributed to accidents causing injuries and a fatality. This meets the criteria for an AI Incident because the AI system's malfunction has directly led to harm to persons. The discussion about terminology by the Li Auto CEO is complementary but does not change the classification of the main event.

Zhou Hongyi Backs Li Xiang's View on "Autonomous Driving"! Neta Auto: Marketing Must Not Mislead Users

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
An AI system (the automatic driving function) was in use at the time of the accident, which directly led to a fatal incident (harm to a person). Although the exact cause is still under investigation, the involvement of the AI system in the accident is explicit and the harm has occurred. The discussion about terminology and marketing relates to the use and understanding of AI systems in autonomous driving, highlighting risks of user misunderstanding that can contribute to harm. Therefore, this event qualifies as an AI Incident due to the realized harm linked to the AI system's use.

Lin Wenqin's Family and Friends Question NIO's Statement: Witnesses Saw NIO Staff Connecting the Accident Vehicle to a Charger

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The vehicle was operating with its autonomous driving feature enabled, which is an AI system. The fatal accident directly caused harm to a person, fulfilling the criteria for an AI Incident. The controversy about the handling of the vehicle post-accident relates to investigation procedures but does not negate the AI system's involvement in the harm. Hence, the event is classified as an AI Incident due to the direct link between the AI system's use and the fatal harm.

In Fatal Crashes Involving Autonomous Driving, No Party Is Innocent

2021-08-17
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves a Level 2 assisted driving AI system that was active during a fatal car crash, directly leading to the death of a person. The article discusses the AI system's technical limitations, user responsibilities, and industry practices contributing to the harm. The AI system's involvement is explicit and central to the incident, and the harm (death) has occurred. Hence, this meets the criteria for an AI Incident as the AI system's use and possible malfunction or limitations directly led to injury and death.

31-Year-Old Entrepreneur Dies in Crash While Using NIO's Autonomous Driving; Li Xiang Calls for Standardized Chinese Terminology for Autonomous Driving

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The incident involves the use of an AI system (NIO's autopilot/automatic driving assistance) whose activation directly led to a fatal accident, fulfilling the criteria for an AI Incident due to harm to a person. The subsequent discussion about terminology standardization and realistic communication about AI capabilities is complementary information but does not negate the primary classification of the fatal accident as an AI Incident. Therefore, the event is primarily an AI Incident with complementary information present.

31-Year-Old Entrepreneur Lin Wenqin Dies After Using NIO's Autonomous Driving; NIO Issues Statement

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the autonomous driving feature of the NIO ES8. The use of this AI system directly led to a fatal traffic accident, causing injury and death, which qualifies as harm to a person. Therefore, this is an AI Incident due to the direct involvement of an AI system in causing harm.

NIO Says No Employees Were Summoned: Powering Down the Accident Vehicle Does Not Cause Data Loss

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The article focuses on the company's statement clarifying facts about the accident investigation and data handling related to the AI system in the vehicle. There is no indication of new harm caused by AI, nor a plausible future harm scenario presented. The event is an update and response to previous concerns, thus it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Fatal Accident Involving NIO's NOP Pilot Function! Lin Wenqin's Relatives Speak Out: Forensic Examination of the Accident Vehicle to Begin

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI system providing automated driving assistance. The accident resulted in the death of the driver, which is a direct harm to a person caused by the use of the AI system. The article explicitly links the use of the AI system to the fatal crash. Although the company clarifies that NOP is not full autonomous driving and requires driver attention, the system's involvement in the accident is clear. Hence, this is an AI Incident as the AI system's use directly led to harm (death).

Fatal NIO Highway Assisted Driving Accident! Who Is Responsible in an L2 Crash? WM Motor's Shen Hui Weighs In

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (L2-level automated driving assistance) whose use directly led to a fatal traffic accident, causing harm to a person. The AI system's role is pivotal as it was engaged during the accident, and the discussion centers on responsibility related to the AI system's capabilities and limitations. This meets the definition of an AI Incident because there is direct harm to a person resulting from the use of an AI system. The article also references ongoing investigations and public discourse about the AI system's safety and marketing, but the primary focus is the fatal incident itself, not just complementary information or potential hazards.

NIO Once Claimed It Was Safer Than Human Driving! Lin Wenqin Repeatedly Asked About Autonomous Driving Before Buying

2021-08-17
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the autonomous driving system of the NIO ES8 vehicle. The use of this AI system directly led to a fatal traffic accident, causing harm to a person. The deceased had relied on the AI system's capabilities, which were a core reason for purchasing the vehicle. The harm (death) is realized and directly linked to the AI system's operation. Hence, this meets the criteria for an AI Incident as per the definitions provided.

Official Report on the "NIO Assisted Driving Crash": Death Caused by Collision with a Construction Vehicle Ahead

2021-08-18
驱动之家
Why's our monitor labelling this an incident or hazard?
The event describes a fatal car crash involving an AI system (NIO's automatic driving assistance). The AI system's use directly led to harm (death of the driver and injury to another person). Although the exact cause is under investigation, the AI system's involvement in the accident is clear and central. Therefore, this qualifies as an AI Incident due to direct harm caused by the use of an AI system in a real-world scenario.

Fatal Highway Assisted Driving Accident! Media Tests NIO's AEB: Results Leave Much to Be Desired

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the automatic driving and AEB system in a NIO vehicle. The system's malfunction or insufficient performance directly led to a fatal traffic accident, causing injury and death, which fits the definition of an AI Incident. The article provides evidence of the AI system's failure to prevent collisions, confirming direct harm caused by the AI system's use. Hence, the classification as AI Incident is appropriate.

Fatal ES8 Assisted Driving Crash! NIO: NOP Is Not Autonomous Driving; Investigation Underway

2021-08-15
驱动之家
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI-based driver assistance system that infers from sensor inputs to assist driving but requires driver supervision. The fatal crash occurred while this system was active, indicating the AI system's involvement in the event. The harm is direct (death of a person) and linked to the AI system's use. Although the manufacturer clarifies that NOP is not full autonomous driving, it is an AI system whose malfunction or misuse contributed to the incident. Therefore, this qualifies as an AI Incident under the definition of harm to a person caused directly or indirectly by the use of an AI system.

NIO in Trouble Again! Who Should Take the Blame for Assisted Driving?

2021-08-15
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the assisted driving (NOP) function in a NIO vehicle. The use of this AI system directly led to a fatal traffic accident, causing injury and death, which qualifies as harm to a person. The article details the malfunction or limitations of the AI system and its role in the incident. Therefore, this is an AI Incident as the AI system's use directly caused harm.

Driver Killed in NIO Assisted Driving Accident! A NIO Vice President Previously Demonstrated Eating in the Car While It Drove Itself

2021-08-15
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic driving assistance system (NOP pilot) that was active during a fatal crash. The harm (death) is directly linked to the use of this AI system. The article discusses the system's limitations and the necessity of driver attention, indicating that misuse or overreliance on the AI likely contributed to the incident. This meets the criteria for an AI Incident because the AI system's use directly led to injury and death, fulfilling the harm criteria (a).

Owners Publish Joint Statement on Their Understanding of the NOP System; NIO Responds: Not Officially Affiliated

2021-08-18
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an assisted driving AI system (NOP), which was in use at the time of a fatal traffic accident causing the death of the driver. This meets the definition of an AI Incident because the AI system's use directly led to harm to a person. The collective statement by car owners and the manufacturer's response provide context but do not negate the incident classification. The harm is realized and directly linked to the AI system's use, fulfilling the criteria for an AI Incident.

NIO Responds to Chain-Brand Founder's Fatal Crash; Li Bin Offers Condolences! Driving Data from the Accident Revealed

2021-08-15
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the driver was using the NIO Pilot automatic assisted driving system at the time of the fatal crash. The AI system's involvement is clear, and the harm (death of the driver) has occurred. Although the company clarifies that the system is not full autonomous driving, the assisted driving AI system was active and thus played a role in the incident. This meets the criteria for an AI Incident because the AI system's use directly led to injury and death, fulfilling the harm criteria (a).

Fatal NIO Assisted Driving Crash! The Person Who Recommended the Car Speaks Out: Deeply Guilt-Ridden; NIO, Please Don't Play Games

2021-08-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's Navigate on Pilot assisted driving) whose use directly led to a fatal accident, causing harm to a person. The AI system's malfunction or failure is implicated, and the investigation focuses on data from the AI system. This meets the criteria for an AI Incident as the AI system's use directly caused injury or death. The presence of the AI system is explicit, the harm is realized, and the event is not merely a potential hazard or complementary information but a concrete incident.

A NIO Owner Rejects the "NIO Owners' Statement": I Don't Want Others Speaking for Me

2021-08-18
驱动之家
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving a NIO vehicle using its NP/NOP assisted driving system, which is an AI system. The accident caused death and injury, fulfilling the harm criteria. The AI system's use is directly linked to the incident, even though the precise cause is still under investigation. The public and media discussion about the system's nature and the owners' statements further confirm the AI system's central role. Hence, this is an AI Incident as the AI system's use has directly led to harm.

NIO Owners Issue Joint Statement on Their Understanding of the NOP System; Over 500 Owners Have Signed

2021-08-18
驱动之家
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI-based assisted driving system that requires driver supervision. The fatal accident occurred while the system was engaged, indicating the AI system's involvement in the incident. The harm (death of a person) has directly resulted from the use of this AI system. The joint statement by users clarifies the system's intended use but does not negate the fact that the AI system's use led to a fatal incident. Hence, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Two Deaths in One Month! NIO ES8 Assisted Driving Claims a Business Leader; Gruesome Scene

2021-08-14
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the NIO ES8 was operating in automatic driving mode (NOP navigation state) at the time of the fatal crash, indicating AI system involvement in the use phase. The crash caused death, fulfilling the harm criterion. The second accident, while less detailed about AI involvement, involves the same brand and similar context, reinforcing concerns about AI system safety. Therefore, these are AI Incidents due to direct harm caused by the AI system's use.

Meiyihao Founder Lin Wenqin Dies in NIO ES8 Accident; Founded Multiple Brands During His Life

2021-08-14
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the driver was using the NIO ES8's autonomous driving function when the accident occurred, resulting in the driver's death. This shows direct involvement of an AI system (autonomous driving) in causing harm to a person. The harm is realized and severe (fatality), meeting the criteria for an AI Incident. The mention of other accidents involving the same vehicle model further supports the relevance of the AI system's role in safety risks.

31-Year-Old Entrepreneur Dies in NIO ES8 Highway Crash; NIO: Assisted Driving ≠ Autonomous Driving

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's assisted driving mode) whose malfunction (failure to detect and brake for a construction vehicle) directly led to a fatal injury, fulfilling the criteria for an AI Incident. The system is not fully autonomous but provides advanced AI-based assistance, and its failure caused direct harm. Therefore, this is classified as an AI Incident.

The "Autonomous Driving" Hyped by Tesla, NIO, and Others: Story or Accident?

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NIO vehicle's Navigate on Pilot driver assistance system) that was active at the time of a fatal traffic accident. The system is an AI-enabled driver assistance technology that influences vehicle control. The harm (death of the driver) directly resulted from the use of this AI system. The article also discusses the broader context of misleading marketing and the risks of overreliance on such systems, but the key point is the realized harm linked to the AI system's use. Hence, it meets the criteria for an AI Incident as the AI system's use directly led to injury and death.

31-Year-Old Founder Dies While Driving a NIO with NOP Engaged; NIO: No Data Was Deleted or Altered

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically the NOP autonomous driving feature, which is an AI-enabled driver assistance system. The use of this AI system directly led to a fatal traffic accident, causing harm to a person. Therefore, this qualifies as an AI Incident because the AI system's use directly resulted in injury and death. The company's cooperation and data handling are complementary details but do not change the classification of the event as an AI Incident.

Two Fatal Accidents in 15 Days! Post-90s Entrepreneur Dies on the Highway in a NIO on "Autonomous Driving"

2021-08-16
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The NIO Pilot system is an AI-based advanced driver assistance system that controls vehicle functions such as lane keeping and adaptive cruise control. The accident occurred while the system was active, and the driver relied on it. The fatal crash and resulting death constitute injury or harm to a person caused directly or indirectly by the AI system's use. Therefore, this event meets the criteria for an AI Incident due to the realized harm caused by the AI system's involvement in the accident.

Time to Reflect on Autonomous Driving: Is the Road to Success Paved with Deaths?

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NIO Navigate on Pilot (NOP) autonomous driving assistance system, which was active during the fatal crash. The article details how the AI system failed to detect static obstacles and did not engage emergency braking, leading directly to the driver's death. The harm is a fatal injury to a person, which fits the definition of an AI Incident. The article also discusses systemic issues with current autonomous driving AI systems, including sensor limitations, algorithmic challenges, and overreliance by users, reinforcing the direct causation link. Hence, the classification as AI Incident is appropriate.

Frequent "Autonomous Driving" Accidents: NIO Mired in the Same Predicament as Tesla

2021-08-18
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as automatic driving or advanced driver-assistance systems that have malfunctioned or been misused, resulting in multiple fatal accidents and severe injuries. The harm to human life is direct and significant. The article also discusses the overreliance on these AI systems due to misleading marketing, which indirectly contributes to the harm. Given the realized harm to persons caused by the development, use, and malfunction of these AI systems, this qualifies as an AI Incident under the OECD framework.

Industry Insiders: Consumers Shouldn't Pay for Autonomous Driving's Trial and Error; Feature Activation Should Be Restricted

2021-08-18
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of L2 assisted driving and L4 autonomous driving systems. It reports on actual fatal accidents linked to the use of L2 assisted driving systems, indicating direct harm to human health and life. The AI system's malfunction or misuse (e.g., enabling assisted driving in inappropriate conditions without sufficient driver attention) is a contributing factor to these harms. The discussion about regulatory gaps and the unfairness of passing trial-and-error risks to consumers further supports the classification as an AI Incident. The presence of realized harm (fatal accidents) caused directly or indirectly by AI system use meets the criteria for an AI Incident rather than a hazard or complementary information.

WM Motor Founder on Frequent Assisted Driving Accidents: At L2, the Driver Bears Primary Responsibility

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems in the form of L2 and L4 autonomous driving functionalities. It discusses the use and responsibility aspects of these AI systems but does not report any specific accident or harm caused by the AI systems themselves. Instead, it provides commentary on responsibility attribution and design choices to improve safety. There is no direct or indirect report of harm or incident caused by the AI systems, nor a specific plausible future harm event described. Therefore, this is complementary information providing context and governance perspective on AI system use and safety, rather than reporting an AI Incident or AI Hazard.

Who Is to Blame for the 31-Year-Old Entrepreneur's Fatal Crash? The Owner's Friend Speaks Out

2021-08-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving a vehicle equipped with an AI-based assisted driving system (NOP). The system was active at the time of the crash, and the driver died. The AI system's involvement is explicit and central to the incident, as the system's capabilities and limitations are under scrutiny, and the accident's cause may relate to the AI system's function or misuse. The harm (death) has materialized, fulfilling the criteria for an AI Incident. The ongoing investigation and data withholding issues further emphasize the AI system's pivotal role in the harm. Thus, this is not merely a hazard or complementary information but a clear AI Incident.

NIO After Two Fatal Accidents: Is It Still the "User Enterprise" It Once Was?

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's Navigate on Pilot (NOP) system, an AI-based driver assistance system, during the fatal accidents. The accidents resulted in deaths, a direct harm to persons, and the AI system's malfunction or limitations contributed to these harms. Although the company states that NOP is not a fully autonomous driving system and requires driver attention, the system's involvement was pivotal in the incidents. Hence, the event meets the criteria for an AI Incident as defined by the framework.

Lawyer in Assisted-Driving Crash Case Says Technicians Accessed the Vehicle Without Authorization; NIO: Looking Into It Internally

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (the NOP assisted driving system) that was in use at the time of a fatal accident, fulfilling the criteria for an AI system's involvement. The harm (death of the driver) has occurred, making this an AI Incident rather than a hazard. The unauthorized access and potential tampering with vehicle data by the manufacturer's staff is relevant to the investigation but does not negate the AI system's role in the incident. The AI system's inability to detect static obstacles and the requirement for driver oversight are central to understanding the cause and responsibility. Therefore, this event meets the definition of an AI Incident due to the direct link between the AI system's use and the harm caused.

Fatal NIO Assisted-Driving Crash! The Friend Who Recommended the Car Speaks Out: Wracked With Guilt

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Navigate on Pilot (NOP) assisted driving feature, which is an AI-based driver assistance system. The fatal accident occurred while this system was in use, directly causing harm (death). The involvement of the AI system in the development, use, or malfunction leading to harm meets the criteria for an AI Incident. The ongoing investigation and data extraction issues further support the significance of the AI system's role in the incident. Hence, this is classified as an AI Incident.

Well-Known Company Founder Dies in Car Crash! Two NIO Drivers Killed in Accidents Within One Month

2021-08-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's NOP driver assistance system, which is an AI-based technology, at the time of the accidents. The accidents led to fatalities, which is a direct harm to persons. The AI system's involvement is clear and causally linked to the incidents, as the vehicles were operating with the AI-assisted driving feature engaged. The harm is realized and significant, meeting the definition of an AI Incident. The company's statements and ongoing investigations do not negate the AI system's role in the accidents. Hence, the event is classified as an AI Incident.

Behind NIO's Fatal L2 Accident: Why Can't the System Avoid Stationary Vehicles?

2021-08-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Pilot/NOP L2 autonomous driving system, which was active during the accident. The system's failure to detect a slow or stationary vehicle directly led to a fatal collision, causing injury and death, which fits the definition of an AI Incident. The harm is realized and directly linked to the AI system's malfunction and use. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Safety in Question! US Opens Formal Investigation Into Tesla's Autopilot System

2021-08-16
华尔街见闻
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot system qualifies as an AI system because it performs autonomous driving tasks such as lane keeping and traffic-aware cruise control. The investigation is due to multiple accidents where the system was engaged and contributed to collisions causing injuries and deaths, fulfilling the criteria for an AI Incident. The harm is direct and materialized, involving injury and fatality, and the AI system's malfunction or misuse is a contributing factor. Therefore, this event is classified as an AI Incident.

NIO Responds to "Employees Accessed the Vehicle Without Authorization": No Data Deleted or Altered, No Employees Summoned

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving a vehicle operating under an AI-assisted navigation system (NOP). The AI system's use is directly linked to the harm (death). Additionally, the controversy over unauthorized access to the vehicle and potential data tampering relates to legal and investigative integrity, which falls under violations of obligations under applicable law. NIO's denial does not negate the incident classification since the accident and AI involvement are factual. Hence, this is an AI Incident involving harm to a person and potential legal violations related to the AI system's data.

Will NIO Fall Into a "Tesla-Style" Crisis?

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's Navigate on Pilot assisted driving system) whose use directly led to a fatal traffic accident, causing harm to a person (death). The system is an AI-based assisted driving technology that influences vehicle control and decision-making. The harm (fatal injury) is directly linked to the AI system's operation during the accident. Multiple similar incidents and concerns about the safety and marketing of such systems are discussed, reinforcing the classification as an AI Incident. The article also references regulatory responses, but the primary focus is on the realized harm caused by the AI system's use, not just potential or complementary information.

[Huxiu Morning Report] CCDI on the Zhang Zhehan Affair: Those Who Cross the Line Will Be Punished; NIO Owner Dies in Accident, Driving Data Surfaces

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (NIO Pilot autonomous driving system) during the fatal accident. The system was active (NOP mode) when the crash occurred, and there is speculation that the system may have failed to detect a road obstacle, contributing to the accident. This constitutes direct harm to a person (death) caused by the AI system's use or malfunction. Hence, it meets the criteria for an AI Incident as defined, involving injury or harm to a person directly linked to the AI system's operation.

Li Auto Founder Calls for a Unified Standard for Chinese Autonomous-Driving Terminology

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of autonomous driving levels (L2 to L5) and their use in vehicles. The fatal accident involving a user operating the assisted driving system (NOP) and resulting in death is a direct harm to a person caused indirectly by the AI system's use and misuse. The founder's call for standardized terminology is a response to prevent misunderstanding and misuse, which is complementary but the main event includes a fatal incident. Hence, the event meets the criteria for an AI Incident as the AI system's use and misuse have directly led to harm to a person.

Endless Scandals, Slumping Sales: What Has Happened to NIO?

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system: the NIO vehicle's automated driving assistance (NOP mode). The system's malfunction or failure to detect obstacles directly led to multiple traffic accidents, including a fatality, which constitutes injury or harm to a person. The harm is realized, not just potential. The company's insistence that it does not sell "automatic driving" does not negate the fact that the system's use and malfunction caused harm. The event meets the criteria for an AI Incident because the AI system's use and malfunction have directly or indirectly led to harm to people and property.

Do Ride-Hailing Drivers Have Only Five Years Left in Their "Careers"?

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an assisted driving feature (NOP) that took partial control of the vehicle. The fatal accident occurred while this AI system was engaged, indicating a malfunction or failure in its operation. This directly led to the death of a person, fulfilling the harm criterion (a) injury or harm to a person. The article also discusses the broader implications for autonomous driving technology and safety, but the core event is the fatal accident caused by the AI system's use. Hence, it meets the definition of an AI Incident rather than a hazard or complementary information.

半月谈: With "Autonomous Driving" Claims Everywhere, Can We Take Our Hands Off the Wheel?

2021-08-18
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of automatic driving assistance technology and discusses a fatal accident under investigation, but it does not confirm that the AI system malfunctioned or directly caused the harm. It mainly provides analysis, clarifications, and recommendations regarding the technology's current capabilities and public understanding. There is no new AI Incident or AI Hazard reported, nor is there a focus on governance or societal response measures beyond general suggestions. Therefore, this article fits best as Complementary Information, enhancing understanding of AI systems and their societal context without reporting a new incident or hazard.

Putian Traffic Police on 31-Year-Old Entrepreneur's Death While Driving a NIO: Responsibility Will Be Determined According to Law

2021-08-18
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system—the NIO vehicle's autopilot (NOP) autonomous driving feature. The use of this AI system directly led to a fatal traffic accident causing death and injury, which qualifies as harm to persons. The investigation and the company's statement about data handling are complementary details but do not negate the fact that the AI system's use was involved in the incident. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use.

Behind the NIO Owner's Death: Is Autonomous Driving Being Marketed With Lives?

2021-08-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic driving assistance system (NOP) that was active during the fatal accident. The system's inability to detect static obstacles (a known technical limitation) directly led to the collision and death of the driver, fulfilling the criteria for harm to a person. The article also highlights misleading marketing practices that may have caused overreliance on the system, further contributing indirectly to the harm. Therefore, this qualifies as an AI Incident due to the direct and indirect causal role of the AI system's malfunction and use in causing injury and death.

Is Autonomous Driving Smart? These 17 Cars Will Drive With No One in the Driver's Seat

2021-08-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The driver assistance systems described are AI systems that perform autonomous or semi-autonomous driving tasks such as adaptive cruise control and lane centering. The tests reveal that these AI systems can be fooled or bypassed, allowing the vehicle to operate without a driver present or attentive, which directly creates a risk of harm to people. Since the misuse has been demonstrated and the systems failed to prevent it, this constitutes an AI Incident involving indirect harm through misuse of AI systems leading to potential or actual safety hazards. The article reports on actual tests and outcomes, not just potential risks, so it is not merely an AI Hazard or Complementary Information.

US Probes Autopilot Safety, Putting Tesla's AI Day in an Awkward Spot

2021-08-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot is an AI system that controls vehicle driving functions. The reported accidents and fatalities directly involve the use of this AI system, indicating harm to persons. The investigation and potential recall stem from these harms caused by the AI system's malfunction or limitations. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to injury and death, fulfilling the criteria for harm to persons under the AI Incident definition.

The Ongoing Standoff Between the Victim's Side and NIO

2021-08-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves a fatal accident caused by the use of NIO's assisted driving AI system, which directly led to harm (death). The article explicitly discusses the AI system's involvement, the legal and public disputes about data tampering, and the company's response to the incident. This meets the criteria for an AI Incident because the AI system's use has directly led to injury or harm to a person. The ongoing dispute and reputational damage further underscore the incident's significance. Hence, the classification as AI Incident is appropriate.

NIO's "Tesla-Style" Predicament

2021-08-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (NIO Pilot, Tesla Autopilot/FSD) being used during fatal crashes, with the AI systems playing a direct role in the incidents. The harm (death of drivers) has occurred and is linked to the AI systems' malfunction or limitations. The article also references regulatory investigations, which further confirms the AI systems' involvement in causing harm. Hence, this event qualifies as an AI Incident due to direct harm caused by AI system use and malfunction.

Heartbreaking! Did Navigate on Pilot Cause the Well-Known Entrepreneur's Fatal Crash? Automaker: Investigation Underway

2021-08-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the Navigate on Pilot (NOP) driver assistance feature, which is an AI-based system that assists driving by navigating and controlling the vehicle under certain conditions. The fatal accident occurred while this system was active, and the harm (death of the driver) is directly linked to the use of this AI system. Although the investigation is ongoing, the AI system's involvement in the accident is central to the event. The harm is materialized and severe (fatal injury), meeting the criteria for an AI Incident under the definition of injury or harm to a person caused directly or indirectly by the AI system's use or malfunction. The event is not merely a potential hazard or complementary information but a concrete incident with serious consequences.

Does the NIO Accident Shatter the Autonomous-Driving Fantasy? New MIIT Rules Spell Out Automakers' Obligations

2021-08-18
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic driving assistance system (NOP mode) that was active during a fatal traffic accident causing death and injury. The article details how the AI system's technical limitations and the overtrust by the user contributed to the harm. The new regulations issued by the MIIT are a response to this incident and broader industry issues. Since the AI system's use directly or indirectly led to harm to a person, this qualifies as an AI Incident under the OECD framework. The article also discusses regulatory responses, but the primary focus is the incident and its consequences, not just complementary information.

Man Dies in NIO Crash; NIO Says It Is Investigating; Will the Driverless Sector Take a Hit?

2021-08-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The NIO ES8's Navigate on Pilot system is an AI-based assisted driving system. The fatal accident occurred while this system was activated, directly linking the AI system's use to a serious harm (death). This meets the definition of an AI Incident because the AI system's use has directly led to injury or harm to a person. The article also mentions ongoing investigations and company responses, but these are secondary to the primary incident. Therefore, the event is classified as an AI Incident rather than a hazard or complementary information.

NIO: Working With the Power Disconnected Will Not Cause Data Loss

2021-08-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred to be involved because modern NIO vehicles typically incorporate AI systems for driving assistance, data logging, and safety features. The event involves the use, and apparent malfunction, of such a vehicle in a collision that led to a fatality (harm to a person). The data extraction and investigation relate to the AI system's operation and performance. Therefore, the AI system's use and malfunction directly led to harm (a fatal accident). This qualifies as an AI Incident under the framework, as the AI system's involvement is central to the event and the harm.

NIO's First Fatal Crash! Entrepreneur Owner Dies in Highway Accident, Smart-Driving Capability Questioned Again

2021-08-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NIO Pilot's Navigate on Pilot feature) in use at the time of a fatal traffic accident, which directly caused harm (death of a person). The AI system's malfunction or failure to prevent the accident is central to the incident. This meets the definition of an AI Incident because the development and use of the AI system directly led to injury and death (harm to a person). The article also discusses regulatory responses, but the primary focus is the fatal accident linked to the AI system's operation, confirming classification as an AI Incident rather than a hazard or complementary information.

Lessons From the NIO Accident: Autonomous Driving, Exaggerated and Misleading

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's automatic driving assistance system (NOP), an AI system designed to assist driving tasks. The fatal accidents occurred while the system was active, indicating direct involvement of the AI system's use and its limitations in causing harm (death). The article also references similar incidents with Tesla's Autopilot, reinforcing the pattern of AI system malfunction or misuse leading to injury or death. The discussion of regulatory measures further confirms the recognition of these harms. Hence, this is a clear case of an AI Incident due to direct harm caused by AI system use and malfunction.

Young Entrepreneur Dies in Rear-End Collision After Engaging NIO EV's Self-Driving Feature

2021-08-17
早报
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system explicitly mentioned as the autonomous driving assistance feature of the NIO electric vehicle. The use of this AI system directly led to a fatal injury, fulfilling the criteria for an AI Incident as the AI system's malfunction or failure contributed to the harm. Therefore, this event qualifies as an AI Incident due to the direct causal link between the AI system's use and the fatal harm.

Don't Try to Challenge Human Nature: Give Up the Fantasy of Autonomous Driving

2021-08-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of assisted driving features that have been linked to fatal accidents, causing injury and death (harm to persons). The AI systems' limitations in recognizing obstacles and the human factors of overreliance and misuse are central to the incidents. The involvement of regulatory bodies and investigations further confirms the significance of the harm caused. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the use and malfunction of AI systems in assisted driving.

Brand Founder Dies in Crash! Lawyer: NIO Staff Summoned for Accessing the Accident Vehicle Without Authorization

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of an AI system (NIO Pilot autonomous driving feature) during a fatal car accident, which caused the death of the driver. This meets the criteria for an AI Incident because the AI system's use directly led to harm (death). The investigation into possible tampering with vehicle data by the manufacturer's staff further underscores the AI system's central role in the incident. Hence, the event is classified as an AI Incident.

Four Major Controversies: 31-Year-Old Entrepreneur Dies in NIO Crash. Was "Autonomous Driving" the Cause? Who Is Responsible?

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOP, an AI-assisted driving system) actively used during a fatal car crash causing injury and death, which fits the definition of an AI Incident. The AI system's malfunction or limitations, or its interaction with the driver, are central to the harm caused. The article details the direct involvement of the AI system in the accident, the ongoing investigation, and the controversy over the system's capabilities and responsibilities. This is not merely a potential risk or complementary information but a realized harm linked to AI system use.

Li Xiang Proposes a Unified Standard for Autonomous-Driving Terms; Zhou Hongyi Responds: Automakers Should Not Over-Market

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous driving technologies) and their use, but it does not describe any actual harm or malfunction caused by these systems. There is no direct or indirect harm reported, nor is there a credible risk of imminent harm described. The discussion is about terminology standardization and marketing ethics to prevent user confusion, which is a governance and societal response to AI deployment. Therefore, this event fits the category of Complementary Information as it provides context and response to AI system deployment without reporting an incident or hazard.

Two NIO Drivers Killed in Accidents in One Month: Should "NOP Automatic Assistance" Take the Blame?

2021-08-15
The Paper
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI-based assisted driving system that infers from sensor inputs to provide navigation and driving assistance. The accidents occurred while this system was engaged, leading to fatalities and property damage. This constitutes direct harm caused by the use or malfunction of an AI system. The article discusses the system's role and the ongoing investigations, indicating the AI system's involvement in the harm. Hence, this is an AI Incident due to realized harm linked to the AI system's use.

After NIO's Fatal Accident, May the "L2.5" Labels Die Off Too

2021-08-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (automated driving assistance system) whose malfunction directly caused a fatal accident, fulfilling the criteria for an AI Incident due to harm to a person. The article explicitly describes the AI system's failure to detect obstacles leading to the crash and death. The discussion of over-marketing and regulatory responses supports understanding but does not overshadow the primary incident. Therefore, the classification is AI Incident.

Latest Developments: NIO Technicians Summoned by Police for Accessing Lin Wenqin's Crashed Vehicle Without Authorization

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving a vehicle using an AI-based assisted driving system (NOP). The AI system's failure to recognize an obstacle and the subsequent crash caused injury and death, fulfilling the criteria for harm to persons. The involvement of the AI system is explicit, and the investigation into data tampering by the manufacturer's staff further underscores the AI system's central role in the incident. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use directly led to significant harm.

Eye of the Storm | Behind NIO's Fatal Crash: Automakers Ignore the Boundaries of Assisted Driving

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's NOP assisted driving system) whose use has directly led to fatal and serious accidents, causing harm to human life. The system is an AI-based driver assistance technology that requires active driver supervision, but misleading marketing and user misunderstanding have contributed to misuse and accidents. The harm is realized and directly linked to the AI system's role in vehicle control and safety. Therefore, this qualifies as an AI Incident under the framework, as it involves injury and harm to persons caused by the development and use of an AI system.

NIO's Trial by Fire: Autonomous Driving Crosses the Life-and-Death Line

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an advanced driver assistance system with autonomous navigation capabilities (NOP). The fatal accident occurred while the system was active, and the article highlights how reliance on this AI system and its technological limitations contributed to the harm (death of the driver). This meets the criteria for an AI Incident because the AI system's use directly led to injury and death. The article also discusses broader implications and challenges but the primary focus is on the realized harm caused by the AI system's involvement in the accident.

A Wake-Up Call for New Energy Vehicles: Exaggerated Marketing Urgently Needs Correcting

2021-08-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: the driver assistance (autopilot) features in new energy vehicles, which are AI systems that infer from sensor inputs to assist driving. The fatal accident directly involved the use of such an AI system, and the article documents multiple cases where these AI systems' limitations or malfunctions have led to injury or death, fulfilling the harm criteria. The article also discusses misleading marketing that causes users to overtrust these AI systems, indirectly contributing to harm. Hence, the AI system's use and malfunction have directly or indirectly led to harm to persons, meeting the definition of an AI Incident. The article also calls for regulatory intervention to address these harms and prevent further incidents.

【钛晨报】Meiyihao Founder Dies in NIO ES8 Crash; Police Report on the "Alibaba Female Employee Assault" Case: Two Suspected of Forcible Molestation; Honor CEO Zhao Ming: We Will Win Back Honor's Share of the High-End Phone Market

2021-08-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Nio's Navigate on Pilot autonomous driving feature) that was active during the fatal crash. The death of the driver is a direct harm caused during the use of this AI system. The AI system's role is pivotal as it was controlling the vehicle at the time of the accident. This meets the definition of an AI Incident because the AI system's use directly led to injury or harm to a person. The report does not merely warn of potential harm but documents an actual fatality linked to the AI system's operation.

Entrepreneur "萌剑客" Dies in Car Crash; NIO Responds: Investigation Underway

2021-08-14
The Paper
Why's our monitor labelling this an incident or hazard?
The NIO ES8's Navigate on Pilot (NOP) is an AI-based driver assistance system that controls vehicle navigation and driving under certain conditions. The fatal accident occurred while this system was active, indicating the AI system's involvement in the incident. The harm (death of the driver) is a direct consequence linked to the AI system's use or malfunction. The article also discusses ongoing investigations and regulatory attention to AI driving safety, but the primary event is the realized harm caused by the AI system's operation. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

[Editorial] "Autonomous Driving" Will Have to Wait a While Longer

2021-08-16
The Paper
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of an automated driving assistance system that was active during a fatal car crash. The AI system's use (driver assistance/autonomous driving features) directly contributed to the harm (death of the driver). The article clearly links the AI system's malfunction or limitations and the misuse or misunderstanding of its capabilities to the incident. Therefore, this qualifies as an AI Incident because the AI system's use directly led to injury and death, fulfilling the criteria for harm to persons.

Observation | Three Questions About "Autonomous Driving": Over-Marketing? Safety Boundaries? Driver Education?

2021-08-17
The Paper
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically assisted driving and autonomous driving technologies (e.g., NIO's NOP system, L2/L3 level driving assistance). It discusses real incidents where these systems' use or misuse has led to fatal accidents, indicating direct or indirect harm to human health. The discussion about over-marketing and misleading terminology contributes to user misunderstanding and misuse, which is a factor in these harms. The article also addresses the safety boundaries and the need for clear communication and education to prevent further harm. Hence, this qualifies as an AI Incident because the AI system's use and potential malfunction or misuse have directly or indirectly caused harm (fatal car crashes) and raise significant safety and rights concerns.

NIO Owner Dies in Accident; Autonomous Driving Embroiled in Controversy Again

2021-08-17
caixin.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the vehicle's automatic driving function was enabled during the fatal accident, indicating the involvement of an AI system. The harm (death of the driver) has occurred, fulfilling the criteria for injury or harm to a person. The AI system's malfunction or failure to prevent the collision is directly connected to the harm. Hence, this event meets the definition of an AI Incident.

T早报 | Wingtech Completes Transfer of UK Wafer Fab Acquisition; NIO Owner Dies in Crash, Possibly With Assisted Driving Engaged; 4Paradigm Files IPO Prospectus

2021-08-16
caixin.com
Why's our monitor labelling this an incident or hazard?
The NOP pilot feature is an AI system providing assisted driving capabilities. The accident and resulting death occurred while this AI system was in use, indicating the AI system's involvement in the incident. The harm is direct and severe (fatality). Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to injury or harm to a person.

31-Year-Old Founder Dies in NIO ES8 Crash; Driving Data Surfaces; Tesla and XPeng Offer Similar Features

2021-08-16
Techweb
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO Pilot with NOP feature) actively used during the accident, which directly caused harm (death of the driver). The system's inability to detect static obstacles and the fatal crash demonstrate a malfunction or limitation of the AI system leading to injury and death, fulfilling the criteria for an AI Incident. The detailed data on system usage and the fatal outcome confirm direct harm. Although regulatory responses are mentioned, they serve as complementary context and do not overshadow the primary incident. Hence, the classification is AI Incident.

NIO Staff Summoned for Accessing the Accident Vehicle Without Authorization: Contact Was Made Without Consent

2021-08-16
Techweb
Why's our monitor labelling this an incident or hazard?
The event describes a fatal accident involving an AI system (NIO's autonomous driving feature) that directly led to harm (death). The subsequent unauthorized access by company personnel to the accident vehicle and potential data tampering further implicates the AI system's role in the incident and legal responsibility. Therefore, this qualifies as an AI Incident due to realized harm and misuse related to the AI system.

Fatal NIO NOP Crash: Vice President Shen Fei May Face an Even Bigger Crisis

2021-08-15
Techweb
Why's our monitor labelling this an incident or hazard?
The NIO NOP is an AI system providing assisted driving capabilities. The fatal accident directly involves the use of this AI system, which has led to injury and death, fulfilling the criteria for an AI Incident. The involvement of the AI system in the accident is explicit, and the harm (death) has occurred. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

NIO's Biggest Crisis: Two Fatal Accidents in a Row, With Multiple Owners Reporting Crashes While Using Automatic Assistance

2021-08-16
Techweb
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of NIO's automated driving assistance features (NOP and NP), which are AI systems that infer from input (sensor data, maps) to generate driving decisions. The accidents described have resulted in fatalities and injuries, fulfilling the harm criteria. The AI system's failure to detect obstacles and the overreliance by drivers on these systems are direct contributing factors to the accidents. This meets the definition of an AI Incident, as the AI system's use and malfunction have directly led to harm to persons. The article also discusses the broader implications and responses but the primary focus is on the incidents themselves.

In Fatal Crashes Caused by Autonomous Driving, No Party Is Innocent

2021-08-17
huxiu.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as L2 level assisted driving, which is an AI system by definition. The fatal crash directly resulted from the use of this AI system, with the article discussing the system's limitations and the driver's role. The harm (death) has occurred, fulfilling the criteria for an AI Incident. The article also discusses systemic issues in the industry and regulatory environment but the core event is a realized harm caused directly or indirectly by the AI system's use and limitations. Hence, the classification is AI Incident.

A 31-Year-Old Entrepreneur Dies in a NIO Vehicle: Who Is to Blame?

2021-08-15
huxiu.com
Why's our monitor labelling this an incident or hazard?
The NIO Pilot system is an AI system providing assisted driving functions such as adaptive cruise control, lane keeping, and navigation-based pilot assistance. The accidents described involved the system being active and failing to detect static obstacles, leading to collisions causing fatalities and injuries. This meets the definition of an AI Incident because the AI system's use and malfunction directly led to injury and harm to persons. The article also discusses systemic issues such as inadequate user education and corporate responsibility, reinforcing the direct link between AI system use and harm. Hence, the event is classified as an AI Incident.

Experts Interpret the NIO Accident: How Should Liability Be Determined?

2021-08-15
汽车之家(Autohome.com.cn)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the Navigate on Pilot (NOP) L2 driver-assistance system. The system was active during the fatal accident, and the harm (death of the driver) has occurred. Experts suggest the accident likely resulted from user overreliance or misuse rather than a system malfunction, but the AI system's involvement in the chain of events leading to harm is clear. This meets the definition of an AI Incident because the AI system's use has indirectly led to injury and death. The article does not merely discuss potential risks or responses but reports on a realized harm linked to AI system use. Hence, the classification is AI Incident.

2021-08-18
雪球
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in autonomous driving and their involvement in fatal accidents, which are AI Incidents. However, the article mainly summarizes known incidents, regulatory actions, and technological challenges without reporting new incidents or hazards. It focuses on the broader context and ongoing developments in AI for autonomous driving, including calls for theoretical breakthroughs and regulatory management. Therefore, it serves as Complementary Information by providing updates and context on previously reported AI Incidents and the AI ecosystem rather than describing a new AI Incident or AI Hazard.

Autonomous Driving: A Long Road Ahead

2021-08-15
雪球
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of assisted driving technologies, which are AI-based driver assistance systems. It discusses the use and misuse of these systems and the potential safety risks arising from overreliance or misunderstanding of their capabilities. Although no specific incident of harm is reported, the article clearly identifies plausible future harm (e.g., accidents or injuries) if users treat assisted driving as full autonomous driving. Therefore, this qualifies as an AI Hazard because it describes a credible risk of harm stemming from the use or misuse of AI systems in driving assistance.

For Consumers, Safety Is the Greatest Luxury

2021-08-15
雪球
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems in the form of advanced driver-assistance and semi-autonomous driving technologies (L2, L2+, NOP, NOA, NGP) that influence vehicle control. It discusses the development, marketing, and use of these AI systems and the associated risks, including driver overreliance and regulatory gaps. While it does not report a specific accident or harm caused by these systems, it warns of plausible future harms such as accidents and safety risks due to misuse or overtrust in immature AI driving technologies. This fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to injury or harm to people. There is no description of a realized harm or incident, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the risks and responsibilities tied to AI system use in vehicles.

Reconstructing the NIO ES8 Autonomous Driving Accident on the Shenhai Expressway: Suspected Failure to Recognize a Road Construction Vehicle Ahead Leads to Another Tragedy

2021-08-15
雪球
Why's our monitor labelling this an incident or hazard?
The NIO ES8's NOP system is an AI-based automated driving assistance system that was active at the time of the accident. The system failed to detect a stationary highway maintenance vehicle, a static obstacle, which is a known limitation of the system as per the user manual. This failure directly led to a fatal collision, causing the death of the driver. The involvement of the AI system in the development, use, and malfunction stages is clear, and the harm (fatal injury) is directly linked to the AI system's failure. Hence, this event meets the criteria for an AI Incident as defined by the framework.

Putian Traffic Police Report on the NIO Autonomous Driving Incident: Liability Will Be Determined According to Law

2021-08-18
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, namely the automatic driving assistance feature of the NIO vehicle, which malfunctioned by failing to detect an obstacle, causing a fatal accident. This meets the definition of an AI Incident because the AI system's malfunction directly led to injury and death, which is a clear harm to a person. The report also highlights concerns about the immaturity and safety risks of current autonomous driving technologies, reinforcing the direct link between the AI system's failure and the harm caused.

Putian Traffic Police Report on the NIO Autonomous Driving Incident: Investigation Underway

2021-08-18
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system, specifically the autonomous driving system of the electric vehicle. The use or malfunction of this AI system has directly led to physical harm (death and injury) and property damage, fulfilling the criteria for an AI Incident. The police investigation indicates the event is being formally examined, but the harm has already occurred due to the AI system's involvement in the accident.

China's NIO Hit by Successive Crashes, Drawing Criticism Over False Advertising

2021-08-17
看中国
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the driving assistance system with automatic emergency braking) whose malfunction or failure to act directly led to a fatal accident, causing harm to a person. The article also discusses misleading advertising that caused users to overtrust the system, an indirect factor contributing to the harm. This meets the criteria for an AI Incident because the AI system's use and malfunction have directly and indirectly caused injury and death, fulfilling harm criterion (a).

MIIT Issues Regulatory Provisions for Intelligent Connected Vehicles

2021-08-17
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
The article outlines new regulatory measures for AI-enabled intelligent connected vehicles, which is a governance response to potential risks associated with AI in automotive systems. There is no mention of any realized harm or incident caused by AI, nor a specific hazard event. Therefore, this is best classified as Complementary Information, as it provides important context and updates on societal and governance responses to AI developments in the automotive domain.

Who Is Liable When a Crash Occurs While Using Assisted Driving?

2021-08-16
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Level 2 assisted driving) that directly led to fatal accidents, which is harm to persons. The discussion about responsibility and the complexity of the AI system indicates the AI system's involvement in the incident. Since harm has occurred and is linked to the AI system's use, this fits the definition of an AI Incident rather than a hazard or complementary information. The article does not focus on future plausible harm or responses but on the actual incidents and their implications.

Fatal Failure of a Domestic Autonomous Driving System Again Raises Doubts About Chinese EV Safety

2021-08-17
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and suspected malfunction of an AI-based autonomous driving system (NOP navigation) in a fatal car crash, directly causing harm (death of the driver). This meets the definition of an AI Incident because the AI system's use directly led to injury and death. The article also discusses other related safety issues with the same brand's vehicles, reinforcing the pattern of harm linked to AI-enabled systems. The presence of the AI system is clear, the harm is realized, and the event is not speculative or about potential future harm, so it is not an AI Hazard. The focus is on the incident itself rather than on responses or broader ecosystem context, so it is not Complementary Information. Therefore, the correct classification is AI Incident.

NIO Responds to Entrepreneur's Fatal Crash: No Data Was Altered, No Employees Were Summoned

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The NIO ES8's autopilot feature is an AI system involved in autonomous driving. The fatal accident occurred while this AI system was active, directly leading to the death of the driver. The company's cooperation and denial of data tampering do not negate the fact that the AI system's use is linked to a fatal harm. Hence, this event meets the criteria for an AI Incident due to direct harm to a person's health caused during the use of an AI system.

Three Accidents in Eight Months: Has the Shenhai Expressway Become NIO's "Curse"?

2021-08-15
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOP assisted driving feature in the NIO ES8 vehicle. The system's use is directly linked to a fatal accident, fulfilling the criteria of an AI Incident where the AI system's malfunction or failure to detect obstacles led to injury and death. The article also references prior similar incidents involving the same AI system, reinforcing the connection between the AI system and harm. Although the investigation is ongoing, the direct causal link between the AI system's failure and the fatal harm is clear. Hence, this is classified as an AI Incident.

NIO Says No Employees Were Summoned by Police: Now That the Incident Has Happened, How Will It Affect NIO?

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's Navigation on Pilot, an AI driver assistance system) whose use directly led to a fatal traffic accident, causing harm to a person. The company's denial of data tampering and employee police summons does not negate the fact that the AI system was active and involved in the incident. The harm (death) has occurred, and the AI system's malfunction or failure to prevent the accident is central to the event. Therefore, this qualifies as an AI Incident.

NIO Says No Employees Were Summoned by Police! Who Is Responsible for the Accident? WM Motor's Shen Hui Weighs In

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system: the L2 autonomous driving assistance system in the NIO vehicle. The system was in use at the time of the fatal accident, and the AI system's involvement directly or indirectly led to harm (death of the driver). The discussion about responsibility and the system's capabilities confirms the AI system's role in the incident. Hence, this is an AI Incident as per the definitions, since the AI system's use led to injury and death, fulfilling harm criterion (a).

Meiyihao Founder Lin Wenqin Dies in an Accident While Driving a NIO ES8

2021-08-14
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of an AI system (the NOP autonomous driving feature) in the vehicle at the time of the accident. The use of this AI system directly led to a fatal incident, which qualifies as injury or harm to a person. Therefore, this is an AI Incident as per the definitions provided.

31-Year-Old Entrepreneur Dies in NIO Crash! Driving Data Revealed: Navigate on Pilot Accounted for 80% of Mileage

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the NIO Navigate on Pilot driver assistance system) whose use directly led to a fatal accident. The system is an AI-based advanced driver assistance system (Level 2), and the deceased was using it extensively during the trip. The harm (death) has occurred, and the AI system's role is pivotal, either through malfunction, overreliance, or misunderstanding of its capabilities. The ongoing investigation and data disputes further highlight the AI system's central role in the incident. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

NIO's Repeated Accidents, China Telecom's Large-Scale Subscription Withdrawals, Qinghai Salt Lake's Big-Money Maneuvers | Zhengjing Briefing

2021-08-15
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of autonomous driving AI systems in NIO vehicles that were active during fatal and severe accidents, directly causing harm to human life and health. This meets the definition of an AI Incident due to the AI system's malfunction or failure leading to injury and death. The other parts of the article about stock market activities do not involve AI systems or harms and are unrelated to the AI Incident classification.

Lawyer for Lin's Crash Case: NIO Staff's Unauthorized Access to the Accident Vehicle Was Unlawful

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NIO Pilot automatic driving assistance) active during a fatal accident, which directly led to harm (death). The investigation focuses on whether the AI system met safety standards and whether data was tampered with, which are related to the AI system's use and potential malfunction. The unauthorized access to the vehicle by company staff also raises legal and ethical concerns. Given the direct link between the AI system's operation and the fatal harm, this is classified as an AI Incident.

31-Year-Old Entrepreneur Dies in a Crash While Driving a NIO ES8: Has Autonomous Driving Been Overhyped?

2021-08-15
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of an advanced driver-assistance system (NOP) that was active during a fatal car crash. The AI system's use is directly linked to the incident, as the assisted driving feature was enabled and the accident occurred under its operation. The article indicates that the system's limitations and potential malfunction or misuse contributed to the harm (death of the driver). Therefore, this qualifies as an AI Incident because the AI system's use directly led to injury and death. The article also discusses broader regulatory and safety issues but the primary event is the fatal accident involving the AI system's assisted driving function.

After the NIO Crash, Tesla Faces a US Regulatory Investigation Over Autopilot! Shares Drop Over 4%

2021-08-17
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of Tesla's Autopilot and NIO's NOP advanced driver-assistance systems. These systems are AI-enabled and perform complex driving tasks such as navigation, lane changes, and adaptive cruise control. The reported crashes, injuries, and death are direct harms linked to the use or misuse of these AI systems. The ongoing official investigations by regulatory authorities further confirm the significance of these harms. Therefore, the event meets the criteria for an AI Incident because the AI systems' use and malfunction have directly led to injury and death, which is harm to persons as defined in the framework.

Putian Traffic Police Report on the NIO Autonomous Driving Incident: Investigation Underway, Liability to Be Determined According to Law

2021-08-18
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the autonomous driving feature of the NIO vehicle. The use of this AI system directly led to a fatal traffic accident causing death and injury, which qualifies as harm to persons. The police investigation and responsibility determination are ongoing, but the harm has already occurred. Therefore, this event meets the criteria for an AI Incident due to the direct involvement of an AI system in causing injury and death.

Lawyer for Lin's Crash Case: Police Summoned NIO Personnel for Privately Accessing the Accident Vehicle

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving a vehicle operating with an AI-based autopilot system. The AI system's use is explicitly mentioned (automatic driving function, NOP mode). The harm (death) has occurred, fulfilling the criteria for an AI Incident. The police investigation into unauthorized access and possible data tampering further indicates the AI system's role in the incident. Hence, the event meets the definition of an AI Incident as the AI system's use and potential malfunction or misuse have directly or indirectly led to harm.

Prominent 31-Year-Old Entrepreneur Dies in NIO Crash; Police Issue Report

2021-08-18
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of the NIO Pilot (NOP) driver assistance feature, an AI-enabled system providing navigation assistance. This system was active during the accident, which directly led to a fatal injury (harm to a person). Although the system does not provide full autonomous driving, its role as a driver assistance system contributing to the accident qualifies it as an AI system whose use led to harm. Therefore, this is an AI Incident due to the direct link between the AI system's use and the fatal accident.

Putian Traffic Police Report on the NIO Autonomous Driving Incident

2021-08-18
和讯网
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the involvement of an autonomous driving system in a traffic accident causing death and injury. Autonomous driving systems are AI systems that make real-time decisions affecting vehicle operation. The harm (fatality and injury) directly resulted from the use of this AI system. Hence, this is an AI Incident due to the direct link between the AI system's use and the harm caused.

Weekend News: 31-Year-Old Founder Dies in a Crash While Driving a NIO ES8

2021-08-15
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the automatic driving feature of the NIO ES8) that was in use at the time of a fatal car crash, directly causing harm to a person. This fits the definition of an AI Incident because the AI system's use directly led to injury and death. The article does not merely discuss potential risks or future hazards, nor is it a general update or complementary information. Therefore, it qualifies as an AI Incident.

No More Denial: Li Bin and NIO Are Also Entering the Lower-Tier Market "Beyond the Fifth Ring Road"

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes fatal accidents involving NIO's autonomous driving system, which is an AI system. The harm (deaths) has already occurred and is directly linked to the use and malfunction of the AI system. This meets the definition of an AI Incident because the AI system's malfunction directly caused injury and death. The article also discusses the company's financial and strategic responses but the core event is the fatal accidents caused by the AI system's failure.

The Autonomous Driving Touted by Tesla, NIO and the Like: Story or Accident?

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an advanced driver-assistance system with autonomous navigation features (NOP mode). The system was in use at the time of a fatal accident, directly linking the AI system's use to harm (death of the driver). The article also discusses the overpromising and misunderstanding of the system's capabilities, which contributed to the misuse and accident. This fits the definition of an AI Incident, as the AI system's use directly led to injury or harm to a person. The article does not merely warn of potential harm or discuss responses but reports an actual fatal incident involving AI system use.

PatSnap: NIO's Intelligent Driving Patents Account for 3% of Its Valid Patents

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article references an AI system (NIO's automatic assisted driving navigation) potentially involved in a fatal accident, indicating AI system involvement. However, the harm (death) is not definitively attributed to the AI system; the cause is speculative. The rest of the article focuses on patent data and company plans, which do not describe new harm or plausible future harm. Since the article mainly provides supporting information about AI system development and a possible incident without confirming causation or imminent risk, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

NIO's "Assisted Driving" Sparks Controversy; Multiple Parties to Conduct a Joint Appraisal

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI-based assisted driving system that integrates navigation, high-precision maps, and automated driving assistance functions. The accident was caused when the system failed to detect a stationary maintenance vehicle, leading to a fatal collision. This is a direct harm to a person caused by the use and malfunction of an AI system. The involvement of the AI system is explicit and central to the incident. The event meets the criteria for an AI Incident as it involves injury/harm to a person directly linked to the AI system's use and malfunction.

Official Report on the "NIO Automated Assisted Driving Crash": Collision With a Construction Vehicle Ahead Caused the Death

2021-08-18
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of an AI system (NIO's automatic driving assistance) during the fatal crash. The harm (death of the driver and injury to another person) has occurred and is directly linked to the use of the AI system in the vehicle. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm. The ongoing investigation and disputes about data handling do not change the classification but provide context. Hence, the event is best classified as an AI Incident.

NIO's "Autonomous Driving" in Trouble Again! A Well-Known Entrepreneur Dies

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Pilot assisted driving system, which is an AI-enabled driver assistance technology. The use of this system directly led to a fatal traffic accident causing the death of the driver, which constitutes injury or harm to a person. The article discusses the system's role and the accident's circumstances, confirming the AI system's involvement in the harm. Hence, this is an AI Incident as per the definitions provided, since the AI system's use directly led to harm.

Treat Autonomous Driving With Respect, and Let NIO and Its Peers Show Restraint in Marketing

2021-08-17
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, namely the assisted driving system (NOP) in the NIO vehicle, which is an AI-based driver assistance technology. The use of this system directly led to a fatal traffic accident, causing harm to a person (the driver). The article discusses the malfunction or misuse of the AI system, the legal implications, and the responsibility of the manufacturer. This fits the definition of an AI Incident because the AI system's use directly led to injury and death. The discussion about misleading marketing and the need for caution further supports the classification as an incident rather than a hazard or complementary information.

Fatal NIO Highway Assisted Driving Crash! Netizens Reveal a Tesla Driver Playing Honor of Kings While on Highway Autopilot

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The NIO accident involved the use of an AI-based assisted driving system (NOP), which is a form of AI system that influences vehicle control. The fatal crash directly resulted from the use of this system, constituting an AI Incident due to injury and death. The Tesla case, while no accident is reported, shows dangerous misuse of an AI-assisted driving system that could plausibly lead to harm, qualifying as an AI Hazard. Since the fatal accident is a realized harm, the overall classification prioritizes AI Incident. The article clearly links AI system use and malfunction/misuse to harm and plausible future harm.

Lessons From the NIO Accident: Autonomous Driving, Overhyped and Misunderstood

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's automatic driving assistance system (NOP), an AI system that integrates navigation, high-precision maps, and automated driving functions. The fatal accidents occurred while the AI system was engaged, indicating direct involvement of the AI system's use and limitations in causing harm (death). The article also references similar incidents with Tesla's Autopilot, reinforcing the pattern of AI system malfunction or misuse leading to injury or death. The discussion of regulatory measures further confirms the recognition of these harms. Hence, this is an AI Incident due to realized harm caused by AI system use and malfunction.

Breaking! US Formally Investigates Tesla, EV Stocks Plunge! Autonomous Driving Sparks Controversy as NIO and Li Auto Issue Urgent Statements

2021-08-17
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of Tesla's Autopilot and NIO's assisted driving features. The involvement of these AI systems in multiple collisions and a fatal accident indicates direct or indirect harm to persons and property. The ongoing investigations and regulatory scrutiny further confirm the significance of these harms. The presence of AI systems and their role in these incidents meet the criteria for an AI Incident, as the harms have materialized and are linked to the AI systems' use or malfunction.

"Arrogant" Li Bin: Does NIO Have No Rivals Left?

2021-08-16
华尔街见闻
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's autopilot (NOP) system, which is an AI system for autonomous driving. The fatal accidents involving NIO vehicles with autopilot activated demonstrate direct harm to human life caused by the AI system's use or malfunction. The harm is realized (deaths occurred), and the AI system's involvement is central to the incident. Hence, this is an AI Incident under the framework, as it involves injury or harm to persons directly linked to the AI system's use.

Fatal Highway Assisted Driving Accident! Media Test NIO's AEB: The Results Leave Much to Be Desired

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic driving assistance system with AEB functionality. The system's malfunction or inadequacy directly led to a fatal accident, fulfilling the criteria for an AI Incident due to injury and death caused by the AI system's use and failure. The article provides evidence of the AI system's role in the harm, and the harm has materialized, not just potential. Therefore, this is classified as an AI Incident.

Entrepreneur Dies in Crash With Assisted Driving Enabled; NIO Responds: NOP Must Never Be Equated With Autonomous Driving

2021-08-15
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as NIO's NOP assisted driving feature, which uses AI to control vehicle navigation and driving tasks on highways. The fatal accident occurred while this AI system was active, directly leading to the driver's death, fulfilling the criteria of harm to a person. The company's response acknowledges the AI system's involvement and clarifies its limitations, confirming the AI system's role in the incident. Therefore, this is an AI Incident as the AI system's use directly led to injury and death.

Over Half His Mileage on NIO's NOP! Lin Wenqin's July Driving Data Revealed; Friend: He Trusted It Deeply

2021-08-17
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NIO's NOP, an autonomous driving assistance system) whose use is directly linked to a fatal accident, causing harm to a person. The controversy over the system's capabilities and the company's marketing claims further highlight the AI system's role in the incident. The harm has materialized (death), and the AI system's involvement is central to the event. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Assisted Driving Is Not Autonomous Driving: Automakers Should Clearly Inform Consumers

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOP assisted driving) whose use directly led to a fatal traffic accident, causing harm to a person. The article explicitly discusses the AI system's capabilities and limitations, the misunderstanding by consumers about assisted versus autonomous driving, and the resulting fatality. This meets the criteria for an AI Incident because the AI system's use directly caused harm (death), and the incident raises important safety and regulatory concerns. The article does not merely discuss potential future harm or general AI developments but reports on a realized harm linked to AI system use.

Fatal NIO Accident! Aiways Founder: Even Tesla Falls Short of Autonomous Driving, and Li Xiang's Thinking Is Rudimentary

2021-08-18
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the assisted driving feature) whose malfunction or limitations have directly led to a fatal accident, causing harm to a person. The AI system's involvement is explicit (automatic driving assistance mode), and the harm (death) has occurred. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to injury or harm to a person.

Traffic Police Report on the NIO Autonomous Driving Incident: Liability Will Be Determined According to Law

2021-08-18
星岛环球网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, namely the autonomous driving system of the NIO electric vehicle. The accident caused direct harm to people (death and injury), fulfilling the criteria for an AI Incident. The police investigation and responsibility determination are part of the response but do not change the classification. Therefore, this is an AI Incident due to the direct harm caused by the AI system's use.

Employees Summoned for Privately Accessing the Accident Vehicle? NIO Officially Denies It as the Family and NIO Tell Conflicting Stories

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The NIO Pilot system is an AI-based advanced driver assistance system that was active during the fatal accident. The accident resulted in the death of the driver, which is a direct harm to a person. The article discusses the possibility that the AI system's failure to recognize the maintenance vehicle caused the crash, and the ongoing investigation involves data extraction from the AI system. This meets the criteria for an AI Incident because the AI system's use and potential malfunction have directly or indirectly led to harm (death).

WM Motor Founder Shen Hui on Autonomous Driving: At L2 the Driver Is Liable; At L4 and Above, Liability Rests With the Automaker

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous driving technologies (L2 and L4 levels). However, it primarily provides commentary on responsibility allocation and safety strategies rather than reporting a new incident or hazard. The referenced accident is background context, not the main focus, and no new harm or plausible future harm is described here. Therefore, this is Complementary Information as it provides context and expert opinion related to AI incidents but does not itself describe a new AI Incident or AI Hazard.

XPeng Salesperson Demonstrates "Autonomous Driving", Rear-Ends the Car Ahead, Leaving the Customer Stunned

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the ACC, an AI-assisted driving feature) whose use during a test drive directly led to a collision causing property damage and potential harm. The incident stems from the use and possible malfunction or misuse of the AI system. The harm is realized (collision and damage), meeting the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a concrete incident involving AI-related harm.
US media: US government launches formal investigation into Tesla's Autopilot

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot is an AI system involved in autonomous driving. The investigation and reported accidents demonstrate that the AI system's malfunction or failure to detect emergency vehicles has caused real harm, including injuries and a fatality. The misuse of the system by drivers further contributes to the harm. Therefore, this event qualifies as an AI Incident because the AI system's development, use, or malfunction has directly or indirectly led to harm to people.
Li Auto founder: terms like "autonomous driving" may mislead consumers

2021-08-18
和讯网
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of advanced driver-assistance and semi-autonomous driving technologies. It discusses a fatal accident involving such a system but does not present new evidence that the AI system malfunctioned or directly caused the harm; rather, it reports on the founder's call for clearer terminology to prevent consumer misunderstanding and references ongoing investigations and past incidents. The main focus is on the potential for misleading marketing and the need for standardized terms, which is a governance and societal response issue. Hence, it does not describe a new AI Incident or AI Hazard but provides complementary information enhancing understanding of AI system risks and regulatory responses.
Lawyer in the NIO assisted-driving crash case: technicians accessed the vehicle without authorization! NIO responds

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (NOP, an AI-assisted driving system) whose use during the accident is central. The harm (fatal injury) has occurred, and the AI system's limitations contributed to the accident. The unauthorized access and possible tampering with vehicle data relate to the investigation but do not negate the AI system's role in the incident. Therefore, this qualifies as an AI Incident due to direct harm caused during AI system use.
Fatal highway crash with NIO's automated driving assistance! An EC6 owner reveals his own terrifying close call

2021-08-17
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, namely NIO's Level 2 automated driving assistance (NOP navigation assist), which is an AI system designed to assist driving by perceiving and responding to road conditions. The fatal accident and near-miss incident are directly linked to the AI system's failure to detect hazards and respond appropriately, leading to injury and death (harm to persons). The AI system's malfunction and the risks of overreliance on it are central to the harm described. Therefore, this qualifies as an AI Incident under the definition of an event where AI system use or malfunction has directly or indirectly led to injury or harm to persons.
US opens investigation into Tesla's Autopilot: difficulty recognizing emergency vehicles parked at the roadside

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as Tesla's autonomous driving system. The investigation is due to the system's failure to correctly identify emergency vehicles, which has directly contributed to accidents causing injuries and a fatality. This meets the criteria for an AI Incident as the AI system's malfunction has directly led to harm to persons. The presence of real harm (injuries and death) linked to the AI system's use confirms this classification over AI Hazard or Complementary Information.
NIO once claimed it was safer than human driving! Lin Wenqin asked about autonomous driving repeatedly before buying the car

2021-08-18
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—an autonomous driving system in a NIO vehicle—that was active during a fatal accident. The harm (death of a person) directly resulted from the use of this AI system. The event meets the criteria for an AI Incident because the AI system's use directly led to harm to a person. The ongoing investigation and disputes about the vehicle's handling post-accident do not change the classification but provide context. Therefore, this is classified as an AI Incident.
NIO statement: the company has not deleted or altered any data, and no employee has been summoned by police

2021-08-17
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves a fatal traffic accident with a NIO ES8, an AI-equipped vehicle, leading to a person's death (harm to health). The company's data extraction and investigation relate to the AI system's role in the incident. The harm has occurred, and the AI system's involvement is central to understanding the incident. Hence, it meets the criteria for an AI Incident.
31-year-old entrepreneur dies in crash; NIO: NOP must not be equated with autonomous driving

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's NOP, an advanced driver assistance system with AI components) actively used during a fatal car crash. The harm (death of a person) has occurred and is directly linked to the use of the AI system in the vehicle. The system's limitations and the accident's circumstances suggest the AI system's role in the incident, fulfilling the criteria for an AI Incident. The article also discusses the system's capabilities and warnings, reinforcing that the AI system's use and potential malfunction or misuse contributed to the harm. Therefore, this is not merely a hazard or complementary information but a clear AI Incident.
Tesla and NIO share prices have fallen only slightly so far

2021-08-17
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in autonomous driving (Tesla Autopilot, NIO's NOP system) that have directly caused multiple fatal accidents, constituting injury and death to persons. The involvement of AI in these accidents is clear and central. The ongoing investigations and accusations of data tampering further emphasize the AI systems' role in the incident. The harms are realized, not hypothetical, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but reports actual harm caused by AI system failures and misuse.
Neta Auto shares Zhou Hongyi's view on "autonomous driving": industry jargon should become simple concepts

2021-08-17
和讯网
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous driving technologies, but it does not describe an incident where AI use or malfunction directly or indirectly caused harm. It also does not present a credible risk of future harm from AI systems in this context. The main content is about industry and media responses to existing concerns and the need for better terminology to prevent misunderstanding. Therefore, it fits the category of Complementary Information, as it provides context and governance-related discussion rather than reporting a new AI Incident or Hazard.
Li Auto founder Li Xiang calls on media and institutions to standardize the Chinese name for autonomous driving

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (L2-level autonomous driving assistance) being used in a vehicle that was involved in a fatal accident, resulting in the death of a person. This meets the criteria for an AI Incident as the AI system's use directly led to harm to a person. The call for standardized terminology is complementary but the core event is the fatal accident involving AI-assisted driving, which is a realized harm.
What exactly is misleading consumers?

2021-08-17
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an assisted driving system, a form of AI used for vehicle navigation and control assistance. The accident caused serious harm (personal injury and death) directly related to the AI system's use and to consumers' misunderstanding of its capabilities. The article points out that the system's limitations were not clearly communicated, leading to overreliance or misuse, an indirect cause of the harm. This therefore qualifies as an AI Incident, because the AI system's use and the associated miscommunication directly contributed to the harm.
31-year-old entrepreneur dies in a crash while driving a NIO; are EV makers' overblown claims blurring "assisted driving" and "autonomous driving"?

2021-08-15
和讯网
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (the NIO Pilot driving assistance system) that was active at the time of a fatal accident, causing direct harm (death and property damage). The system is an AI-based driver assistance tool, not full autonomous driving, but the confusion in marketing and user understanding likely contributed to misuse or overreliance, which is an indirect cause of the harm. This fits the definition of an AI Incident because the AI system's use directly or indirectly led to injury and harm to a person. The article also discusses the broader issue of misleading AI system marketing, but the primary focus is on the fatal incident linked to the AI system's use.
Well-known entrepreneur dies in a crash while driving a NIO ES8; the latest official statement is out

2021-08-17
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving technology) whose use directly led to a fatal car accident, causing harm to a person. The involvement of the AI system is explicit and central to the incident. The harm is realized (death), meeting the criteria for an AI Incident. The investigation into data tampering is related but does not change the classification. Therefore, this event is classified as an AI Incident.
Family alleges data tampering! NIO sent technicians to access the crashed car without authorization, possibly a criminal offense

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident where the NIO vehicle's AI-based autonomous driving system failed to detect an obstacle and did not brake automatically, resulting in the driver's death. The AI system was explicitly in use and its malfunction directly caused harm. Additionally, the manufacturer's alleged unauthorized tampering with vehicle data raises concerns about evidence integrity but does not negate the AI system's role in the incident. The harm is realized and directly linked to the AI system's use and malfunction, meeting the criteria for an AI Incident.
NIO technicians summoned for accessing fatal-crash data without authorization

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The NIO ES8's Navigation on Pilot (NOP) is an AI system providing autonomous driving capabilities. The fatal accident occurred while the AI system was active, directly causing harm (death). The unauthorized access by technical staff to the vehicle's accident data, potentially involving tampering, is related to the AI system's use and investigation. This meets the criteria for an AI Incident as the AI system's use directly led to injury or harm to a person, and the event involves the AI system's malfunction or misuse. The ongoing police investigation and potential legal consequences further underscore the seriousness of the incident.
Meiyihao founder Lin Wenqin dies in a crash while driving a NIO ES8; he had founded several brands

2021-08-15
和讯网
Why's our monitor labelling this an incident or hazard?
The NIO ES8's autopilot feature is an AI system that controls driving functions. The accident occurred while this AI system was active, leading to a fatality, which constitutes injury or harm to a person. Although the exact cause is still under investigation, the AI system's involvement in the accident and the resulting death qualifies this as an AI Incident due to direct harm caused during AI system use.
In big trouble again! US launches investigation into Tesla's Autopilot

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
Tesla's Autopilot is an AI system used for autonomous driving. The reported accidents involving injuries and a fatality linked to the system's failure to detect emergency vehicles demonstrate direct harm caused by the AI system's malfunction or limitations. The investigation by a government safety agency confirms the seriousness and reality of these harms. Hence, this event meets the criteria for an AI Incident due to the AI system's involvement in causing injury and death.
Lawyer in the NIO assisted-driving crash case says technicians accessed the vehicle without authorization; NIO: looking into it internally

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event describes a fatal car accident where an AI-assisted driving system (NOP) was active. The AI system's role is central as it was controlling the vehicle on the highway, and the accident involved failure to respond to a static obstacle, which the system's manual states it cannot handle, requiring driver intervention. The death of the driver constitutes harm to a person. Additionally, the potential unauthorized access and possible tampering with vehicle data by the AI system's technical personnel could impact the investigation and liability, indicating issues related to the AI system's use and post-incident handling. This meets the criteria for an AI Incident because the AI system's use directly led to harm, and the event involves investigation of possible misconduct related to the AI system's data.
Heartbreaking! NIO says no employee has been summoned by police; 31-year-old entrepreneur Lin Wenqin tragically killed

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (NIO's autonomous driving feature) during the accident that caused a fatality. This constitutes direct harm to a person due to the AI system's involvement. The company's cooperation with authorities and data extraction efforts are part of the investigation but do not negate the fact that the AI system's use led to a fatal incident. Therefore, this qualifies as an AI Incident.
Lawyer representing the Lin crash case: police summoned NIO personnel for accessing the crashed vehicle without authorization

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving an AI system (the vehicle's autopilot/autonomous driving feature). The AI system's use is directly linked to the harm (death). The unauthorized access and possible tampering with vehicle data by the AI system's manufacturer personnel is part of the investigation but does not negate the AI system's involvement in the incident. Therefore, this qualifies as an AI Incident due to direct harm to a person caused by the AI system's use and the ongoing investigation into potential data tampering related to the AI system.
WM Motor founder gives his views on "how to see assisted driving becoming widespread while accidents keep occurring"

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article discusses AI systems related to assisted and autonomous driving (L2 and L4 levels), which are AI systems by definition. However, it does not report any specific incident or accident caused by these AI systems, nor does it describe any realized harm or a near-miss event. Instead, it provides opinions and explanations about responsibility and safety considerations, which is complementary information enhancing understanding of AI deployment and risks but does not itself describe an AI Incident or AI Hazard.
Technicians summoned for accessing the vehicle without authorization? NIO's latest response: false! The company has not deleted or altered any data

2021-08-16
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of an autonomous or semi-autonomous vehicle with data recording capabilities. However, the article does not report any realized harm caused by AI malfunction or misuse, nor does it describe a plausible future harm stemming from AI system use. The company's denial of data tampering and cooperation with authorities suggests no incident has been confirmed. The article mainly provides information on the investigation and company statements, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
NIO owner may have died because of NOP; how should "autonomous driving"…

2021-08-16
newcar.xcar.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOP assisted driving system, which is an AI-based navigation and driving assistance technology. The system's failure to detect a stationary vehicle and prevent a collision directly led to a fatal accident, causing harm to a person. This meets the definition of an AI Incident, as the AI system's malfunction or limitations directly caused injury or harm to a person. The article also discusses regulatory responses and safety considerations, but the primary focus is on the incident itself and its consequences.
Morning report | Apple may hold three fall launch events / NIO responds to entrepreneur owner's death in crash / Zhang Zhehan's social media accounts banned

2021-08-16
爱范儿
Why's our monitor labelling this an incident or hazard?
The NIO ES8's Navigate on Pilot feature is an AI-based driver assistance system that infers from sensor inputs to control vehicle navigation. The fatal accident occurred while this system was engaged, directly linking the AI system's use to harm (death). The article explicitly mentions the AI system's involvement and the resulting fatality, fulfilling the criteria for an AI Incident. Other parts of the article do not describe AI-related harm or hazards. Hence, the classification is AI Incident.
NIO: no data deleted or altered / no employee summoned

2021-08-16
太平洋汽车网
Why's our monitor labelling this an incident or hazard?
The article describes an ongoing investigation into a traffic accident involving a NIO vehicle, where data from the vehicle (likely including AI system data such as battery status or autonomous driving logs) is being examined. While there are allegations of unauthorized data manipulation, the company denies these claims. There is no confirmed incident of AI system malfunction or misuse causing harm, nor is there evidence that the AI system's involvement directly or indirectly led to injury or other harms beyond the accident itself. The event is primarily about the investigation process and clarifications, not about a new AI-related harm or a plausible future harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides updates and clarifications related to a prior incident involving AI systems.
Morning report | Several car-brand founders speak out on "autonomous driving" / Android 12 will let a smile control the phone / Bezos's Blue Origin sues NASA

2021-08-17
爱范儿
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based assisted or autonomous driving systems (e.g., NOP automatic navigation assistance, Tesla Autopilot) that have been involved in accidents causing injuries and death, which qualifies as AI Incidents under the framework. The NHTSA investigation covers many vehicles and confirms harm (17 injuries, 1 death). The fatal accident involving the NIO ES8 with the navigation assistance system also indicates realized harm. The discussions by founders about terminology and risks provide context but do not themselves constitute incidents. The Android 12 facial gesture control is an AI system feature but no harm or plausible harm is described, so it is unrelated or complementary at best. Other news items are unrelated to AI incidents or hazards. Hence, the classification is AI Incident due to the realized harms from autonomous driving AI systems.
[Tech news] Automakers must not exaggerate "assisted driving" into "autonomous driving"

2021-08-16
mitbbs.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of assisted driving technologies (Level 2), which are not fully autonomous but use AI to assist driving. The article does not report a confirmed incident causing harm but emphasizes the plausible risk of harm due to overreliance and misleading marketing, which could lead to accidents or injuries. This fits the definition of an AI Hazard, as the development, use, or malfunction of these AI systems could plausibly lead to harm. The article also discusses regulatory and societal responses, but the main focus is on the potential risk rather than an actual incident or complementary information about past incidents.
[Tech news] Do driver-assistance systems have a major "recognition blind spot"? 765,000 Teslas under investigation

2021-08-17
mitbbs.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as Tesla's Autopilot, an advanced driver-assistance system that uses AI for perception and control. The investigation is triggered by multiple serious accidents, including injuries and a death, where the AI system was active and implicated. The harm is direct and materialized, fulfilling the criteria for an AI Incident. The article details the system's limitations and failures in recognizing certain obstacles, which have led to crashes. This is not merely a potential risk but an ongoing safety issue with documented harm. Hence, the classification as AI Incident is appropriate.
[Tech news] From Tesla to NIO: how to escape the autonomous-driving "death curse"?

2021-08-17
mitbbs.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (advanced driver-assistance systems with AI perception and decision-making capabilities) whose use has directly led to multiple serious accidents and at least one death. The harm is realized and directly linked to the AI system's failure to detect static obstacles and the resulting collisions. The article also discusses the AI systems' limitations, user warnings, and regulatory responses, but the primary focus is on the incidents and harms caused. Hence, this is an AI Incident rather than a hazard or complementary information.
With "autonomous driving," do we no longer need to watch the road?

2021-08-15
太平洋汽车网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NIO ES8's driver assistance/autonomous driving system) whose malfunction or limitations directly led to a fatal traffic accident, causing harm to a person. The AI system's failure to detect and respond appropriately to a highway maintenance vehicle is a direct causal factor in the incident. This fits the definition of an AI Incident, as the AI system's use and malfunction have directly led to injury and death. The article also discusses regulatory and responsibility issues but the core event is a realized harm caused by AI system failure.
31-year-old well-known entrepreneur dies in an accident while driving a NIO; the vehicle reportedly had NOP engaged at the time

2021-08-15
太平洋汽车网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's Navigate on Pilot, an advanced driver assistance system with AI capabilities) whose use directly led to a fatal traffic accident, causing harm to a person. The AI system was active at the time of the crash, and the harm (death) is realized and significant. This fits the definition of an AI Incident because the AI system's use directly led to injury or harm to a person. Although the investigation is ongoing, the information provided is sufficient to classify this as an AI Incident rather than a hazard or complementary information.
NIO staff summoned for accessing the crashed vehicle without authorization

2021-08-16
太平洋汽车网
Why's our monitor labelling this an incident or hazard?
The vehicle involved is a NIO ES8, an electric SUV known to have advanced driver assistance and autonomous features, which are AI systems. The accident description indicates the AI system did not perform expected safety functions (automatic deceleration or avoidance), directly leading to the fatal crash. The unauthorized interference by NIO technical staff with the accident vehicle's data or systems could impact the investigation but does not negate the AI system's role in the incident. The harm (death) has occurred, and the AI system's malfunction or failure to act is a contributing factor, meeting the criteria for an AI Incident.
31-year-old entrepreneur dies in crash while driving a NIO with autonomous driving engaged; police bulletin: investigation underway

2021-08-18
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (NIO's autonomous driving system) whose use directly led to a fatal crash and injury, fulfilling the criteria for an AI Incident due to harm to persons. The police investigation indicates the event is being formally examined, but the harm has already occurred. Therefore, this is classified as an AI Incident.
Real-world tests of 17 mainstream cars with automated driver-assistance systems all found safety…

2021-08-15
成都全搜索
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as automatic driving assistance systems with functionalities such as lane keeping and adaptive cruise control, which are AI systems by definition. The fatal accident directly resulted from the use of such a system, indicating harm to a person caused by the AI system's malfunction or misuse. The subsequent tests confirm systemic vulnerabilities across multiple brands, showing that these AI systems can be tricked into unsafe states, which is a direct safety hazard. The article also discusses the misleading marketing that causes overreliance on these systems, contributing indirectly to harm. Given the death caused and the systemic safety issues demonstrated, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Can the "bull-market standard-bearer" produce a trillion-yuan leader?

2021-08-15
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the automatic driving feature (NOP pilot) of the NIO ES8 vehicle. The use of this AI system directly led to a fatal traffic accident, causing injury and death, which fits the definition of an AI Incident. The article also references the broader context of autonomous driving technology's safety challenges, reinforcing the AI system's role in the harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.
31-year-old entrepreneur dies in a crash after activating the Navigate on Pilot feature; NIO's response is out…

2021-08-15
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOP assisted driving feature, which is an AI-based system integrating navigation and automated driving functions. The use of this AI system directly led to a fatal traffic accident causing harm to a person, fulfilling the criteria for an AI Incident. The harm is realized (death of the driver), and the AI system's involvement is central to the event. Although the investigation is ongoing, the direct link between the AI system's use and the fatal crash is clear. Therefore, this event is classified as an AI Incident.
Entrepreneur "Mengjianke" dies in a crash; NIO responds that an investigation is underway

2021-08-15
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The incident involves the use of an AI system (NOP, an AI-based driver assistance system) during driving, which directly led to a fatal accident causing harm to a person. The AI system's malfunction or limitations in its operation contributed to the harm. This fits the definition of an AI Incident as the AI system's use directly led to injury or death. The company's investigation confirms the AI system's involvement, reinforcing the classification as an AI Incident.
Experts say assisted driving is not autonomous driving; overhyped marketing easily creates false perceptions

2021-08-18
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—an advanced driver assistance system (NOP)—which was active during a fatal car accident. The system's use and the resulting harm (death of a driver) meet the criteria for an AI Incident, as the AI system's malfunction or misuse directly contributed to injury or harm to a person. The article also discusses the broader implications of overpromising AI capabilities and the need for better safety measures, but the core event is a realized harm linked to AI system use.
NBD 14:00 | Latest position moves of the "national team" funds revealed

2021-08-16
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's assisted driving system) that was active during a fatal accident, causing harm to a person. The involvement of the AI system in the accident and the subsequent unauthorized access to vehicle data by technical personnel are directly related to the harm. This meets the criteria for an AI Incident as the AI system's use has directly led to injury or harm to a person.
31-year-old entrepreneur dies in a crash; NIO's response is out…

2021-08-15
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's NOP assisted driving system) actively used during a traffic accident that caused a fatality, which is a direct harm to a person. The AI system's involvement is explicit and central to the incident. The harm (death) has already occurred, and the AI system's role is pivotal in the chain of events leading to the harm. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Lawyer in the NIO assisted-driving crash case says technicians accessed the vehicle without authorization; NIO responds it is looking into it internally

2021-08-16
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event describes a fatal car accident involving a vehicle equipped with an AI-assisted driving system (NOP). The AI system was active at the time of the accident, indicating its direct involvement. The subsequent unauthorized access and possible tampering with vehicle data by the manufacturer's technical personnel further implicate the AI system's role in the incident and raise legal and ethical concerns. These factors meet the criteria for an AI Incident, as the AI system's use and potential malfunction or misuse have directly led to harm (death) and legal investigation.
Well-known entrepreneur "Mengjianke" dies in a crash; the vehicle was a NIO ES8

2021-08-14
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI system providing assisted driving capabilities. The driver was using this AI system at the time of the fatal crash, which directly caused harm (death). Although the NIO brand clarifies that NOP is not full autonomous driving, it is still an AI system involved in vehicle operation. The fatality confirms realized harm. Therefore, this event meets the criteria for an AI Incident due to the AI system's involvement in causing injury and death.
NIO crash update: ES8 owner killed in accident; NIO denies deleting or altering data

2021-08-18
财经网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system: the NIO Pilot automated driving assistance system with NOP functionality. The system was active during the accident, which caused the death of the driver, fulfilling the harm criterion of injury or harm to the health of a person. The controversy over data handling and the ongoing investigation further underscore the incident's significance. This is a realized harm caused by the AI system's use, not merely a potential risk, so it is classified as an AI Incident rather than a hazard or complementary information.
Li Auto founder Li Xiang calls on media and institutions to standardize autonomous-driving terminology

2021-08-16
财经网
Why's our monitor labelling this an incident or hazard?
The assisted driving feature (NOP) is an AI system providing automated driving assistance. The fatal accident occurred while the system was engaged, indicating the AI system's involvement in the incident. The harm (death) has occurred, fulfilling the criteria for an AI Incident. Although the article also calls for clearer terminology to prevent misunderstanding, its core subject is the fatal accident linked to AI system use, making it an AI Incident rather than merely complementary information or a hazard.
WM Motor founder Shen Hui on "how to see assisted driving becoming widespread while accidents…"

2021-08-16
财经网
Why's our monitor labelling this an incident or hazard?
The assisted driving system (NOP) is an AI system providing L2-level assistance, where the driver remains responsible but the system influences vehicle control. The fatal accident occurred while the system was active, directly causing harm (death). This fits the definition of an AI Incident as the AI system's use directly led to injury or harm to a person. The discussion about responsibility and system levels supports the context but does not negate the incident classification. Therefore, this event is classified as an AI Incident.
Traffic police respond to NIO's "autonomous-driving gate": the crash occurred on a flat stretch of road

2021-08-18
财经网
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving a vehicle operating with an AI-based autonomous driving system (NOP navigation mode). The AI system's use directly led to harm (death of the driver), meeting the definition of an AI Incident. The involvement of the AI system is explicit, and the harm is realized, not just potential. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Zhou Hongyi: L2 through L5 are all self-amusement; suggests renaming L3 "advanced…

2021-08-17
财经网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOP advanced driver assistance) actively used by the driver at the time of a fatal accident, which directly led to harm (death). The AI system's malfunction or limitations in this context contributed to the incident. The article explicitly links the AI system's use to the accident and the resulting fatality, fulfilling the criteria for an AI Incident. The discussion about naming conventions and marketing claims around autonomous driving levels is complementary context but does not negate the incident classification. Therefore, this is an AI Incident due to realized harm caused by the AI system's use.

Putian traffic police issue report on the NIO autonomous driving incident

2021-08-18
财经网
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of an autonomous driving system (AI system) in the NIO vehicle involved in a fatal crash. The AI system's use is directly linked to the incident, which caused injury and death, fulfilling the criteria for an AI Incident. The investigation and responsibility determination indicate the AI system's role in the harm caused. Therefore, this qualifies as an AI Incident due to the direct harm resulting from the use of an AI system in autonomous driving.

NIO statement: the company has not deleted or altered any data

2021-08-16
财经网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system insofar as the vehicle likely contains AI components (e.g., battery management, data logging) relevant to the accident investigation. However, the article does not confirm any realized harm caused by AI malfunction or misuse, nor confirmed data tampering. The potential for harm or legal consequences exists if data tampering is proven, but this remains under investigation. Therefore, this is best classified as Complementary Information providing updates on an ongoing investigation related to an AI system, rather than an AI Incident or AI Hazard.

In depth | Why has NIO's "autonomous driving" crash drawn so much attention?

2021-08-17
财经网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO Pilot with NOP feature) used in driving assistance, which directly contributed to a fatal car accident causing the death of the driver. The system's malfunction or limitations in perception and decision-making are discussed as possible causes. The harm (death) is realized and directly linked to the AI system's use. The article also references regulatory responses emphasizing safety in AI-driven automotive systems. Hence, this is an AI Incident as per the definitions, since the AI system's use directly led to injury/harm to a person.

NIO: Navigate on Pilot (NOP) is navigation assistance, not autonomous driving

2021-08-15
财经网
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI system providing driver assistance (partial automation). The accident occurred while the system was engaged, leading to a fatality, which is harm to a person. This meets the criteria for an AI Incident because the AI system's use directly led to injury or death. Although the company states NOP is not full autonomous driving, it is still an AI system whose malfunction or misuse can cause harm. Hence, this event is classified as an AI Incident.

Driving data from NIO owner's "autonomous driving" crash released: only one sudden acceleration during the entire trip

2021-08-16
财经网
Why's our monitor labelling this an incident or hazard?
The NIO Pilot system is an AI-based driving assistance system that was active during the trip. The accident caused the death of the driver, which is a direct harm to a person. Although the company states that NOP is not full autonomous driving, it is an AI system providing automated assistance. Therefore, this qualifies as an AI Incident because the use of the AI system directly led to injury or harm to a person.

Li Xiang comments on NIO assisted-driving crash; NIO's former PR director fires back

2021-08-17
财经网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a fatal traffic accident involving a vehicle operating in an automatic driving assistance mode, which is an AI system. The death of the driver is a direct injury caused by the use of this AI system, fulfilling the criteria for an AI Incident. The discussion about naming conventions for autonomous driving levels is complementary but does not overshadow the primary incident. The unrelated allegations about mercury in seats do not involve AI systems or AI-related harm. Therefore, the event is classified as an AI Incident due to the fatal accident linked to the AI driving assistance system.

In depth | Behind NIO owner Lin Wenqin's crash: misleading marketing and defects in L2 autonomous driving technology

2021-08-18
财经网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the L2-level assisted driving system (NIO Pilot) that was active during the fatal crash. The system's malfunction in perceiving a stationary vehicle on the highway directly contributed to the collision and death, fulfilling the criteria for harm to a person. Additionally, the article discusses the overtrust and misleading marketing of the AI system's capabilities, which indirectly contributed to the harm by encouraging unsafe reliance on the technology. Therefore, this is an AI Incident due to the realized harm caused by the AI system's malfunction and its role in the accident.

Police summon NIO technicians over unauthorized access to and operation of the vehicle involved?

2021-08-16
财经网
Why's our monitor labelling this an incident or hazard?
The event describes a fatal car accident where the NIO Pilot automatic driving assistance system was active, indicating AI system involvement. The death of the driver constitutes harm to a person. The unauthorized access and possible tampering with vehicle data by NIO technical staff is related to the AI system's use and investigation, further linking the AI system to the incident. Given the realized harm (death) and the AI system's role, this meets the criteria for an AI Incident rather than a hazard or complementary information.

"Assisted": sellers should explain it clearly, buyers should understand it

2021-08-17
浙江在线
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (automatic driving assistance) whose use has directly led to fatal accidents, constituting injury and harm to persons. The article explicitly links the accidents to the activation and reliance on these AI-assisted driving features, highlighting the risks of overreliance and possible misleading communication by manufacturers. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use and malfunction or misuse.

Safety questioned: US government formally investigates Tesla! How much longer until reliable "assisted driving"?

2021-08-17
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Tesla's Autopilot and NIO's assisted driving systems) whose use has resulted in multiple accidents causing injuries and at least one death. The U.S. National Highway Traffic Safety Administration (NHTSA) has launched a formal investigation covering a large number of vehicles, indicating the seriousness and direct link between the AI system's operation and harm. The article details actual harm (injuries and death) caused by the AI systems' malfunction or failure to prevent collisions, fulfilling the criteria for an AI Incident. The discussion about future expectations and public concerns supports the context but does not change the classification from Incident to Hazard or Complementary Information.

31-year-old entrepreneur dies in NIO crash: would you still dare use "assisted driving"?

2021-08-16
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (assisted driving/autonomous driving features) that directly led to a fatal injury, fulfilling the criteria for an AI Incident. The assisted driving system's malfunction or limitations (e.g., inability to respond to static obstacles) and the driver's reliance on it contributed to the fatal crash. The article explicitly connects the AI system's use to the harm (death), and discusses the broader implications and safety concerns of such AI systems in vehicles, confirming the classification as an AI Incident rather than a hazard or complementary information.

31-year-old entrepreneur dies in NIO crash on "autonomous driving": how far off is safe autonomous driving?

2021-08-15
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (NIO's Navigate on Pilot assisted driving system) whose operation directly led to a fatal traffic accident, causing injury and death. The system's malfunction or limitations in handling complex driving scenarios contributed to the harm. The article explicitly links the AI system's use to the incident and discusses the implications for safety and regulation. Therefore, this is an AI Incident as the AI system's use directly caused harm to a person.

US authorities to investigate the safety of Tesla's autonomous driving system

2021-08-17
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
Tesla's autonomous driving system is an AI system involved in controlling vehicle navigation and operation. The reported accidents, including collisions with stationary vehicles resulting in injuries and a death, demonstrate direct harm caused by the AI system's use or malfunction. Therefore, this event qualifies as an AI Incident due to the realized harm linked to the AI system's operation.

NIO ES8 involved in fatal crash

2021-08-16
多维新闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NIO Pilot automated driving assistance system, which uses AI for perception and decision-making to assist driving. The fatal accident occurred while the AI system was active, and its failure to detect and respond appropriately to a highway maintenance vehicle directly led to the driver's death. This constitutes direct harm caused by the AI system's malfunction or limitations. The incident fits the definition of an AI Incident because it involves injury to a person resulting from the use of an AI system. The controversy over the system's marketing and user misunderstanding further supports the classification as an incident rather than a hazard or complementary information.

Entrepreneur dies in crash; lawyer says NIO accessed the vehicle involved without authorization

2021-08-16
早报
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (the NOP assisted driving system) whose use is linked to a fatal accident, fulfilling the criteria for an AI Incident due to harm to a person. The system's inability to detect static obstacles and the requirement for driver vigilance are noted, indicating a limitation or malfunction in the AI system's operation. Additionally, the alleged unauthorized tampering with vehicle data by the company's technicians could impact the investigation and accountability, further underscoring the AI system's role in the incident. Therefore, this is not merely a hazard or complementary information but a realized incident involving AI.

Putian, Fujian traffic police issue report on the NIO autonomous driving accident

2021-08-18
早报
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly referenced as the assisted driving system in the NIO vehicle, which is plausibly linked to the cause of a fatal accident. The harm (death and injury) has occurred, fulfilling the criteria for an AI Incident. The investigation and regulatory context further support the AI system's central role in the incident. Hence, it is not merely a hazard or complementary information but a confirmed incident involving AI-related harm.

31-year-old entrepreneur dies driving a NIO; driving data disclosed

2021-08-16
杭州网
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically the NIO Pilot and NOP autonomous driving assistance features, which are AI-based systems providing Level 2 and Level 3 driving assistance. The fatal accident occurred while these AI systems were in use, indicating their involvement in the incident. The harm is direct and severe (death of the driver). Although the exact cause is under investigation, the AI system's use is a contributing factor to the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to a person.

31-year-old entrepreneur dies in crash; NIO responds: Navigate on Pilot is not autonomous driving

2021-08-15
华商网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Navigate on Pilot (NOP), which is an AI-based driver assistance system that controls vehicle navigation and cruise functions. The fatal accident occurred while this system was active, indicating the AI system's involvement in the harm. The harm is direct (death of a person) and linked to the AI system's use. The company's clarification that NOP is not full autonomous driving does not negate the AI system's role in the incident. Hence, this is an AI Incident as per the definitions provided.

31-year-old entrepreneur dies driving a NIO; driving data disclosed

2021-08-16
华商网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: the NIO ES8's automated driving functions (NOP and NIO Pilot), which are AI-based driving assistance systems with Level 2 and Level 3 autonomy. The fatal accident occurred while these AI systems were active, directly linking the AI system's use to the harm (death of the driver). The article discusses the AI system's role, the data from the system, and the ongoing investigation into the accident cause. This fits the definition of an AI Incident because the AI system's use has directly led to injury or harm to a person.

Automakers must not oversell "assisted driving" as "autonomous driving" | Beijing News editorial

2021-08-15
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an assisted driving AI system (NOP), which was active at the time of a fatal car accident. The harm (death of a person) has occurred and is directly linked to the use of this AI system. The article emphasizes the misuse or misunderstanding of the AI system's capabilities, which contributed to the harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to injury or harm to a person. The discussion of regulatory responses and standards is complementary but secondary to the primary incident of harm.

Putian traffic police report on "young CEO dies driving a NIO"

2021-08-18
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the NIO ES8's NOP feature is an AI-based assisted driving system. The use of this AI system directly relates to the fatal accident, which caused harm to a person (death). Although the AI system is described as an assistance feature rather than full autonomy, its use is a contributing factor in the incident. Therefore, this qualifies as an AI Incident due to the direct harm caused linked to the AI system's use.

Was autonomous driving the cause? Can the crashed vehicle's cloud data be provided soon? Who bears responsibility? Are automakers' marketing claims blurring concepts? [Full text]

2021-08-16
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOP, an AI-enabled advanced driver-assistance system) whose use directly led to a fatal traffic accident, causing harm to a person. The article discusses the malfunction or failure of the AI system to prevent the collision, the ambiguity in marketing the system as 'automatic driving' versus 'assisted driving,' and the unresolved responsibility for the accident. These factors meet the criteria for an AI Incident because the AI system's use is directly connected to the harm (death) and the incident is under investigation with significant societal and legal implications.

Another NIO crash! On the evening of August 14, a public account verified as "Meiyihao" published an obituary stating that Mr. Lin Wenqin, founder of the Meiyihao brand management company, activated the auto… while driving a NIO ES8 at 2 p.m. on August 12, 2021 [Full text]

2021-08-16
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The NIO ES8's 'NOP' mode is an AI-enabled driving assistance system that controls vehicle navigation and driving tasks. The accident occurred while this AI system was active, leading to a fatal injury. This constitutes an AI Incident because the AI system's use directly led to harm (death) due to a traffic collision. The article focuses on the incident and the controversy around the AI system's role, meeting the criteria for an AI Incident.

The Auto "Finance" Way: news roundup [August 16]

2021-08-16
金融界网
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear: the Navigate on Pilot (NOP) driver assistance system is an AI-enabled system providing automated driving assistance. The fatal accident involving a driver using this system directly caused harm (death). The article explicitly links the accident to the use of this AI system and discusses the risks of overreliance on such technology. The company's response and expert commentary further confirm the AI system's role in the incident. Hence, this is an AI Incident due to direct harm caused by the use of an AI system in an automotive context.

WM Motor founder Shen Hui shares views on "how to view the popularization of assisted driving despite frequent accidents"

2021-08-16
金融界网
Why's our monitor labelling this an incident or hazard?
The event explicitly discusses AI systems in the form of L2 and L4 autonomous driving technologies, which are AI systems by definition. The discussion centers on the use and safety of these AI systems, including the potential for accidents (harm) associated with their deployment. However, the event is a commentary or opinion on the state and safety of AI-assisted driving rather than reporting a specific incident or harm caused by these systems. There is no direct or indirect harm reported, nor a specific hazard event described. Therefore, this is best classified as Complementary Information, providing context and insight into AI system use and safety considerations without reporting a new incident or hazard.

"Autonomous driving" to blame again? After the NIO crash, Tesla faces US regulatory investigation! Shares plunge more than 4%

2021-08-17
金融界网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of Tesla's Autopilot and NIO's NOP driver-assistance systems. These systems are AI-based, providing semi-autonomous driving capabilities. The reported crashes, injuries, and death are directly linked to the use or misuse of these AI systems, fulfilling the criteria for harm to persons. The ongoing official investigations by NHTSA and public scrutiny further confirm the significance of these harms. The article does not merely discuss potential risks but reports actual incidents with realized harm, making this an AI Incident rather than a hazard or complementary information.

Traffic police report on the NIO autonomous driving incident! Tesla under US regulatory investigation over autonomous driving: autonomous driving's darkest hour?

2021-08-18
金融界网
Why's our monitor labelling this an incident or hazard?
The NIO accident involved the use of an AI system (automatic driving feature) that directly caused a fatal crash, meeting the criteria for an AI Incident due to harm to persons. The Tesla investigation involves confirmed crashes linked to its AI driving assistance system causing injuries and death, also an AI Incident. The article's main content centers on these realized harms caused by AI systems in autonomous driving, not just potential risks or general discussion. Therefore, the event is classified as an AI Incident.

31-year-old entrepreneur dies in traffic accident suspected to involve "autonomous driving"! NIO: investigating

2021-08-15
金融界网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the automatic driving feature of the NIO ES8 vehicle) that was active during a fatal traffic accident, leading to the death of a person. This meets the definition of an AI Incident because the AI system's use directly led to harm (death). The article explicitly mentions the use of the AI system and the resulting fatality, which is a clear injury to a person. Although the investigation is ongoing, the connection between the AI system's use and the harm is sufficiently established for classification as an AI Incident. The event is not merely a potential hazard or complementary information, as harm has already occurred.

NIO says it did not tamper with data and no employees were summoned by police; Li Xiang and Shen Hui weigh in on autonomous driving

2021-08-16
金融界网
Why's our monitor labelling this an incident or hazard?
The NIO ES8's Navigate On Pilot is an AI-based driver assistance system that automates certain driving tasks. The fatal accident occurred while this system was engaged, directly linking the AI system's use to the harm (death) of a person. The article details the investigation into the accident, including data extraction from the vehicle's AI system and allegations of data tampering, which NIO denies. The harm is realized and directly connected to the AI system's use, fulfilling the criteria for an AI Incident. The discussion about the levels of autonomous driving and public statements further contextualize the incident but do not change the classification.

Breaking: US government launches formal investigation into Tesla's autonomous driving system

2021-08-16
金融界网
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot system is an AI system designed to assist driving. The reported crashes involving Autopilot hitting stationary emergency vehicles demonstrate that the AI system's malfunction or failure to correctly identify hazards has directly caused harm, including fatalities. The U.S. government's investigation confirms the significance and reality of these harms. Hence, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Another milestone! Qianfang Technology leads an industry-academia-research effort to build the Beijing Engineering Research Center for Vehicle-Road Collaborative Autonomous Driving (in preparation)

2021-08-18
金融界网
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems for autonomous driving and vehicle-road collaboration, which are explicitly mentioned. However, the article focuses on the formation of a research center and future research, development, and deployment activities rather than describing any realized harm or incidents caused by AI systems. There is no mention of any injury, disruption, rights violation, or other harm caused by AI systems. The article is about ongoing and planned research and infrastructure development, which could plausibly lead to future AI incidents but does not describe any current harm or hazard event. Therefore, it is best classified as Complementary Information, providing context and updates on AI ecosystem developments and governance efforts related to autonomous driving in Beijing.

IHS Markit: outlook for China's autonomous driving and future mobility markets

2021-08-18
金融界网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in the form of autonomous driving technologies and their market development, which fits the definition of AI systems. However, it does not report any realized harm, violation, or disruption caused by these AI systems, nor does it indicate a credible imminent risk of such harm. The content is forward-looking and descriptive of industry and policy developments, which aligns with the definition of Complementary Information. It enhances understanding of the AI ecosystem without reporting an incident or hazard.

Two accidents in half a month! 31-year-old entrepreneur dies driving a NIO! Is "autonomous driving" the biggest culprit?

2021-08-16
金融界网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOP, an advanced driver-assistance system with AI capabilities) whose use directly led to a fatal accident, fulfilling the criteria for an AI Incident. The harm is realized (death of the driver), and the AI system's malfunction or limitations contributed to the incident. The article also discusses the broader context of AI-assisted driving safety and regulatory responses, but the primary focus is the fatal accident caused by the AI system's use. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Autonomous driving's anxiety and involution: marketing with "lives"?

2021-08-18
网易车讯
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an automated driving assistance system (NOP), which was active at the time of a fatal crash. The system's failure to detect a static obstacle (a stationary engineering vehicle) directly led to the death of the driver. This is a clear case of harm to a person caused by the malfunction or limitations of an AI system. The article also discusses the misuse or overreliance on the AI system due to misleading marketing, which indirectly contributed to the harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and malfunction directly led to injury and death.

Always just assistance: a roundup of pickups with autonomous driving assistance features

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The article discusses AI systems (Level 2 autonomous driving assistance) and their use in pickup trucks, which qualifies as AI system involvement. However, it does not describe any actual harm or incident resulting from these systems, nor does it indicate a credible imminent risk or hazard. The focus is on informing readers about the technology and cautioning about its limitations, which aligns with providing complementary information rather than reporting an incident or hazard. Therefore, the event is best classified as Complementary Information.

NIO "autopilot-gate" escalates! Li Xiang, Zhou Hongyi and Shen Hui speak out in turn: "L2 to L5 are all industry jargon"

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a fatal accident caused while the autonomous driving feature was active, indicating the AI system's malfunction or failure contributed to the harm. The autonomous driving system is an AI system as it makes driving decisions. The harm is realized (death), fulfilling the criteria for an AI Incident. The subsequent regulatory measures and industry discussions are complementary but do not negate the incident classification. Hence, the primary classification is AI Incident.

Over 500 NIO owners issue joint statement: NIO's assisted driving system did not mislead us

2021-08-18
163.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (NIO's driver-assist system) and discusses a fatal accident related to its use. However, the main focus is on the users' collective statement clarifying that they were not misled about the system's capabilities and that the system is an assistive technology requiring driver attention. There is no new incident of harm caused by the AI system itself reported here, nor is there a new hazard identified. The event is a societal response and clarification following a prior incident, thus it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Owners issue joint statement on their understanding of the NOP system; NIO responds: not an official initiative

2021-08-18
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's NOP assisted driving system) whose use directly preceded a fatal traffic accident causing the death of the driver. This meets the definition of an AI Incident because the AI system's use has directly led to harm to a person. Although the investigation is ongoing and the exact role of the AI system is not fully established, the connection between the AI system's use and the fatal harm is sufficiently clear. The public controversy and statements from owners do not negate the incident classification. Therefore, this event is best classified as an AI Incident.

Fatal NIO crash draws high-profile comment from Li Xiang; NIO's former PR director responds with sharp sarcasm

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the NIO ES8's autonomous driving feature) whose operation directly led to a fatal accident, causing harm to a person. This fits the definition of an AI Incident because the AI system's use directly resulted in injury or harm to a person. The discussion about terminology and industry responses is secondary and does not detract from the primary classification as an AI Incident.

3,113 km on autonomous driving! Lin Wenqin's driving data revealed; friend: he deeply trusted NIO

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event describes a fatal accident involving a vehicle equipped with an AI-based driving assistance system (NIO Pilot's NOP). The driver relied heavily on this AI system, which is designed to assist or partially automate driving tasks. The fatality constitutes injury or harm to a person caused directly or indirectly by the AI system's use. The dispute over the system's capabilities and the company's marketing claims further highlight issues related to the AI system's deployment and user expectations. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in a real-world scenario.

Putian traffic police report on the NIO autonomous driving incident: investigation under way, responsibility to be determined according to law

2021-08-18
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an autonomous driving function, which is an AI system, in the vehicle involved in the accident. The accident has already occurred, indicating realized harm. The police are investigating the incident to determine responsibility, which confirms the event is an AI Incident due to the direct involvement of an AI system in causing harm. There is no indication that the article is merely about updates or responses to a past incident without new harm, nor is it about potential future harm without realized damage. Therefore, the classification is AI Incident.

NIO's former PR director Yiran publishes post accusing an EV-startup boss of gloating

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly references accidents caused by the use of autonomous driving systems, which are AI systems. These accidents represent realized harm (injury or risk to health) linked directly to the malfunction or limitations of the AI system. The discussion about misleading marketing and calls for clearer terminology further underline the role of AI system use and its impact. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm and safety incidents.

The fatal allure of "autonomous driving": automakers and owners are far too optimistic

2021-08-18
网易车讯
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an advanced driver assistance system (NOP) that was active at the time of a fatal crash. The system's malfunction or limitations in detecting obstacles (e.g., failure to recognize road cones and a slow-moving maintenance vehicle) directly contributed to the accident and death, fulfilling the harm criterion (a) injury or harm to a person. The discussion about misleading marketing and user overreliance further supports the AI system's role in the incident. Therefore, this is an AI Incident rather than a hazard or complementary information.

Industry insiders urge: don't make consumers pay for autonomous driving's trial and error; feature activation should be restricted

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of assisted and autonomous driving technologies. It reports on actual fatal accidents linked to these AI systems, indicating direct harm to human health and safety. The discussion about the responsibility of AI systems in accidents and the call for regulatory restrictions on enabling such features further supports the classification as an AI Incident. The harms are realized (fatal accidents), not just potential, and the AI systems' malfunction or misuse is a contributing factor. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Trending! NIO owners' joint statement leaves netizens stunned: a new view of NIO owners

2021-08-18
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's NP/NOP driver assistance system) and references a fatal accident linked to its use. However, the article focuses on the owners' collective statement clarifying the system's capabilities and urging accurate media reporting. There is no new harm or new risk described; the harm (the fatal accident) is prior and not the main focus here. The event is primarily about social response and perception management, which fits the definition of Complementary Information rather than an AI Incident or Hazard.

NIO responds to "autonomous driving" accident: cooperating with the investigation; NOP is not autonomous driving

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOP, an assisted driving AI feature) whose use directly led to a fatal traffic accident, causing harm to a person. This fits the definition of an AI Incident because the AI system's use is directly linked to injury or harm to a person. Although the investigation is ongoing, the reported fatality and the involvement of the AI-assisted driving system meet the criteria for an AI Incident.

Police report on NIO autonomous driving incident: Lin died at the scene after rear-ending a truck; liability will be determined according to law

2021-08-18
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions that an AI system, the NOP navigation assist (an autonomous driving feature), was in use at the time of the fatal crash. The harm (death and injury) has occurred and is directly connected to the use of this AI system. The police investigation and responsibility determination further confirm the incident's seriousness. Hence, this is an AI Incident as per the definitions, involving AI system use leading directly to harm to persons.

Figures across the industry weigh in: "Please treat autonomous driving with reverence"

2021-08-18
网易车讯
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (assisted autonomous driving) whose use directly resulted in a fatal traffic accident, fulfilling the criteria for an AI Incident due to harm to a person. The article also discusses responsibility and safety concerns related to AI systems in vehicles, but the primary focus is on the realized harm caused by the AI system's use. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Industry insiders call for consumers not to foot the bill for autonomous driving trial and error

2021-08-18
网易车讯
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in autonomous and assisted driving, describing real accidents and safety issues caused by their use. The harms include injury and death to consumers, which directly fall under the definition of AI Incident (harm to health of persons). The discussion about regulatory responses and calls for stricter controls further supports the classification as an incident rather than a mere hazard or complementary information. The AI systems' malfunction or misuse in assisted driving has already led to harm, fulfilling the criteria for AI Incident.

Does "autonomous driving" mean we can lie back while driving?

2021-08-17
网易车讯
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of advanced driver assistance systems (e.g., Navigate on Pilot) that use AI algorithms for vehicle control. While it reports an accident involving such a system, the cause is still under investigation and no direct causal link to AI malfunction or misuse is confirmed. The article mainly focuses on the risks of misunderstanding and overreliance on these AI-based systems, which could plausibly lead to accidents or harm in the future. It also discusses the need for clearer communication and user education to prevent such harms. Since no confirmed harm directly caused by AI is established, and the main focus is on potential risks and the need for caution, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

NIO responds to the crash involving "萌剑客" (the owner's online handle): NOP Navigate on Pilot is not autonomous driving; investigation underway

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI-based driver assistance system that influences vehicle control and decision-making. The accident causing a fatality while the system was active indicates direct or indirect harm caused by the AI system's use or malfunction. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to injury or harm to a person. The event is not merely a potential hazard or complementary information but a realized harm involving an AI system.

Driving data shows Lin Wenqin used NIO's NOP feature for over half his mileage in July

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident where the driver was using an AI-based navigation assistance system (NOP) during the incident. The system's involvement is explicit, and the harm (death) is realized. Although the exact cause is under investigation, the AI system's role is pivotal as it was active during the accident. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to injury or harm to a person.

Traffic police respond exclusively to NIO's "autonomous driving gate": the accident occurred on a flat, relatively straight stretch of road

2021-08-18
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NIO ES8's assisted driving system) whose use directly led to a fatal accident, fulfilling the criteria for an AI Incident. The AI system's inability to detect a stationary obstacle and the resulting collision caused harm to a person (death), which is a primary harm category. The article also discusses the system's technical limitations and user misunderstanding, reinforcing the AI system's pivotal role in the incident. Although the system is classified as L2 assisted driving, it still qualifies as an AI system under the definitions provided. Hence, this is not merely a hazard or complementary information but a confirmed AI Incident.

"NIO owners" issue joint statement: NIO did not mislead us

2021-08-18
163.com
Why's our monitor labelling this an incident or hazard?
The event describes a car accident involving a vehicle with an AI-based assisted driving system activated, which led to harm (the crash). The police are investigating, and the car owners clarify their understanding of the system's capabilities, indicating awareness of the AI system's nature. The AI system's use is directly involved in the incident, and harm has occurred. Even though the investigation is ongoing and causality is not fully established, the event meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm (a car accident).

Traffic police respond to the death of a 31-year-old entrepreneur in a NIO crash: the accident stretch was relatively flat and straight

2021-08-18
163.com
Why's our monitor labelling this an incident or hazard?
The vehicle's automatic driving feature is an AI system that was in use at the time of the accident. The fatal crash directly caused harm to a person, fulfilling the criteria for an AI Incident. The police investigation and statements provide context but do not negate the AI system's involvement in the harm. Hence, this is classified as an AI Incident due to the direct link between the AI system's use and the fatal harm.

Assisted driving causes trouble: overhyped marketing must stop, and regulation needs strengthening

2021-08-18
网易车讯
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Pilot assisted driving system, which was active at the time of the fatal crash. The harm (death of the driver) directly resulted from the use of this AI system. The article also discusses systemic issues such as overhyped marketing and lack of proper user education and regulatory frameworks, which contributed indirectly to the incident. Given the direct causal link between the AI system's use and the fatal harm, this qualifies as an AI Incident rather than a hazard or complementary information. The presence of multiple similar accidents involving AI-assisted driving further supports this classification.

NIO's accident-gate wipes out tens of billions in market value; has the capital-crowded autonomous driving bubble burst?

2021-08-18
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a fatal accident involving a NIO vehicle using its autonomous driving assistance system (NOP). Autonomous driving systems are AI systems as they infer from sensor inputs to control vehicle behavior. The death of a person in this accident is a direct harm to health caused by the use of the AI system. The article also discusses the market impact and broader industry context, but the primary focus is the accident and its consequences. Hence, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Hyped-to-the-skies "autonomous driving" finally hits the brakes at the cost of human lives

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (NOP autonomous driving feature) whose use directly led to fatal accidents, causing injury and death (harm to persons). The AI system's malfunction or limitations, combined with user misunderstanding and overreliance, contributed to these harms. This fits the definition of an AI Incident because the AI system's use directly or indirectly caused harm to people. The article also references regulatory measures responding to these incidents, but the primary focus is on the realized harm from the AI system's use, not just potential or complementary information.

One side dares to sell, the other dares to buy! How big a safety hazard are the autopilot-assist gadgets selling briskly online?

2021-08-18
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely automatic driving assistance systems that rely on sensor data and AI algorithms to monitor driver engagement and control the vehicle. The misuse of these systems through aftermarket devices that spoof sensor inputs has directly contributed to fatal accidents, constituting injury and harm to persons. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction (due to spoofing) have directly led to harm. The article also discusses the broader safety risks and regulatory responses, but the primary focus is on realized harm caused by AI system misuse.

US investigates Tesla; shares of China's NIO, XPeng, and Li Auto plunge

2021-08-18
网易车讯
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of Tesla's Autopilot and NIO's Navigate on Pilot, which are advanced driver-assistance systems relying on AI for autonomous or semi-autonomous driving functions. The fatal crash involving NIO's system and multiple crashes involving Tesla's system demonstrate direct harm to human life and safety caused by the use or malfunction of these AI systems. The official investigation by NHTSA further confirms the recognition of these harms. The economic impact on stock prices of these companies is a secondary consequence of the AI-related safety incidents. Given the direct link between AI system use/malfunction and realized harm, this event is classified as an AI Incident rather than a hazard or complementary information.

31-year-old entrepreneur dies in a crash while driving a NIO on autopilot! Family questions the delay in obtaining vehicle data

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The NIO Pilot system, including the NOP feature, is an AI system providing advanced driver assistance with autonomous navigation capabilities on highways. The driver had activated this system when the vehicle collided with a road construction vehicle, resulting in fatal injuries. The AI system's involvement is explicit and central to the event. The harm (death) is direct and materialized. The article also discusses the manufacturer's response and data handling, which are relevant but secondary to the primary harm caused. Hence, this event meets the criteria for an AI Incident as the AI system's use directly led to injury and death.

NIO owner dies in a crash: don't let autonomous driving become a road killer

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (automatic driving assistance) whose use directly led to a fatal accident, causing harm to a person. The AI system's failure to detect a stationary vehicle and the resulting collision is a direct cause of the harm. The article also references prior similar incidents and systemic safety concerns, confirming the presence of realized harm due to AI system malfunction or limitations. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

NIO's automated driving assistance causes a death; the friend who recommended the purchase speaks out: deeply guilt-ridden

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, namely the NIO Pilot automatic driving assistance system, which was active during the fatal crash. The harm is direct and severe: a person died and vehicles were damaged. The AI system's role is central to the incident, as the accident occurred while the AI-assisted driving feature was engaged. Although the investigation is ongoing, the presence and use of the AI system at the time of the accident and the resulting fatality meet the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm caused or contributed to by an AI system.

NIO calls it autonomous driving in its marketing, but assisted driving after a fatal crash?

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's Navigate on Pilot, an advanced driver assistance system with AI capabilities) whose use directly led to a fatal car accident, causing harm to a person. The article details the system's role in the accident, the company's marketing practices that may mislead users about the system's capabilities, and the resulting death. This fits the definition of an AI Incident, as the AI system's malfunction or limitations directly caused injury or harm to a person. The article also references similar past incidents, reinforcing the pattern of harm linked to the AI system's use.

"Arrogant" Li Bin: does NIO have no rivals left?

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's NOP autonomous driving system during fatal accidents, indicating the AI system's involvement in causing harm (death of a driver). This meets the definition of an AI Incident, as the AI system's use has directly led to injury or harm to a person. The article also references previous similar incidents and ongoing investigations, reinforcing the direct link between the AI system's use and realized harm.

31-year-old founder dies driving a NIO with NOP enabled; NIO: no data has been deleted or altered

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The NIO ES8's NOP system is an AI-based autonomous driving system. The accident occurred while the AI system was engaged, leading directly to a fatality. This constitutes an AI Incident because the AI system's use directly resulted in injury and death. The company's statements about data handling and cooperation with authorities provide context but do not negate the incident classification.

L2.999999?! The "autonomous driving" involution tragedy

2021-08-16
网易车讯
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of advanced driver-assistance systems (ADAS) with autonomous driving features. It documents real fatal accidents where these AI systems' limitations and malfunctions contributed to harm (death and injury). The misuse or misunderstanding by users, combined with overhyped marketing, also indirectly caused harm. This fits the definition of an AI Incident because the AI system's use and malfunction have directly or indirectly led to injury or harm to persons. The article also discusses regulatory and industry responses, but the primary focus is on the realized harms caused by these AI systems, not just potential hazards or complementary information.

Outrageous! NIO employee accessed the vehicle involved without authorization, summoned by traffic police for investigation

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The AI system (NOP automated driving assistance) was active during the fatal accident, indicating its involvement in the incident that caused harm to a person (the driver's death). The unauthorized access and potential tampering with vehicle data by the company's employee further implicate the AI system's role in legal and ethical violations. The misleading marketing about the system's capabilities also indirectly contributes to harm by potentially causing users to over-rely on the system, leading to accidents. These factors combined meet the criteria for an AI Incident as the AI system's use and misuse have directly and indirectly led to harm and legal violations.

Employee summoned for privately accessing the accident vehicle? NIO officially denies it

2021-08-16
网易车讯
Why's our monitor labelling this an incident or hazard?
The NIO Pilot system is an AI-based driver assistance system that was active during the accident, and the accident resulted in a fatality, which is a direct harm to a person. The article discusses the use and malfunction (or failure) of this AI system as a contributing factor to the incident. The ongoing investigation and data extraction relate to the AI system's role in the accident. Hence, the event meets the criteria for an AI Incident due to the direct link between the AI system's use and the harm caused.

Family adamant; whether NIO accessed the accident vehicle becomes the focus of dispute

2021-08-16
网易车讯
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Pilot assisted driving system, which was active during a fatal accident. The harm (death of the driver) has occurred, and the AI system's involvement is under scrutiny as a possible cause or contributing factor. The family's claim about unauthorized access to the accident vehicle by NIO staff also relates to the investigation of the AI system's role. Given the direct link between the AI system's use and the fatal harm, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Friend of the NIO owner reveals that the vehicle will undergo forensic examination and data extraction

2021-08-16
网易车讯
Why's our monitor labelling this an incident or hazard?
The NOP feature is an AI system providing autonomous driving capabilities. The fatal crash caused by its use constitutes direct harm to a person, fulfilling the criteria for an AI Incident. The ongoing investigation and data extraction are part of the incident's response, but the core event is the fatal accident linked to the AI system's use. The possible data tampering allegation further underscores the AI system's central role in the harm and legal implications. Therefore, this event is classified as an AI Incident.

Entrepreneur dies in a NIO crash: what is Navigate on Pilot? Is automaker marketing ambiguous?

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOP driver assistance system, which was active during the fatal crash. The harm includes the death of the driver and significant property damage. The AI system's failure to detect and respond to obstacles on the highway directly contributed to the incident. The article also discusses the broader context of AI-assisted driving systems, their limitations, and regulatory responses, but the core event is a realized harm caused by AI system use. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

31-year-old entrepreneur dies in a crash with autonomous driving engaged: can humans trust autonomous driving?

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic driving feature (NOP) that was active during the fatal accident. The harm (death of the driver) directly resulted from the use of this AI system. The article also highlights the current technological limitations and the need for driver vigilance, which further confirms the AI system's involvement in the incident. Hence, it meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

31-year-old entrepreneur killed in a NIO accident: should "autonomous driving" take the blame?

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOP automated driving assistance system, which is a Level 2 autonomous driving AI system. The system's use during the accident and its failure to detect a stationary highway maintenance vehicle directly contributed to the fatal crash. This constitutes direct harm to a person caused by the use and malfunction of an AI system. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's malfunction and use have directly led to injury and death.

31-year-old entrepreneur in a NIO accident: should autonomous driving take the blame?

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOP automated driving assistance system (Level 2 autonomous driving) active during the accident. The system's failure to detect and respond appropriately to a stationary highway maintenance vehicle directly led to a fatal crash, causing injury and death. This meets the definition of an AI Incident because the AI system's malfunction directly caused harm to a person. The article also discusses the legal and technical responsibility boundaries, but the key point is that the AI system's limitations and use were pivotal in the incident.

31-year-old NIO owner dies in a rear-end collision! The fault lies with autonomous driving, the sin with overhyped marketing

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic driving feature (NOP navigation) in a NIO vehicle. The fatal crash is directly linked to the use of this AI system, which failed to prevent a rear-end collision. The harm is a fatal injury to a person, which fits the definition of an AI Incident (harm to health of a person). The article also discusses the overpromotion and misunderstanding of the AI system's capabilities, but the core event is the fatal accident caused by the AI system's malfunction or misuse. Hence, it is classified as an AI Incident rather than a hazard or complementary information.

Behind the NIO accident: the fatal allure of immature technology; both the automaker and owners were too optimistic

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Pilot's Navigate on Pilot (NOP) assisted driving feature, which uses AI technologies such as sensor fusion, radar, and camera data to perform autonomous driving tasks under certain conditions. The system's malfunction or limitations directly contributed to a fatal crash, causing harm to a person (the driver). The article details the use of the AI system, the failure to detect obstacles, and the resulting death, fulfilling the criteria for an AI Incident. The controversy over manufacturer responsibility, data transparency, and user understanding further supports the classification as an incident rather than a hazard or complementary information. The harm is realized and directly linked to the AI system's use and malfunction.

Lawyer speaks out: NIO staff's unauthorized access to the accident vehicle is illegal; NIO issues a statement

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically the automated driving assistance system in the NIO ES8 and other electric vehicles. The fatal accident is directly linked to the use of this AI system, and the alleged unauthorized tampering with vehicle data by NIO staff could constitute a violation of legal rights and obstruct justice. Additionally, the article highlights dangerous misuse of ADAS by drivers, which has already resulted in harm (death) and ongoing safety risks. These factors meet the criteria for an AI Incident, as the AI system's use and misuse have directly or indirectly led to harm to a person and potential legal violations.

NIO denies tampering with data; Li Xiang, Zhou Hongyi, and Shen Hui speak out in succession

2021-08-17
网易车讯
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the assisted driving feature NOP) that was in use during a fatal accident, which is a harm event. However, the article mainly focuses on the company's denial of data tampering, clarifications about the technology, expert opinions on terminology and user education, and regulatory measures. There is no new AI Incident reported beyond the known accident, nor is there a new AI Hazard described. The article serves primarily as complementary information that contextualizes the incident, discusses the challenges of AI-assisted driving, and highlights ongoing governance and user education efforts. Therefore, the classification is Complementary Information.

Technicians accessed the vehicle involved without authorization? NIO responds

2021-08-16
网易车讯
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI-based driver assistance system that was active during the accident, which led to a fatality, fulfilling the harm criterion. The system's inability to detect static obstacles and the requirement for user oversight are relevant to the cause of the accident. Additionally, the alleged unauthorized tampering with vehicle data by technical personnel could impact the investigation and liability, further linking the AI system's use and management to the incident. Hence, the event meets the definition of an AI Incident as the AI system's use and potential misuse have directly led to harm and legal scrutiny.

NIO technicians accused of unauthorized access to the crash vehicle! Customer service says it will look into the matter

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event centers on a fatal car accident involving a vehicle with autonomous driving features (an AI system). The article discusses the investigation and the alleged unauthorized intervention by NIO technical staff on the accident vehicle, which could affect data critical for understanding the AI system's role in the accident. However, there is no direct evidence that the AI system malfunctioned or caused harm, nor that the data tampering has been confirmed. The article mainly updates on the ongoing investigation and company responses, fitting the definition of Complementary Information rather than a new AI Incident or Hazard.

Police summon NIO personnel; if vehicle data was tampered with, NIO bears full responsibility!

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The autonomous driving system qualifies as an AI system because it infers from input data to control vehicle behavior. The accident caused injury (death) to a person, fulfilling the harm criterion for an AI Incident. The investigation into potential data tampering by the company's technical staff further indicates the AI system's role in the incident. Therefore, this event is classified as an AI Incident due to the direct harm caused by the use of an AI system in autonomous driving leading to a fatal accident.

Li Auto founder Li Xiang calls on media and institutions to standardize Chinese terminology for autonomous driving to avoid misunderstanding

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear from the description of the NOP feature, which is an AI-enabled automatic driving assistance system. The fatal accident occurred while the system was in use, directly causing harm to a person. The founder's call for standardized terminology indicates that misunderstanding of AI system capabilities may have contributed to misuse or overreliance, which is an indirect factor in the harm. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use and the associated risks of misunderstanding its capabilities.

NIO stresses "Navigate on Pilot" is not autonomous driving; auto expert: no true autonomous driving exists yet, and overhyped marketing misleads consumers

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NIO's Navigate on Pilot, an AI-assisted driving system) whose use directly led to a fatal traffic accident, causing harm to a person. This meets the definition of an AI Incident because the AI system's use was a contributing factor in the harm. The article also highlights the risk of consumer misunderstanding and overreliance on such systems, but the realized harm (death) is the key factor. The presence of regulatory guidance and expert commentary does not change the classification, as they serve as complementary information. Hence, the event is classified as an AI Incident.

A Tesla owner plays Honor of Kings after enabling Autopilot while driving on a highway at night

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The Tesla driver’s behavior of playing a game while the AI system controls the vehicle represents misuse of an AI system designed for driving assistance, creating a direct risk of harm. The referenced fatal accident involving a NIO vehicle using a similar AI system confirms that such systems have caused actual harm. The AI systems are explicitly involved in vehicle control, and their malfunction or misuse has led to injury or death, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but involves actual harm or risk realized through AI system use.

NIO responds to rumors that "technicians accessed the vehicle involved without authorization": the company has not deleted or altered any data

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving a vehicle equipped with an AI-based assisted driving system (NIO Pilot/NOP). The driver was using this system at the time of the crash, which directly led to harm (death and property damage). Additionally, the report includes allegations of unauthorized data access and possible tampering by the company's technical staff, which could constitute a breach of legal obligations and impact the investigation. The AI system's development, use, and potential malfunction are central to the incident and its consequences. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to harm and legal concerns.

NIO responds to the "August 12 Fujian traffic accident investigation": no data has been deleted or altered

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (NIO's vehicle systems) because the data extraction and safety operations relate to AI-enabled vehicle functions. However, there is no indication that the AI system caused or contributed to harm, nor is there a plausible risk of harm described. The company's statement denies data tampering and confirms cooperation, which is a response to an ongoing investigation rather than a new incident or hazard. Therefore, this is Complementary Information providing context and updates on an existing situation without introducing new harm or risk.

31-year-old entrepreneur killed by NIO's "autonomous driving"? Industry warning: do not rely on assisted driving

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an assisted driving system (NOP) that was active at the time of a fatal traffic accident. The system's limitations and the driver's overreliance on it are identified as contributing factors to the harm (death). This meets the definition of an AI Incident because the AI system's use directly led to injury or harm to a person. The article also discusses the broader context of AI-assisted driving risks and regulatory responses, but the primary focus is the fatal incident linked to the AI system's use, not just potential future harm or complementary information.

NIO: the company has not deleted or altered any data, nor have any employees been summoned by police

2021-08-16
网易车讯
Why's our monitor labelling this an incident or hazard?
While the vehicle involved is presumably equipped with AI systems, the article does not state that the AI system caused or contributed to the accident or any harm. The company's statement denies data tampering and police involvement, and the investigation is ongoing. There is no indication of realized or potential harm directly linked to AI system malfunction or misuse. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on an ongoing investigation involving an AI-equipped system without new harm or risk disclosed.

Scandal after scandal, sales falling from their pedestal: what's wrong with NIO?

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of NIO's AI-based driver assistance system (NOP/NAD) in multiple traffic accidents, including a fatal crash. The AI system's failure to detect obstacles and the resulting collisions constitute direct harm to persons and property. The misleading marketing of the system as 'automatic driving' when it is only an assistance system also contributes to consumer harm. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction have directly led to injury and death, as well as reputational and economic harm to the company and its customers.

Technicians summoned for unauthorized access to the vehicle involved; NIO responds: conducting an internal review

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving a vehicle equipped with an AI-assisted driving system (NOP). The AI system was active during the accident, which directly led to harm (death). Additionally, the unauthorized access and possible tampering of the vehicle's data by technical personnel could obstruct justice and accountability, constituting a violation of legal and ethical standards. The AI system's development, use, and data management are all implicated. Hence, this is an AI Incident as it involves realized harm linked to the AI system's use and subsequent data handling issues.
On May 24, media reported that a NIO ES8 crashed on Jianguomennei Avenue in Beijing, demolishing the central median barrier and severely damaging the front of the vehicle; fortunately, no one was injured.

2021-08-14
证券之星
Why's our monitor labelling this an incident or hazard?
The NIO ES8's autonomous driving features qualify as AI systems because they perform real-time decision-making and control of the vehicle. The accidents described are directly linked to the use and malfunction of these AI systems, resulting in serious harm including death and injuries. Therefore, these events meet the criteria for AI Incidents as the AI system's malfunction or use has directly led to harm to persons and property.
Deceased Entrepreneur's Driving Data Exposed! NIO Executive Called Out for Eating a Meal with Autonomous Driving Engaged on the Highway

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Pilot's Navigate on Pilot (NOP) feature, which is an AI-based assisted driving system. The fatal accident occurred while this system was engaged, and the system's inability to detect static obstacles like road barriers contributed to the collision. The harm (death of the driver) is directly linked to the AI system's use and its limitations. The article also discusses systemic issues such as insufficient user education and warnings about the system's capabilities, reinforcing the AI system's role in the incident. Hence, this is an AI Incident as per the definitions provided.
Customer Test Drive of an XPeng G3: Salesperson Demonstrating Autonomous Driving Rear-Ends the Car Ahead, Airbags Deploy

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the XPILOT 2.5 automatic driving assistance system with ACC functionality, which was in use during the test drive. The system's failure to brake or maintain safe distance directly caused a collision, resulting in property damage and airbag deployment. This meets the definition of an AI Incident because the AI system's malfunction during use directly led to harm (property damage and potential injury risk). The event is not merely a potential hazard or complementary information but a realized incident involving AI malfunction and harm.
Latest Development: NIO Technicians Summoned by Police for Privately Accessing Lin Wenqin's Crashed Vehicle

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NIO Pilot's NOP navigation assistance, an advanced driver assistance system with AI components for autonomous driving functions. The fatal accident directly resulted from the use of this AI system, constituting harm to a person. The subsequent police investigation into potential data tampering by the manufacturer's technical staff further underscores the AI system's central role in the incident. Therefore, this qualifies as an AI Incident because the AI system's use and possible malfunction or misuse have directly led to a fatality, fulfilling the criteria for injury or harm to a person.
Employees Summoned for Privately Accessing the Accident Vehicle? NIO Denies It, with the Family and NIO Telling Conflicting Stories

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident where the driver was using an AI-based advanced driver assistance system (NOP). The AI system's involvement is explicit and directly related to the harm (death). The dispute about data handling does not negate the fact that the AI system's use was a contributing factor to the incident. Therefore, this qualifies as an AI Incident due to direct harm caused during the use of an AI system.
Zhou Hongyi Responds to Li Xiang's Proposal on Autonomous Driving Terminology, Saying Automakers Should Not Over-Market

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The article centers on the use and marketing of AI-based autonomous driving systems and the potential for user misunderstanding due to exaggerated claims. However, it does not describe any actual harm or incident caused by AI systems, nor does it report a specific event where AI malfunction or misuse led to injury, rights violations, or other harms. Instead, it is a discussion and advocacy for clearer terminology and responsible communication about AI capabilities in vehicles, which is a governance and societal response to AI-related issues. Therefore, it fits the category of Complementary Information rather than an Incident or Hazard.
Three Accidents in Eight Months: Has the Shenhai Expressway Become NIO's "Curse"?

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the NIO NOP assisted driving system—whose use and malfunction have directly caused fatal and serious accidents. The harm includes injury and death to persons, which is a primary category of AI Incident harm. The article discusses the system's failure to detect obstacles and the resulting crashes, indicating direct causation. Although investigations are ongoing, the evidence and multiple incidents strongly support classification as an AI Incident rather than a hazard or complementary information. The involvement of the AI system in causing harm is clear and direct, meeting the definition of an AI Incident.
Family Alleges Data Tampering! NIO Sent Technicians to Privately Access the Accident Vehicle, Possibly a Criminal Offense

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event clearly describes the use of an AI system (NIO's NOP autonomous driving feature) that malfunctioned by failing to detect a slow-moving vehicle, resulting in a fatal crash. This meets the criteria for an AI Incident as the AI system's malfunction directly led to injury and death (harm to a person). The subsequent unauthorized access and possible tampering with vehicle data by the manufacturer's technical personnel further complicate the incident and raise legal concerns, reinforcing the classification as an AI Incident. The presence of the AI system, the direct link to harm, and the legal implications confirm this classification.
Police Summon NIO Personnel for Privately Accessing the Accident Vehicle

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The incident involves AI systems embedded in the NIO vehicle, and the unauthorized access to the vehicle's data by company personnel is a misuse of the AI system's data handling capabilities. While no direct harm from the AI system malfunction or misuse is reported, the potential tampering with data related to a fatal accident is a serious legal and ethical issue. Since no new harm has been caused by the AI system's operation itself, but there is a plausible risk of harm to justice and accountability, this event is best classified as Complementary Information, providing an update on the investigation and potential legal consequences related to AI system data handling.
The Person Who Recommended NIO to Lin Wenqin: "I Am Deeply Grieved and Blame Myself Intensely"

2021-08-16
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the autonomous driving feature of the NIO ES8 vehicle. The use of this AI system directly led to a fatal traffic accident, causing harm to a person, which fits the definition of an AI Incident. The difficulties in extracting driving data and the investigation further confirm the AI system's involvement in the harm. Hence, the classification as an AI Incident is appropriate.
Fatal Accident with NIO's Automatic Driving Assistance! A NIO Vice President Previously Demonstrated Eating a Meal in the Car

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's automatic driving assistance) whose use directly led to a fatal accident, causing harm to a person. The system is described as an AI-based driver assistance feature requiring driver supervision, but the accident occurred while it was engaged. The death of the driver is a clear injury/harm to a person, fulfilling the criteria for an AI Incident. The article also highlights misuse and misunderstanding of the system's capabilities, reinforcing the link between the AI system's use and the harm. Hence, this is classified as an AI Incident.
31-Year-Old Entrepreneur and Founder of Several Well-Known Brands Dies in "Autonomous Driving" Crash! NIO Responds: Investigation Underway

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NIO Pilot (NOP) automated driving assistance system, which was active during the fatal crash. The harm (death of the driver) directly resulted from the use of this AI system, meeting the definition of an AI Incident. The article details the accident circumstances, the AI system's role, and the ongoing investigation, confirming the AI system's involvement in causing harm. Therefore, this is not merely a potential hazard or complementary information but a concrete incident where AI use led to fatal harm.
NIO Employees Privately Accessed the Vehicle Involved; the Company Says It Was a "Reasonable Power-Down"?

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (the driving assistance system) whose use is linked to a fatal accident, constituting harm to a person. The unauthorized access and possible tampering with vehicle data by company staff could impact the investigation and accountability, indicating misuse or mishandling related to the AI system. Therefore, this qualifies as an AI Incident because the AI system's use and subsequent handling have directly or indirectly led to harm and legal concerns.
Daily Auto Amusements: Neither Side Yields as a Honda Takes On a Rolls-Royce

2021-08-17
网易车讯
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in vehicles (NOP by NIO, Tesla's autopilot) being active during accidents that resulted in death and injuries, which fits the definition of AI Incidents as the AI system's use directly or indirectly led to harm to persons. The NHTSA investigation confirms the seriousness and official recognition of these harms. The other content about traffic disputes and legal cases does not involve AI systems or AI-related harm. Hence, the overall classification is AI Incident based on the described fatal and injury-causing accidents involving AI driving assistance systems.
NIO Responds to the Death of the Chain-Brand Founder in a Car Crash: Li Bin Expresses Condolences!

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the automatic driving assistance (NOP) feature in the NIO ES8 vehicle. The use of this AI system directly led to a fatal car accident, causing injury and death, which fits the definition of an AI Incident. The involvement of the AI system is clear, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident.
Multiple Parties Call for a Unified Chinese Standard for Autonomous Driving Terminology

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the NOP feature is an AI-enabled driver assistance system. The accident resulting in a fatality is directly linked to the use of this AI system, constituting harm to a person. The article also discusses the misuse or misunderstanding of AI system capabilities leading to harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use and the broader implications of miscommunication about AI capabilities leading to safety risks.
It Isn't Assisted Driving That Kills; It's the Driver

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's assisted driving, a Level 2 AI system) whose use during driving directly contributed to a fatal traffic accident. The harm (death of the driver) has occurred, fulfilling the criteria for an AI Incident. The AI system's involvement is through its use (not malfunction), and the accident is linked to overreliance or misuse of the assisted driving system. The article also references regulatory responses and industry implications, but these are secondary to the primary event of the fatal accident. Hence, the classification is AI Incident.
北云科技 Launches a Surface-Mount Integrated Navigation Module, Further Accelerating Mass Production of Smart Cars!

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the combined navigation module with AI-based algorithms and self-developed chips) used in autonomous driving, which fits the definition of an AI system. However, the article does not describe any injury, disruption, rights violation, property/community/environmental harm, or other significant harms caused by the system. Nor does it suggest any credible risk of such harms occurring in the future. The content mainly details the technological innovation, performance, integration, and market adoption of the AI system, which aligns with the definition of Complementary Information. There is no indication of malfunction, misuse, or potential hazard. Hence, the classification as Complementary Information is appropriate.
31-Year-Old Entrepreneur Dies in a Crash After Activating His NIO's Autonomous Driving Feature; Friend: Driving Data Is Being Extracted

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the automatic driving feature (NOP pilot state) of the NIO vehicle. The use of this AI system directly led to a fatal traffic accident, causing harm to a person, which fits the definition of an AI Incident. The involvement of the AI system is not speculative; it was active at the time of the crash, and the harm (death) has occurred. Therefore, this is classified as an AI Incident.
Driving Data from the Fatal NIO Crash Surfaces: No Sudden Deceleration

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NIO Pilot with NOP autonomous driving functionality. The fatal accident occurred while the AI system was engaged, and the system's failure to detect static obstacles is a plausible contributing factor to the crash. This directly led to harm (death of the driver), fulfilling the criteria for an AI Incident. The article also references regulatory responses and safety management, but the primary focus is the fatal accident linked to AI system use.
Eleven Accidents Since 2018: The US Launches a Thorough Investigation of Autopilot, an Awkward Backdrop for Tesla's AI Day

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Tesla's Autopilot, an AI system for autonomous driving, and details at least 11 accidents since 2018 caused by this system, resulting in one death and multiple injuries. The involvement of the AI system in these accidents is direct and causal. The ongoing regulatory investigation and potential recalls further confirm the significance of the harm caused. Hence, this is an AI Incident as per the definitions, since the AI system's use has directly led to injury and death.
Well-Known 31-Year-Old Entrepreneur Lin Wenqin Dies in a Car Crash; Friend: He Was Driving a NIO ES8 with Navigate on Pilot Enabled

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the NIO Pilot automatic driving assistance system—being used at the time of a fatal car crash. The system's activation is a key factor in the incident, as the driver was using the AI-enabled navigation assist when the collision occurred. The harm (death of the driver) is directly linked to the AI system's use, fulfilling the criteria for an AI Incident. The ongoing investigation does not negate the realized harm and the AI system's involvement. Therefore, this event is classified as an AI Incident.
31-Year-Old Entrepreneur Dies in a Traffic Accident Suspected to Involve "Autonomous Driving"! NIO: Investigation Underway

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the assisted driving feature (NOP) of the NIO ES8 vehicle. The driver activated this AI system, which directly or indirectly contributed to a fatal traffic accident causing death, fulfilling the harm criteria (a) injury or harm to a person. The article confirms the AI system's involvement in the incident and the resulting harm, making it an AI Incident rather than a hazard or complementary information.
31-Year-Old Entrepreneur Dies in a Car Crash; NIO Responds: Navigate on Pilot Is Not Autonomous Driving

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the Navigate on Pilot (NOP) driver assistance feature, which is an AI-based system assisting driving. The fatal accident occurred while this system was active, directly leading to the death of the driver, fulfilling the criterion of harm to a person caused directly or indirectly by the AI system's use. The company's response and ongoing investigations further confirm the AI system's involvement. Hence, this is an AI Incident rather than a hazard or complementary information.
Another Accident! 31-Year-Old NIO Owner and Founder of Several Well-Known Brands Dies in an Autonomous Driving Crash!

2021-08-14
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NIO Pilot automatic driving system engaged during the accident. The fatal crash caused the death of the driver, which is a direct harm to a person. The AI system's malfunction or limitations in its autonomous driving capabilities are implicated in the incident. This meets the definition of an AI Incident because the AI system's use directly led to injury or harm to a person. The article also references ongoing investigations but confirms the harm has occurred, so it is not merely a hazard or complementary information.
United States Opens Investigation into the Safety of Tesla's Autonomous Driving

2021-08-17
网易车讯
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot system is an AI system designed to assist or automate driving tasks. The reported accidents, injuries, and death are directly linked to the system's failure to detect and respond appropriately to stationary emergency vehicles, which is a malfunction of the AI system. The investigation by NHTSA and recommendations by NTSB further confirm the significance of the AI system's role in causing harm. Therefore, this event meets the criteria for an AI Incident as it involves realized harm caused by the use and malfunction of an AI system.
So Sudden! Accident on the Shenhai Expressway! Well-Known 31-Year-Old Entrepreneur Dies

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NIO Pilot automatic driving assistance system (NOP), which is an AI-based driver assistance technology. The use of this AI system directly preceded and contributed to a fatal traffic accident causing the death of the driver, which is a clear harm to a person. Although the exact cause is under investigation, the AI system's involvement in the vehicle's operation at the time of the accident and the resulting fatality meet the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm linked to AI system use.
Doubts Emerge in NIO's Statement: If Privately Accessing the Vehicle Was Only to Cut Power, Why Not Do It Openly?

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The vehicle involved incorporates an AI system, given its advanced autonomous driving and data-recording capabilities. The event describes the company's technical staff accessing the vehicle's systems without proper authorization, which could plausibly lead to violations of legal rights and compromise the integrity of accident data. Although no confirmed data tampering or direct harm from AI malfunction is reported, the unauthorized intervention and conflicting accounts raise credible concerns about potential future harm or legal violations. Since the harm is not confirmed but plausible, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information, because it focuses on the unauthorized access and its implications; nor is it unrelated, as it directly involves an AI system and potential harm.
Do Ride-Hailing Drivers Have Only Five Years Left in Their "Careers"?

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned: the NOP assisted driving system in the NIO vehicle, which is an AI-enabled driver assistance technology that takes partial control of the vehicle based on navigation data. The fatal accident occurred while this system was active, indicating the AI system's involvement in the incident. The harm is realized (death of the driver), fulfilling the criteria for injury or harm to a person. The article also notes previous accidents involving the same system on the same highway, reinforcing the link between the AI system's use and harm. Although the system is described as assisted driving rather than full autonomous driving, the AI system's malfunction or failure to prevent the accident directly led to harm. Hence, this is an AI Incident rather than a hazard or complementary information.
Another Accident! 31-Year-Old NIO Owner and Founder of Several Well-Known Brands Dies in an Autonomous Driving Crash!

2021-08-14
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-based automatic driving assistance system (NIO Pilot) during the fatal crash. The harm (death of the driver) has occurred and is directly linked to the AI system's operation in autonomous driving mode. This meets the definition of an AI Incident, as the AI system's use has directly led to injury and harm to a person. The article also references previous similar incidents and ongoing investigations, reinforcing the classification as an AI Incident rather than a hazard or complementary information.
NIO Repeatedly Embroiled in Safety Controversies as L2 Autonomous Driving Exposes Recognition and Perception Flaws

2021-08-17
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned: NIOPilot, an L2-level automated driving assistance system using AI for perception and control. The fatal accident occurred while the AI system was active (NOP mode), and the system failed to detect a stationary obstacle, leading to a collision and death. This is a direct harm to human life caused by the AI system's malfunction or limitation. The article also discusses the broader safety concerns and technical limitations of L2 ADAS systems, confirming the AI system's pivotal role in the incident. Hence, it meets the criteria for an AI Incident as per the definitions provided.
"Mengjianke" Dies in a Car Crash as NIO Driving Data Surfaces: No Sudden Deceleration

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the autopilot (NOP) feature of the NIO ES8 vehicle. The AI system's malfunction or failure to detect a road hazard plausibly led to the fatal crash, causing harm to a person. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction directly led to injury and death.
车评网: Two Deaths Lay Bare NIO's PR Playbook of Immediate Blame-Shifting

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (NOP driver assistance) whose use is linked to fatal accidents, causing harm to persons. The company's statements and marketing practices suggest that users may have misunderstood the system's capabilities, leading to overreliance and accidents. This is a direct or indirect contribution of the AI system to harm, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or responses but reports on actual harm caused, prioritizing incident classification over hazard or complementary information.
Reports Circulate Online That a Chain-Brand Founder Died in a Crash, Reportedly While Using NIO's NOP Feature

2021-08-14
163.com
Why's our monitor labelling this an incident or hazard?
The vehicle's NOP function is an AI system providing automated driving assistance, including autonomous lane changes and speed adjustments. The accident happened while this AI system was in use, and the driver died, constituting direct harm caused by the AI system's use or malfunction. This fits the definition of an AI Incident because the AI system's use directly led to injury and death. The article also discusses regulatory responses, but the primary focus is the fatal accident linked to the AI system's operation, not just complementary information or potential hazards.
Heartbreaking! Well-Known Entrepreneur "Mengjianke" Dies in a Car Crash

2021-08-15
163.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system, specifically the NIO Navigate on Pilot (NOP) driver assistance feature, which is an AI-enabled advanced driver assistance system. The use of this AI system directly led to a fatal traffic accident causing harm to a person, fulfilling the criteria for an AI Incident. The article details the harm caused, the AI system's involvement, and ongoing investigations, which confirms the classification as an AI Incident rather than a hazard or complementary information.
Trillions in Market Value and No Bear Market!

2021-08-16
China Finance Online
Why's our monitor labelling this an incident or hazard?
The autonomous driving system is explicitly mentioned and was in use at the time of the accident, which directly caused the death of the individual. This constitutes an AI Incident because the AI system's use directly led to injury and death, fulfilling the criteria of harm to a person. The article also highlights the current limitations and legal framework around autonomous driving, reinforcing the AI system's role in the incident.
31-Year-Old Founder Dies Driving a NIO: Driving Data Surfaces

2021-08-16
China Finance Online
Why's our monitor labelling this an incident or hazard?
The NIO Pilot system, including the NOP feature, is an AI system providing automated driving assistance. The accident occurred while the vehicle was operating in NOP mode, which is explicitly described as a beta-level automated driving assistance system that cannot detect static obstacles. The fatal crash directly resulted from the use of this AI system, fulfilling the criteria for an AI Incident as the AI system's malfunction or limitations led to injury (death) of a person. The article also discusses regulatory responses and safety management, but the primary event is the fatal accident caused by the AI system's use.
Lin Wenqin's Family and Friends Question NIO's Statement: Witnesses Saw NIO Staff Connecting Cables to Charge the Accident Vehicle

2021-08-16
China Finance Online
Why's our monitor labelling this an incident or hazard?
The autonomous driving function is an AI system that was in use when the accident occurred, directly leading to harm (death). The event describes a fatal accident involving the AI system's operation, which qualifies as an AI Incident due to injury/harm to a person caused by the AI system's use. The dispute about post-accident vehicle handling does not negate the AI system's involvement in the harm.
Autonomous Driving Has Not Yet Replaced Human Driving! Li Auto Founder Li Xiang Calls for Unified Terminology

2021-08-17
China Finance Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in autonomous driving (L2 and L3 levels, Navigate on Pilot, Tesla Autopilot) and reports a fatal accident and multiple injuries linked to their use. The AI systems' malfunction or misuse has directly caused harm to people, fulfilling the criteria for an AI Incident. The discussion about terminology and consumer awareness further contextualizes the incident but does not negate the realized harm. Therefore, this is classified as an AI Incident.
The EV Sector's "Autonomous Driving" Branch Is in Big Trouble Again; NIO's Urgent Response May Hit Related A-Share Sectors

2021-08-16
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event involves a Level 2 assisted driving AI system (NIO Pilot) whose use directly led to a fatal traffic accident, causing harm to a person. The AI system's malfunction or misuse is under investigation, but the harm has materialized. The article also references regulatory measures responding to such incidents, but the primary focus is the fatal accident linked to the AI system. Hence, this is an AI Incident rather than a hazard or complementary information.
"Navigate on Pilot" Causes Trouble Again? NIO Is Once More Mired in Public Controversy

2021-08-15
China Finance Online
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI system providing Level 2 assisted driving, where the driver remains responsible but the AI system assists with navigation and control. The fatal accident occurred while the system was active, and the death of the driver is a direct harm to a person. Even though the exact cause is under investigation, the AI system's use is directly linked to the incident. This meets the definition of an AI Incident because the AI system's use has directly led to injury or harm to a person. The article also mentions previous fatal accidents involving the same AI-assisted driving system, reinforcing the classification as an AI Incident.
Before Buying, Lin Wenqin Repeatedly Asked About "Autonomous Driving"; It Was One of His Reasons for Choosing NIO

2021-08-17
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NIO vehicle's driver assistance system, which includes cameras and lidar and is marketed as 'autonomous driving' though officially a driver assistance system). The deceased was using this system when the fatal accident occurred, indicating the AI system's use directly led to harm (death). The manufacturer's clarification that it is not full autonomous driving but a driver assistance system does not negate the AI involvement. The incident is ongoing with investigations, but the harm has already occurred. Hence, it meets the criteria for an AI Incident as the AI system's use directly led to injury (death).
Assisted Driving Is Not Autonomous Driving: EV Startups Should Drop the Hype from Their Consumer Messaging

2021-08-17
中国经济网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (advanced driver-assistance systems with AI capabilities) whose use and possible misuse or overreliance have directly led to fatal accidents, constituting harm to persons. The article explicitly connects the AI system's role in the accidents and discusses the overmarketing of these systems as full autonomous driving, which contributes to unsafe use. This fits the definition of an AI Incident because the AI system's use has directly led to injury or harm to persons. The discussion of regulatory responses and calls for better consumer education are complementary but secondary to the primary incident of harm.
Brand Founder Dies in a Car Crash: Was Autonomous Driving to Blame? NIO Responds

2021-08-16
中国经济网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—an autonomous driving assistance system—whose use directly led to a fatal accident. The system's inability to detect a stationary obstacle on the highway caused the collision and death, fulfilling the criteria of an AI Incident due to direct harm to a person. The article also references prior similar incidents and discusses the risks and safety concerns of such AI systems, reinforcing the classification as an AI Incident rather than a hazard or complementary information.
Technicians Summoned for Privately Accessing the Vehicle Involved? NIO's Latest Response: False! The Company Has Not Deleted or Altered Any Data

2021-08-16
证券时报网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of an autonomous or semi-autonomous vehicle with data recording capabilities. However, the article does not report any confirmed AI system malfunction or misuse causing harm. The alleged unauthorized data access is denied by the company, and no evidence of data tampering or deletion is confirmed. The article mainly provides updates on the investigation, company statements, regulatory context, and market reactions. Since no direct or indirect harm from AI system development, use, or malfunction is established, and no plausible future harm is indicated, the event does not meet the criteria for an AI Incident or AI Hazard. Instead, it fits the definition of Complementary Information as it enhances understanding of the ongoing situation and responses related to AI systems in vehicles.
From Tesla to NIO: How to Escape the "Death Curse" of Autonomous Driving?

2021-08-17
证券时报网
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of AI-based automatic driving assistance systems (NOP by NIO and similar systems by Tesla and others) that have directly led to multiple fatal accidents, including the recent death of a driver. The AI system's failure to detect static obstacles and the resulting collisions constitute direct harm to human life, fulfilling the criteria for an AI Incident. The article provides detailed evidence of realized harm caused by the AI system's malfunction and discusses the broader implications and regulatory responses, confirming the classification as an AI Incident rather than a hazard or complementary information.
Police Statement on the NIO Crash: Liability to Be Determined Based on the Investigation

2021-08-18
internet.cnmo.com
Why's our monitor labelling this an incident or hazard?
The NIO NOP feature is an AI system providing autonomous driving capabilities. The accident occurred while this AI system was in use, leading directly to harm (death and injury). The involvement of the AI system in the vehicle's operation at the time of the crash and the resulting fatalities meet the criteria for an AI Incident, as the AI system's use directly led to harm. The ongoing investigation and concerns about data handling do not negate the fact that harm has occurred linked to the AI system's use.
Morning Report: Xiaomi MIX4 Officially Goes on Sale This Morning; NIO Crash Data Released

2021-08-16
phone.cnmo.com
Why's our monitor labelling this an incident or hazard?
The NIO Pilot is an AI-based automatic driving assistance system, so AI system involvement is present. However, the article only shares data from the accident without attributing the cause to the AI system or indicating any malfunction or misuse. There is no direct or indirect harm caused by the AI system reported, nor is there a plausible risk of harm from the AI system described. The main focus is on providing data and a reminder about driver responsibility, which fits the definition of Complementary Information rather than an Incident or Hazard. The other news items about product launches are unrelated to AI harm.
Two NIO Drivers Killed in Accidents Within Two Weeks: Is "Automated Assistance" to Blame?

2021-08-16
新民网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's automated driving assistance system (NOP) at the time of fatal accidents, which directly led to harm (deaths and vehicle damage). The AI system's involvement is clear, and the harm is realized, not just potential. Although the company states that NOP is an assistance system and not full autonomous driving, the system's use is a contributing factor in the accidents. This fits the definition of an AI Incident, as the AI system's use directly led to injury or harm to persons and property damage.
Warning! "Automated Driver Assistance" Is Not "Autonomous Driving"

2021-08-18
华声在线
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of an automatic driving assistance feature, which is an AI system that influences vehicle control. The accident resulted in a fatality, which is a direct harm to a person caused by the use (and likely misuse or misunderstanding) of the AI system. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm. The article also discusses systemic issues around user education and regulatory responses, but the primary focus is on the incident and its consequences, not just complementary information or potential hazards.
31-Year-Old Entrepreneur Dies in NIO Crash; Driving Data Disclosed

2021-08-15
青岛新闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NIO Pilot and navigation-assisted driving (L2 and L3 level automated driving features). The driver was using these AI-based driving assistance functions when the fatal accident occurred. The harm is realized (death of the driver), and the AI system's role is pivotal as it was engaged during the incident. Although the exact cause is under investigation, the AI system's use and potential malfunction or limitations are directly linked to the harm. Therefore, this qualifies as an AI Incident under the framework.
"Autonomous Driving" Is Not So Simple; Automakers Should Be More Prudent

2021-08-18
青岛新闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the assisted driving feature with AI-based navigation and control functions) whose use directly led to a fatal traffic accident, causing harm to a person. The article explicitly states the accident occurred while the AI-assisted driving function was active. This meets the definition of an AI Incident as the AI system's use directly led to injury and death. The article also discusses systemic issues of mislabeling and safety risks, reinforcing the incident's significance. Hence, the classification as AI Incident is appropriate.
Share Price Plunges and Losses Widen 30% Quarter-on-Quarter! Over 500 NIO Owners Speak Out in Support

2021-08-18
证券之星
Why's our monitor labelling this an incident or hazard?
The NIO ES8's NOP/NP system is an AI-based assisted driving system. The fatal accident occurred while the system was engaged, directly causing harm (death). This meets the definition of an AI Incident because the AI system's use directly led to injury or harm to a person. The article also references regulatory safety management and company responses, but the primary focus is the incident and its consequences, not just complementary information. Therefore, this event is classified as an AI Incident.
Latest Development! Putian Traffic Police Issue Statement on the NIO Autonomous Driving Incident

2021-08-18
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NIO vehicle's automatic driving mode (NOP). The accident caused direct harm: one death and one injury, which fits the definition of harm to persons. The AI system's use at the time of the accident indicates its involvement in the incident. The discussion about misleading marketing and overreliance on the system further supports the AI system's role in causing harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
The closely watched NIO autonomous driving case has a new development. On August 18, Putian traffic police announced that the public security traffic police department will determine liability in accordance with the law on the basis of the accident investigation. According to the statement, at about 14:18 on August 12, Lin XX, driving electric vehicle 闽CD5XXx, rear-ended the rear of light truck 闽A95XXX, which was driven by Li XX and engaged in roadwork in the same lane ahead, on the Hanjiang section of the Shenhai Expressway in Putian. The collision killed the electric vehicle's driver, Lin XX, at the scene, injured truck passenger Wang XX, and damaged both vehicles to varying degrees.

2021-08-18
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NIO vehicle's autonomous driving mode (NOP) being active during a fatal collision. The harm includes death and injury, fulfilling the criteria for an AI Incident. The incident stems from the use of the AI system, and the system's role is pivotal in the harm caused. The discussion about misleading marketing and user overreliance further supports the classification as an AI Incident rather than a hazard or complementary information.
On August 15, Musk said on Twitter that the FSD Beta 9.2 system had hit a last-minute problem and should be released tomorrow or the day after (August 16–17).

2021-08-16
证券之星
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Tesla's FSD Beta and other autonomous driving systems) and discusses safety concerns and regulatory investigations, which relate to potential risks. However, no direct or indirect harm caused by Tesla's FSD Beta 9.2 system is reported. The fatal accident mentioned is linked to a different company's autopilot system, not Tesla's. The main focus is on update delays, regulatory scrutiny, and general safety concerns rather than a specific AI Incident or imminent hazard. Thus, it fits the definition of Complementary Information, as it provides supporting context and updates about AI system development, safety concerns, and governance without describing a new AI Incident or AI Hazard.
Li Xiang, founder and CEO of Li Auto, published a post calling on media and industry bodies to standardize the Chinese terminology for autonomous driving, so that exaggerated marketing does not mislead users.

2021-08-17
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of advanced driver-assistance and partial autonomous driving technologies (L2, L3 levels). It reports a fatal accident linked to the use of such an AI system (NIO's NOP) and multiple injuries and a death linked to Tesla's Autopilot system under investigation. These are direct harms to human health caused by the use or malfunction of AI systems. Additionally, the article discusses the risk of user misunderstanding due to misleading marketing, which has contributed to misuse and overreliance on these AI systems, further supporting the classification as an AI Incident. The CEO's call for terminology standardization is a governance response but does not overshadow the primary focus on incidents causing harm.
In U.S. premarket trading on Tuesday, August 17, the new energy vehicle sector moved lower.

2021-08-17
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving/driver assistance) that was in use when a fatal accident occurred, directly causing harm to a person. The company's clarification that their system is not full autonomous driving and the ongoing investigations into Tesla's similar systems further confirm the AI system's involvement and potential malfunction or misuse. The fatality constitutes injury or harm to a person, fulfilling the criteria for an AI Incident. The other details about terminology standardization and investigations are complementary but do not negate the incident classification.
Prominent Entrepreneur Dies in Crash While Driving a NIO ES8; Latest Official Statement Released

2021-08-17
证券之星
Why's our monitor labelling this an incident or hazard?
The NIO ES8's autonomous driving feature is an AI system as it involves automated driving decisions. The fatal accident directly resulted from the use of this AI system, causing harm (death) to a person. The ongoing investigation and dispute over data integrity further highlight the AI system's central role in the incident. Hence, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use or malfunction.
A personal WeChat public account named "美一好" published an obituary stating that Mr. Lin Wenqin, founder of its brand management company, died at the age of 31 in a traffic accident on the Hanjiang section of the Shenhai Expressway after engaging the autonomous driving function (NOP navigation state) of his NIO ES8.

2021-08-15
证券之星
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NOP assisted driving feature, which is an AI-based driver assistance system. The use of this system directly preceded a fatal accident, resulting in the death of the driver, which is a clear harm to a person. The AI system's role is pivotal as the accident occurred while the system was active. Even though the exact cause is under investigation, the direct link between the AI system's use and the fatal harm meets the criteria for an AI Incident. The event is not merely a potential hazard or complementary information, but a realized harm involving AI.
Investing.com – In U.S. premarket trading on Monday, shares of China's "new carmaker" trio fell: NIO (NYSE:NIO) dropped more than 3%, Li Auto (NASDAQ:LI) 3%, and XPeng (NYSE:XPEV) more than 4%.

2021-08-16
证券之星
Why's our monitor labelling this an incident or hazard?
The NIO NOP system is an AI system providing assisted driving capabilities. The fatal crash involving a vehicle using this system indicates harm to a person linked to the use of an AI system. Although the company states NOP is not full autonomous driving, it is an AI-based driver assistance system. The accident and resulting death meet the criteria for an AI Incident as the AI system's use has directly or indirectly led to injury or harm to a person. The ongoing investigation and company response do not negate the occurrence of harm.
In U.S. premarket trading on Monday, August 16, the new energy vehicle sector moved lower.

2021-08-16
证券之星
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system component—autonomous driving technology—in a fatal car accident, which constitutes direct harm to a person. The company's statement and the regulatory measures indicate the AI system's role in the incident and the need for oversight. The harm has already occurred, making this an AI Incident rather than a hazard or complementary information. The regulatory update is related but secondary to the primary incident of harm caused by the AI system's use or malfunction.
Prominent Entrepreneur "萌剑客" Dies in Car Crash; Vehicle Was a NIO ES8

2021-08-15
金羊网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Navigate on Pilot (NOP) driver assistance feature, which is an AI-enabled system that controls vehicle navigation and driving assistance. The use of this system directly preceded and contributed to a fatal traffic accident, causing harm to a person. This fits the definition of an AI Incident because the AI system's use directly led to injury or harm to a person. The article also discusses other related accidents involving AI-enabled vehicles, reinforcing the systemic safety concerns. Therefore, this event is classified as an AI Incident.
Overview: Autonomous driving will not be achieved overnight; its ultimate goal is to make vehicles and roads safer. Each accident reminds the entire industry and every automaker to stay alert and keep balancing technological exploration against safety.

2021-08-16
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's NOP) used in real-world driving that has directly led to fatal accidents, causing injury and death (harm to persons). The system's inability to detect static obstacles and the resulting crashes demonstrate a malfunction or limitation in the AI system's operation. The article provides detailed evidence of realized harm caused by the AI system's use, not just potential risk. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Is Assisted Driving the Same as Autonomous Driving? NIO Denies Tampering With Data

2021-08-17
auto.3news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-enabled assisted driving system (NOP) during a fatal accident, which caused harm to a person (death). The AI system's role is central to the event, as the accident occurred while the system was active, and there is public debate about the system's capabilities and user understanding. The event involves the use of an AI system leading to injury or death, fitting the definition of an AI Incident. Although the investigation is ongoing, the harm has already occurred, and the AI system's involvement is direct and pivotal. The article also discusses regulatory and educational responses, but the primary focus is the incident itself.
Exclusive Commentary | Another NIO Accident! Automated Assistance Does Not Equal Autonomous Driving; Never...

2021-08-16
thehour.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an automatic driving assistance system (NOP) in a NIO vehicle, which was in use at the time of a fatal traffic accident. The harm (death and severe injury) has directly resulted from the use of this AI system. The article highlights the misuse or misunderstanding of the system's capabilities, which contributed to the incident. This fits the definition of an AI Incident because the AI system's use directly led to injury and death. The discussion about manufacturer responsibility and user misunderstanding supports the causal link. Hence, the classification is AI Incident.
Putting a "Life-Safety Lock" on Autonomous Driving

2021-08-17
大洋网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOP assisted driving system) whose malfunction (failure to detect an obstacle and initiate braking) directly led to physical harm (the death of a person). This fits the definition of an AI Incident, as the AI system's malfunction directly caused injury or harm to a person. The article also discusses regulatory and industry responses, but the primary focus is on the incident and its implications for safety. Therefore, the classification is AI Incident.
NIO Statement: The Company Has Not Deleted or Altered Any Data, and No Employees Have Been Summoned by Police

2021-08-16
Baidu.com
Why's our monitor labelling this an incident or hazard?
Although the vehicle likely contains AI systems (e.g., autonomous driving or driver assistance), the article does not state that the AI system malfunctioned or caused the accident. The company's statement denies data tampering and police summons of employees, and the data extraction is part of the investigation. No harm is attributed to AI system failure or misuse. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on the investigation and company responses related to an AI system without reporting new harm or plausible future harm.
31-Year-Old Entrepreneur Lin Wenqin Dies in Car Crash; Friend Says He Was Driving a NIO ES8 at the Time

2021-08-15
Baidu.com
Why's our monitor labelling this an incident or hazard?
The automatic driving feature (NOP navigation) is an AI system that makes real-time decisions to control the vehicle. The fatal accident occurred while this AI system was engaged, directly causing harm to the driver. This fits the definition of an AI Incident because the AI system's use directly led to injury or harm to a person. Although the investigation is ongoing, the report clearly states the AI system was active and involved in the crash, which resulted in death, fulfilling the criteria for an AI Incident.
Fatal NIO Accident: 31-Year-Old Entrepreneur Killed While Using "Autonomous Driving"

2021-08-16
Baidu.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of an automatic driving feature (NOP pilot state) in a NIO ES8 vehicle, which qualifies as an AI system under the definition of AI systems performing autonomous navigation. The accident caused the death of the driver, which is a direct harm to a person. Therefore, this qualifies as an AI Incident due to the AI system's involvement in the fatal accident.
From Tesla to NIO, Fatal Accidents Pit Autonomous Driving Against Human Weakness

2021-08-16
Baidu.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an automatic driving function (NOP navigation state) in a NIO vehicle that was involved in a fatal crash. This autonomous driving feature is an AI system as it makes real-time decisions to control the vehicle. The accident caused the death of the driver, which is a direct harm to a person. Hence, the event meets the criteria for an AI Incident due to the AI system's involvement in causing harm through its use.
The NIO Crash: Don't Trap Yourself in a Moral Deadlock

2021-08-18
Baidu.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned (NIO's NP/NOP assisted driving system) and a fatal traffic accident resulting in a person's death. The AI system's involvement is under investigation but is central to the incident and public concern. The harm (death) has occurred, and the AI system's use is directly linked to the event, fulfilling the criteria for an AI Incident. The article also discusses the company's response and ethical responsibilities, but the primary focus is the incident itself, not just complementary information or future hazards.
NIO Drives Into a "Crossroads"

2021-08-17
cb.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Pilot (NOP) autonomous driving assistance system, which was active during the fatal accident. The harm is direct and severe (death of the driver). The AI system's use and possible malfunction or failure to act appropriately led to the incident. The article details the accident and the ongoing investigation, indicating realized harm caused or contributed to by the AI system. Hence, this is an AI Incident rather than a hazard or complementary information.
Lessons From the NIO ES8 Accident: Even With Autonomous Driving, You Still Need Your Own Hands!

2021-08-17
chinatimes.net.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NIO Pilot's Navigate On Pilot, an advanced driver assistance system with AI capabilities) that was active during a fatal traffic accident, leading to the death of the driver. The article clearly states that the AI system's limitations and the user's overreliance on it contributed to the harm. This meets the criteria for an AI Incident because the AI system's use directly led to injury and death (harm to a person). The article also discusses regulatory responses and the need for better safety communication, but the primary focus is on the incident and its consequences, not just complementary information or potential hazards. Hence, the classification is AI Incident.
Deadly "Autonomous Driving": Behind the NIO ES8 Crash, the Whole Industry Reaches a Crossroads

2021-08-17
iceo.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the assisted driving system with multiple sensors and algorithms) whose use directly led to a fatal traffic accident, fulfilling the criteria for an AI Incident. The harm is realized (death of a person), and the AI system's malfunction or misuse is a contributing factor. The article also discusses regulatory and industry responses but the primary focus is the incident itself and its consequences. Therefore, this is classified as an AI Incident.
From Gao Yaning to Lin Wenqin: What We Need to Change, and What We Need to Respect

2021-08-18
中文导报
Why's our monitor labelling this an incident or hazard?
The events described involve AI systems (L2 assisted driving/autopilot systems) whose use or misuse has directly resulted in fatal accidents, causing harm to individuals. The article explicitly links the accidents to the activation and overreliance on these AI driving assistance systems. The harm is realized (deaths), and the AI system's role is pivotal. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The article also discusses governance and industry responses but the primary focus is on the incidents and their causes.
Lin Wenqin Dies in Crash While Driving a NIO ES8; Media: On Autonomous Driving, Stop Playing Dumb

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically the assisted or automatic driving feature of the NIO ES8 vehicle. The fatal accident directly resulted from the use of this AI system, as the driver was using the automatic driving function when the crash occurred. The article emphasizes the risks of misunderstanding and overtrusting such AI driving aids, which led to harm (death). This fits the definition of an AI Incident because the AI system's use directly led to injury or harm to a person. The article also discusses the broader implications and risks of AI driving systems, but the primary focus is on the realized harm from this specific incident.
Tesla Suddenly Faces a Formal U.S. Government Investigation Covering More Than 700,000 Vehicles!

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Tesla's Autopilot and NIO's NOP) used in vehicles that have been involved in multiple fatal accidents, causing injury and death. The U.S. government has launched a formal investigation into Tesla's Autopilot system due to these incidents. The harms described include injury and death to persons, which fits the definition of an AI Incident. The AI systems' malfunction or limitations in recognizing static obstacles have directly or indirectly led to these harms. Therefore, this event qualifies as an AI Incident.
Several Critical Issues for Autonomous Driving: Legislation, Urban Planning, Dispatching, and Driver Training

2021-08-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous driving technology and discusses their use and deployment challenges. It identifies multiple systemic problems that could plausibly lead to AI incidents, such as accidents caused by conflicting algorithms, lack of legal frameworks, and mixed traffic environments. However, no actual harm or incident has been reported as having occurred yet. Therefore, the event fits the definition of an AI Hazard, as it describes circumstances where AI systems' development and use could plausibly lead to harm in the future if not addressed.
Another Case! 31-Year-Old NIO Owner Dies in Crash After Activating the "Autonomous Driving" Feature! Has Autonomous Driving Been Overhyped?

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOP assisted driving system) that was active during a fatal car crash, directly linking the AI system's use to harm (death of the driver). The system is an AI-based driver assistance technology that influences vehicle control decisions. The article details the system's limitations and the risks of overreliance or misunderstanding of its capabilities, which are factors contributing to the incident. Therefore, this is an AI Incident as the AI system's use has directly led to harm to a person.
Li Xiang Calls for a Unified Chinese Definition of "Autonomous Driving"

2021-08-17
新浪车行天下
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the vehicle's autonomous driving system) whose malfunction or failure to perform safely directly led to a fatal injury, fulfilling the criteria for an AI Incident. The article explicitly describes the harm (death) caused by the AI system's use. The discussion about terminology standardization and marketing practices is complementary but secondary to the primary incident of harm caused by the AI system's malfunction or misuse.
Prominent Company Founder Dies in Car Crash! Two NIO Drivers Killed in Accidents Within One Month...

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NIO ES8's assisted driving feature (NOP). The use of this AI system directly led to a fatal accident and property damage, fulfilling the criteria for harm to persons and property. The incident is not speculative or potential but has already occurred, making it an AI Incident rather than a hazard or complementary information. The involvement of the AI system in the accident is clear and central to the harm caused.
First Fatal Crash Involving NIO Pilot! Entrepreneur Owner Born in the 1990s Killed in Highway Accident

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NIO Pilot's Navigate on Pilot (NOP) automated driving feature, which is an AI-based advanced driver assistance system. The fatal accident occurred while this AI system was active, directly linking the AI system's use to the harm (death of the driver). This constitutes an AI Incident because the AI system's use directly led to injury and death, fulfilling the criteria of harm to a person. The article also discusses regulatory responses, but the primary focus is the fatal incident caused by the AI system's operation.
"Deadly" Autonomous Driving: NIO, Mired in a Fatal Crash, Is on the Defensive

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOP driver assistance system) that was active during a fatal car accident, directly linking the AI system's use to harm (death). The controversy over misleading marketing and potential overreliance on the system further supports the AI system's role in the incident. The ongoing investigation into data handling by the manufacturer also relates to the AI system's development and use. Given the direct harm and AI involvement, this is classified as an AI Incident rather than a hazard or complementary information.
Does the NIO Accident Shatter Autonomous Driving Illusions? New MIIT Rules Clarify Automakers' Obligations

2021-08-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic driving assistance system (NOP) operating at the time of a fatal crash. The AI system's limitations and insufficient safety measures contributed to the accident, causing death and injury, which qualifies as harm to persons. The article also discusses regulatory responses to address these harms and prevent future incidents. Since the harm has occurred and is directly linked to the AI system's use and malfunction, this is classified as an AI Incident rather than a hazard or complementary information.
Deadly "Autonomous Driving": Behind the NIO ES8 Crash, the Whole Industry Reaches a Crossroads

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOP assisted driving feature) whose use directly led to a fatal traffic accident, fulfilling the criteria for an AI Incident. The system's inability to detect static obstacles and the overreliance or misunderstanding by the user contributed to the harm. The article details the malfunction or limitations of the AI system, the resulting injury (death), and the broader implications for industry practices and regulation. Therefore, it meets the definition of an AI Incident rather than a hazard or complementary information.
Why Can't "Autonomous Driving" See Stationary Vehicles?

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems in the form of Level 2 automated driving assistance technologies using millimeter-wave radar and vision-based AI perception. It details how these systems' technical limitations have directly led to accidents involving stationary vehicles, including fatalities and injuries. The involvement of AI in the development and use of these systems, and the resulting harm, meets the criteria for an AI Incident. The article also mentions official investigations, further confirming the recognized harm linked to AI system use.
500 NIO Owners Declare: Assisted Driving Marketing Did Not Mislead Us

2021-08-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (assisted driving systems) and their marketing, but no direct or indirect harm has been reported or can be inferred. The owners explicitly state they were not misled, and the legal case is still under investigation. Therefore, this is not an AI Incident or AI Hazard. The main focus is on public statements and legal proceedings related to AI system marketing and user perception, which fits the category of Complementary Information as it provides context and updates on societal and governance responses to AI-related issues.
Just How Dangerous Is the "Autonomous Driving" of New Energy Vehicle Makers?

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the automatic driving function (NOP) active during a fatal crash. The AI system's use directly contributed to the harm (death of the driver). The article also discusses systemic issues with AI-assisted driving systems, including overreliance, misleading marketing, and regulatory challenges, all related to AI system use and malfunction. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a person. The detailed discussion of the accident and its consequences confirms this classification over AI Hazard or Complementary Information.
Entrepreneur Dies in Car Crash: What Exactly Is the Questioned NOP Navigation Feature?

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOP) used in a vehicle that was engaged at the time of a fatal car accident, resulting in the death of the driver. The system is described as an AI-assisted driving system with autonomous features such as lane keeping and automated lane changes, which fits the definition of an AI system. The harm (death) has occurred, and the AI system's involvement is direct or indirect in the incident. Although the investigation is ongoing and the exact cause is not yet confirmed, the presence and use of the AI system at the time of the accident and the resulting fatality meet the criteria for an AI Incident. The event is not merely a hazard or complementary information because the harm has materialized, and it is not unrelated as the AI system is central to the event.
Over 500 NIO Owners Issue Joint Statement: NIO's Assisted Driving System Did Not Mislead Us

2021-08-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The article centers on clarifying misunderstandings about the assisted driving system and includes regulatory and industry responses to the incident and marketing practices. It does not report a new AI incident or hazard but rather provides complementary information about the ecosystem, user education, and governance measures following a known accident. Therefore, it fits the definition of Complementary Information, as it enhances understanding and tracks responses without describing a new incident or hazard.
Tesla and NIO Both Caught in the Vortex: It's Time to Rethink Autonomous Driving

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of advanced driver-assistance and semi-autonomous driving systems (Tesla Autopilot, NIO Pilot, etc.) that have directly caused fatal accidents, fulfilling the criteria for harm to persons. The involvement is through the use and malfunction of these AI systems. The investigation by NHTSA and the reported accidents in China confirm realized harm. The article also discusses misleading marketing and insufficient user education, which contribute indirectly to harm. Hence, this is an AI Incident rather than a hazard or complementary information.

NIO denies deleting or altering data; "autonomous driving before the sale, assisted driving after the accident": new carmakers speak out one after another

2021-08-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the NOP driving assistance system) that was active during a fatal car accident, leading to the death of the driver. The AI system's role in the accident is central, as it was engaged at the time of the crash, and there are allegations and concerns about the system's capabilities, marketing, and user understanding. The harm (death) has occurred, fulfilling the criteria for an AI Incident. The event also discusses regulatory and industry responses, but the primary focus is on the fatal incident linked to the AI system's use, not just complementary information or potential hazards. Hence, the classification as AI Incident is appropriate.

EV offshoot "autonomous driving" in serious trouble again; NIO issues an urgent response, which may hit related A-share sectors

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO Pilot, a Level 2 assisted driving system) whose use directly led to a fatal accident, causing harm to a person. This fits the definition of an AI Incident because the AI system's use has directly led to injury or harm to a person. The article also mentions regulatory measures and company responses, but these are complementary to the main incident. Therefore, the classification is AI Incident.

Entrepreneur "萌剑客" dies in a car crash; NIO responds: investigation underway

2021-08-14
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system: the NIO Navigate on Pilot (NOP) driver assistance feature, which is an AI-based automated driving aid. The fatal accident occurred while this system was engaged, indicating the AI system's involvement in the incident. The harm is direct and severe (death of the driver), fulfilling the criteria for an AI Incident. The article also references other accidents involving AI-assisted driving features from the same manufacturer, reinforcing the pattern of harm linked to AI system use. The ongoing investigations and regulatory focus further support the classification as an AI Incident rather than a hazard or complementary information.

NIO's "Tesla-style" predicament: safety or technology deployment, which comes first?

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (NIO Pilot, Tesla Autopilot/FSD) involved in fatal crashes, with investigations underway. The AI systems' malfunction or limitations (e.g., inability to detect obstacles) have directly led to injury and death, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in these accidents. Hence, the event is classified as an AI Incident.

31-year-old entrepreneur killed in a crash while using an assisted driving feature; NIO stresses "NOP is not autonomous driving"

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an advanced driver-assistance system (NOP) that was active during a fatal traffic accident. The harm (death of a person) has occurred and is directly linked to the use of this AI system. Although the system is not fully autonomous, its malfunction or misuse (overreliance or misunderstanding of its capabilities) contributed to the incident. Therefore, this qualifies as an AI Incident under the definition of harm to a person caused directly or indirectly by the use of an AI system.

Another fatal NIO accident: the full story of the 31-year-old entrepreneur's death under "autonomous driving"

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's NOP advanced driver-assistance system) actively controlling the vehicle at the time of a fatal accident, which directly caused the death of the driver. The AI system's malfunction or limitations in this semi-autonomous mode contributed to the harm. The article details the circumstances, the system's capabilities, and the ongoing investigation, confirming the AI system's role in the incident. This meets the criteria for an AI Incident as the AI system's use directly led to injury or harm to a person.

Who is responsible for the 31-year-old entrepreneur's fatal crash? Owner's friend: we hope to obtain the driving data soon

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (NIO's NOP assisted driving feature) during a fatal traffic accident, which directly caused harm to a person (the driver's death). The AI system's role is pivotal as the accident occurred while the system was active, and the investigation focuses on whether the system or driver actions contributed to the crash. The article also highlights issues of data access and transparency related to the AI system's operation. Therefore, this is an AI Incident as per the definitions, since the AI system's use directly led to injury and death.

Entrepreneur killed driving a NIO: what is navigation pilot assistance, and is carmakers' marketing ambiguous?

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOP assisted driving feature, which is an AI-based driver assistance system. The fatal accident occurred while the system was active, and the system's inability to detect a highway maintenance vehicle likely contributed to the crash. This directly led to the death of the driver, fulfilling the criteria for an AI Incident involving injury or harm to a person. The article also highlights the lack of clear user education and the misleading marketing of assisted driving as full autonomous driving, which are relevant contextual factors but do not negate the direct harm caused. Hence, the event is classified as an AI Incident.

Do not use autonomous driving to test human nature

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an assisted driving system (NOP mode) that was active during a fatal car crash. The harm (death of the driver) directly resulted from the use of this AI system, which failed to prevent the accident and required human supervision that was insufficient or ineffective. The article also highlights systemic issues with driver overreliance and misunderstanding of AI capabilities, which contributed to the harm. This fits the definition of an AI Incident because the AI system's use directly led to injury or harm to a person. The article also references regulatory and industry responses, but the primary focus is on the incident and its implications, not just complementary information.

NIO repeatedly mired in safety controversies; L2 automated driving exposes recognition and perception flaws

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—NIO's L2-level automated driving system—and its use directly led to fatal accidents, constituting harm to persons. The system's failure to detect stationary obstacles and the resulting crashes demonstrate a malfunction or limitation in the AI system's capabilities. The harm is realized and significant, meeting the criteria for an AI Incident. Although investigations are ongoing, the direct link between the AI system's operation and the accidents is clear from the description. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Lessons from the NIO accident: autonomous driving, exaggerated and misread

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically the automatic driving assistance systems (NOP, NGP, NOA) used in electric vehicles. The accidents described resulted in fatalities, which constitute harm to persons. The AI systems' malfunction or limitations, combined with overtrust by users influenced by misleading marketing, directly contributed to these harms. Therefore, this qualifies as an AI Incident. The article also references regulatory measures as complementary information, but the primary focus is on the realized harms from AI system use.

Entrepreneur dies in a NIO crash: what exactly is the questioned NOP?

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI-based advanced driver-assistance system (NOP) during a fatal car accident, which constitutes harm to a person. The AI system's involvement is explicit and central to the event. Although the investigation is ongoing and the exact role of the AI system is not yet confirmed, the use of the AI system at the time of the fatal incident and the resulting death meet the criteria for an AI Incident. The article does not merely discuss potential risks or future hazards but reports a realized harm linked to the AI system's use. Therefore, it is classified as an AI Incident.

Smart driving at the eye of the storm: 100 billion yuan raised in half a year; why is capital swarming in?

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's Navigate on Pilot) used in a real-world driving scenario that directly led to a fatal accident, which is a clear harm to a person. The AI system's involvement is explicit and central to the incident. The harm is realized and severe (death). Therefore, this qualifies as an AI Incident. The article also discusses broader industry context and investment trends, but these do not overshadow the primary event of the fatal accident caused by the AI system's use. Hence, the classification is AI Incident.

31-year-old entrepreneur killed driving a NIO; driving data disclosed

2021-08-16
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically the automated driving assistance features of the NIO ES8 vehicle. The fatal accident occurred while these AI features were active, directly leading to the death of the driver, which is a harm to a person. The article discusses the AI system's role, the level of autonomy, and the ongoing investigation into the accident cause, indicating the AI system's involvement in the harm. Therefore, this qualifies as an AI Incident under the definition of an event where the use or malfunction of an AI system has directly led to harm to a person.

NIO's biggest crisis: two fatal accidents in a row, with multiple owners reporting crashes while using automated assistance

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of NIO's AI-based automatic driving assistance systems (NOP and NP) during the fatal accidents. The AI system's failure to detect obstacles and the resulting crashes have directly caused harm to human life, fulfilling the definition of an AI Incident. The event involves the use and malfunction of AI systems leading to injury and death, which is a primary harm category. The detailed description of multiple accidents, driver testimonies, and ongoing investigations further confirm the direct link between AI system use and harm. Hence, the classification as an AI Incident is justified.

31-year-old NIO owner dies in a rear-end collision: the fault lies with autonomous driving, the blame with overhyped marketing

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (automatic driving technology) whose use directly led to a fatal injury (harm to a person). The article explicitly states the driver was using the automatic driving feature when the rear-end collision occurred, resulting in death. This meets the definition of an AI Incident because the AI system's use directly caused harm. The article also highlights issues of overpromising and misleading marketing, which contributed indirectly to the harm by fostering overreliance on the system. Therefore, this is classified as an AI Incident.

After NIO's fatal accident, here's hoping the "L2.5" labels die with it

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an assisted driving system (NOP) with AI capabilities for navigation and obstacle detection. The system malfunctioned by failing to recognize an obstacle, causing a fatal crash. The article also discusses the misuse and overreliance on such AI systems due to misleading marketing, which contributed indirectly to the harm. The harm is realized (death of the driver), and the AI system's malfunction and the resulting misuse are pivotal factors. Therefore, this qualifies as an AI Incident under the framework, as it involves direct harm to a person caused by the development and use of an AI system.

Four key controversies in focus: a 31-year-old entrepreneur dies driving a NIO; if "autonomous driving" caused it, who bears responsibility?

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOP, an AI-assisted driving system) whose use directly led to a fatal traffic accident, causing harm to a person. The article details the incident, the AI system's role, and the ongoing investigation into the cause and responsibility. This fits the definition of an AI Incident because the AI system's use has directly led to injury and death. The discussion about the system's capabilities, user reliance, and manufacturer responsibility further supports this classification. Therefore, this is an AI Incident.

The blind veteran driver

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned: NIO's NOP driver assistance system, which is an AI-based L2 autonomous driving aid. The system was in use at the time of the fatal accident, and its limitations (e.g., inability to detect stationary obstacles) are highlighted as contributing factors. The death of the driver constitutes injury or harm to a person, fulfilling the harm criteria. The article also mentions potential tampering with vehicle data by company technicians, which relates to the development or use of the AI system and its investigation. Given the direct link between the AI system's use and the fatal harm, this event is classified as an AI Incident.

What really lies behind the NIO accident?

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOP assisted driving feature, which is a Level 2 AI-based driver assistance system. The system's failure to detect a slow-moving maintenance vehicle directly led to a fatal collision, causing harm to a person. The article also discusses the system's limitations and the need for driver oversight, confirming that the AI system's malfunction or inadequacy was a contributing factor. This meets the criteria for an AI Incident, as the AI system's use directly led to injury and death. The article also touches on legal and ethical issues around data handling and responsibility, reinforcing the incident classification.

Another accident: how much further can NIO's navigation pilot road go?

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (NIO's automatic driving assistance system) that directly led to fatal traffic accidents, causing harm to human life. The article details multiple incidents where the AI system was active during the accidents, indicating direct involvement. It also discusses the system's limitations and the overtrust by users, which contributed indirectly to the harm. The presence of the AI system and its malfunction or limitations causing death fits the definition of an AI Incident. The article also references regulatory responses, but the primary focus is on the realized harm from the AI system's use, not just future risks or complementary information.

NIO: from front-runner to follower

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions NIO's autonomous driving features (referred to as 'pilot assist' or 'automatic driving') and links them to two fatal accidents within a short period. The deaths of drivers in these accidents represent injury or harm to persons, fulfilling the harm criteria. The AI system's involvement is direct, as the accidents occurred while using the AI-enabled driving assistance. This meets the definition of an AI Incident, as the AI system's use has directly led to harm. Other parts of the article discuss financial and market performance but do not negate the incident classification for the accidents described.

"Automatic" driving? Stop kidding yourself

2021-08-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NIO vehicle's NOP driver assistance system) whose use directly led to a fatal accident, constituting harm to a person. The article also discusses systemic issues of overtrust and misleading marketing of AI driving assistance systems, which have caused multiple accidents and fatalities, including those involving Tesla's Autopilot. The AI system's malfunction or misuse is a contributing factor to these harms. Hence, this qualifies as an AI Incident due to direct harm caused by the AI system's use and the broader pattern of harm from similar AI systems in the industry.

Behind the NIO crash, carmakers' autonomous driving marketing is accused of skirting the line

2021-08-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic driving assistance system (NOP) that was active during a fatal crash. The AI system's failure to detect a stationary vehicle on the highway directly contributed to the collision and death of the driver, fulfilling the criteria for an AI Incident involving injury or harm to a person. The article also discusses misleading marketing and lack of clear communication about the system's capabilities and risks, which are relevant to the use and development of the AI system. The harm is realized and directly linked to the AI system's malfunction or limitations, not merely a potential risk, so it is not an AI Hazard or Complementary Information. Hence, the classification is AI Incident.

31-year-old entrepreneur dies in a crash while using NIO's automated driving; family questions delays in obtaining vehicle data

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the NIO Pilot automatic assisted driving system (NOP)—which was engaged at the time of the fatal crash. The AI system's use directly contributed to the harm (death of the driver) by controlling critical driving functions during the accident. The family's difficulty in obtaining vehicle data for investigation further highlights issues related to the AI system's role and accountability. The harm is realized and significant (fatality), and the AI system's involvement is central to the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Will NIO slide into a "Tesla-style" crisis?

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically the NIO Pilot's Navigate on Pilot (NOP) assisted driving feature, which is an AI-enabled Level 2 driving assistance system. The fatal accident occurred while the system was active, directly linking the AI system's use to a harm event (death of a person). The article also references multiple similar incidents involving AI-assisted driving systems, underscoring the systemic risk and realized harm. Therefore, this is a clear case of an AI Incident as the AI system's use directly led to injury and death, fulfilling the criteria for harm to a person under the AI Incident definition.

31-year-old entrepreneur killed driving a NIO; driving data disclosed

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: the NIO Pilot and Navigation Pilot autonomous driving assistance features, which are AI-based systems providing Level 2 and Level 3 driving assistance. The accident directly led to the death of a person, which is a harm to health. The AI system's use (autonomous driving features) is a contributing factor in the incident, as the driver was using these features at the time of the crash. Although the exact cause is under investigation, the involvement of AI driving assistance and the fatal outcome qualifies this as an AI Incident under the framework, as the AI system's use directly led to harm (death).

Founder of a well-known brand dies driving a NIO ES8; Li Bin expresses condolences

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the NIO Pilot automated driving assistance system—being used during the fatal accident. The system's role is central as it was engaged in Navigate on Pilot mode, which automates driving functions. The death of the driver constitutes injury or harm to a person, fulfilling the harm criteria for an AI Incident. The article discusses the system's capabilities, limitations, and the ambiguity in marketing that may have contributed to overreliance on the AI system. The involvement of the AI system in the accident is direct and causal, meeting the definition of an AI Incident rather than a hazard or complementary information. The event is not merely a warning or potential risk but a realized harm caused by the AI system's use.

Carmakers must not oversell "assisted driving" as "autonomous driving"

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOP driver assistance system) whose use is linked to a fatal accident, indicating direct or indirect harm to a person. The article emphasizes the dangers of misrepresenting assisted driving as autonomous driving, which can cause users to misuse or overtrust the system, leading to injury or death. This fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm (death) and violations of safety expectations. The article also discusses regulatory responses, but the primary focus is on the incident and its implications, not just complementary information or future hazards.

Founder of a well-known brand dies driving a NIO ES8; Li Bin expresses condolences

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Pilot automated driving assistance system, which was active during the fatal crash. The harm (death of the driver) directly resulted from the use of this AI system. The article provides detailed information about the AI system's capabilities and the accident circumstances, confirming the AI system's involvement in the incident. This meets the criteria for an AI Incident because the AI system's use directly led to injury or harm to a person. Although the system is described as an assistance system rather than full autonomous driving, its malfunction or limitations contributed to the fatal outcome. Hence, the classification as AI Incident is appropriate.

Assisted driving is not autonomous driving! Experts stress: overhyped marketing easily creates misconceptions

2021-08-18
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOP driver assistance) whose use directly contributed to a fatal accident, fulfilling the criteria for an AI Incident as the AI system's malfunction or misuse indirectly led to injury and death (harm to a person). The article also discusses the broader context of AI system deployment, user misconceptions, and regulatory responses, but the primary focus is on the incident and its implications. Hence, it is not merely complementary information or a hazard but an AI Incident.

Another serious NIO ES8 accident: owner dies with the autonomous driving feature engaged

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system, the NOP autonomous driving feature, which is an AI system designed to control vehicle navigation and driving tasks. The fatal accident occurred while this AI system was engaged, indicating a malfunction or failure in the AI system's operation. This directly led to the death of the driver, which is a clear harm to a person. Therefore, this event qualifies as an AI Incident due to the direct causal link between the AI system's use and the fatal harm.

NIO's NOP has claimed two lives in a row

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI system providing automated driving assistance. The article details multiple accidents where the use of NOP contributed to fatal or serious injuries, indicating direct harm caused by the AI system's malfunction or limitations. The involvement of the AI system in causing physical harm to users and others meets the criteria for an AI Incident. The article also mentions regulatory scrutiny, but the primary focus is on realized harm, not just potential or responses, so it is not Complementary Information or an AI Hazard.

Reflections prompted by the NIO incident: artificial intelligence should not be "unmanned" intelligence

2021-08-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the automatic driving function of the NIO ES8 vehicle. The use of this AI system directly led to a fatal traffic accident, causing harm to a person, which fits the definition of an AI Incident. The article also references prior similar incidents involving AI-assisted driving systems causing fatalities, reinforcing the classification. The discussion of regulatory and safety issues supports the context but does not change the primary classification. Therefore, this is an AI Incident due to the direct harm caused by the AI system's use in autonomous driving.

Liability for the NIO ES8 crash awaits investigation; misuse of "autonomous driving" prompts industry soul-searching

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's NOP driver assistance system) whose use directly led to a fatal traffic accident, causing injury and death (harm to a person). The article details the AI system's role, its technological limitations, and the risks of misuse or misunderstanding by consumers. This meets the criteria for an AI Incident because the AI system's use has directly led to harm. Although the investigation is ongoing, the fatality and the AI system's involvement are clear. The article also discusses systemic issues in the industry and regulatory responses, but the primary event is the fatal accident caused during AI-assisted driving.

Liability for the NIO ES8 crash awaits investigation; misuse of "autonomous driving" prompts industry soul-searching

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NIO Pilot assisted driving system using neural networks and sensor data) whose use directly led to a fatal car accident, thus causing harm to a person. The article details the system's capabilities, limitations, and the risks of misuse or overreliance by drivers, which are factors contributing to the incident. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (death) and raises significant safety and regulatory concerns.

Fatal crash on the highway with NIO's automated assisted driving! Netizens reveal Tesla drivers gaming on highway Autopilot

2021-08-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The NIO accident involved the use of an AI-based assisted driving system (NOP), which directly led to a fatal crash, constituting an AI Incident due to injury and death caused by the AI system's use or malfunction. The mention of Tesla drivers misusing Autopilot by playing games while driving indicates a pattern of misuse of AI driving assistance that creates significant safety risks and has caused accidents before, also qualifying as AI Incidents. Therefore, the event is classified as an AI Incident because the AI systems' use and misuse have directly or indirectly caused harm to human life.

NIO owner dies in a crash: don't let autonomous driving become a road killer

2021-08-16
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically an autonomous driving assistance system (NOP mode) that was active during the accident. The system's failure to detect a stationary vehicle and prevent collision directly led to a fatal injury, fulfilling the criteria for an AI Incident due to harm to a person. The article also discusses systemic issues with autonomous driving safety and consumer misuse, reinforcing the direct link between the AI system's malfunction/use and the harm caused. Therefore, this is classified as an AI Incident.

How many lives will it take to fill the pitfalls of autonomous driving?

2021-08-15
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NIO Pilot's NOP automatic driving feature, which was active at the time of the accident. The system's failure to detect a stationary highway maintenance vehicle directly led to a fatal collision, constituting injury and harm to a person. This meets the definition of an AI Incident, as the AI system's malfunction during use directly caused harm. The article also references prior similar incidents, reinforcing the classification. Therefore, this event is an AI Incident.

NIO responds to entrepreneur's fatal crash: materials submitted per procedure, no deletion or alteration of data…

2021-08-16
sznews.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NIO's Navigate on Pilot, an AI-based driver assistance system) being used at the time of a fatal accident. The harm (death) has occurred, and the AI system's role is pivotal as the accident happened while the AI feature was active. The company's statement and investigation focus on the AI system's involvement. Therefore, this is an AI Incident due to direct harm caused during AI system use.

Entrepreneur "萌剑客" dies in a NIO crash

2021-08-16
topic.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The NOP feature in the NIO ES8 is an AI-based driver assistance system that controls vehicle navigation and driving functions. The fatal accident happened while this system was active, indicating the AI system's malfunction or failure to prevent harm. The death of the driver is a direct harm to a person caused by the use of an AI system. Although the investigation is ongoing, the direct link between the AI system's use and the fatal outcome is clear. Hence, this event meets the criteria for an AI Incident.

NIO owner dies in an autonomous driving crash; Putian traffic police issue a report

2021-08-18
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—an autonomous driving assistance system (NOP) in a NIO vehicle—that was active at the time of the fatal accident. The AI system failed to detect a stationary vehicle on the highway, which directly led to the collision and the driver's death, fulfilling the criteria for harm to a person. The report also discusses the system's technical limitations and the risks of overreliance on such technology, confirming the AI system's role in causing the harm. Hence, this is an AI Incident as the AI system's malfunction and use directly caused injury and death.

Watch closely: a market turn is imminent; can the "bull market standard-bearer" break out of …

2021-08-15
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an autonomous driving feature in a vehicle. The use of this AI system directly led to a fatal traffic accident, causing injury and death, which fits the definition of an AI Incident (harm to a person). The article clearly links the AI system's use to the harm, and the harm has materialized. Other parts of the article discuss market and investment analysis unrelated to AI harm classification, but the core event is the fatal accident involving the AI system. Hence, the classification is AI Incident.

Ma Hongman: "King Ning" plunges! But you may have guessed the reason for the drop wrong

2021-08-17
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems in autonomous driving (Tesla's Autopilot and NIO's autonomous driving features) and the resulting fatal accidents, which constitute injury and harm to persons. The investigation into Tesla's system and the fatal accident in China demonstrate that the AI system's malfunction or failure has directly led to harm. This fits the definition of an AI Incident, as the AI system's use has directly caused injury and death. The discussion about the challenges of autonomous driving and regulatory recognition further supports the assessment that this is a realized harm scenario, not just a potential risk.

The New-Energy Bubble Is Bursting, and Autonomous Driving Is Only the Beginning! Tesla

2021-08-18
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems in the form of autonomous driving and driver-assist technologies (e.g., Tesla's Autopilot and NIO's assisted driving). It reports on actual accidents, including a fatal crash involving a vehicle with active assisted driving, and ongoing official investigations into these AI systems' safety. These events have directly led to harm (fatality) and regulatory scrutiny, fulfilling the criteria for an AI Incident. The harm is both physical (death in a traffic accident) and reputational/financial (market value loss, investigations). Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Another Accident!

2021-08-15
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the assisted driving feature (NOP) in a NIO ES8 vehicle. The use of this AI system directly led to a fatal traffic accident, causing harm to a person, which fits the definition of an AI Incident. The article also highlights the system's limitations and user expectations, reinforcing the causal link between AI system use and harm. Although the car company disputes the system being fully autonomous, the AI-assisted driving system's involvement is clear and pivotal in the incident.

With Autonomous Driving, "Haste Makes Waste"! Don't Treat Consumers as Guinea Pigs and Society as a Testing Ground

2021-08-18
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (automatic driving/driver assistance system) whose malfunction or failure to detect a hazard has directly led to a fatal accident, causing harm to a person. This fits the definition of an AI Incident because the AI system's use or malfunction has directly led to injury or harm to a person. The article also discusses regulatory and societal responses, but the primary focus is on the incident and its implications, not just complementary information. Therefore, the classification is AI Incident.

Nio denies tampering with data after fatal crash

2021-08-18
Nikkei Asia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Nio's Navigate on Pilot, an advanced driver assistance system) that was active during a fatal crash, directly leading to harm (death of the driver). The incident is under investigation, and the AI system's role is pivotal in the event. The harm is realized and directly linked to the AI system's use, fulfilling the criteria for an AI Incident. The company's statements and the context of similar investigations into driver assistance systems further support this classification.

Electric car accident raises autopilot concerns

2021-08-16
China Daily
Why's our monitor labelling this an incident or hazard?
The autopilot navigation system is an AI system involved in the accident. The death of the driver is a direct harm caused by the use of this AI system. The article explicitly states the accident occurred after activating the autopilot system, indicating the AI system's involvement in the harm. This fits the definition of an AI Incident because the AI system's use directly led to injury or harm to a person. The article also references similar past incidents and the current limitations of autonomous driving levels, reinforcing the context of AI-related harm.

Nio's Autopilot, NOP, Faces Intense Scrutiny With First Fatal Crash in China

2021-08-17
autoevolution
Why's our monitor labelling this an incident or hazard?
NOP is an AI system as it provides advanced driver assistance with autonomous features at Level 2, involving real-time decision-making and control. The fatal crash directly resulted from the use of this AI system, causing injury and death, which fits the definition of an AI Incident. The overreliance on the system by the driver and the system's failure to prevent the crash demonstrate the AI system's involvement in harm. The investigation and allegations of data erasure further highlight the AI system's central role in the incident. Hence, this event is classified as an AI Incident.

Update: EV-Maker Nio Denies Tampering With Vehicle Data After Fatal Crash

2021-08-18
caixinglobal.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the automated driving function (NOP) active during the fatal crash. The harm (death of a person) has occurred and is linked to the AI system's use and possible malfunction or limitations. The article also references the system's operational constraints and the lack of automatic emergency braking at the speed involved. Given the direct link between the AI system's use and the fatal injury, this is an AI Incident rather than a hazard or complementary information.

Nio ES8 Driver Dies While Driving with NOP Pilot - Pandaily

2021-08-15
Pandaily
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Nio's NOP pilot) actively engaged during a fatal traffic accident, leading to the death of the driver. The system's limitations in responding to static obstacles and the lack of mandatory driver training or instructional videos contributed to the incident. The harm (death) is directly linked to the use of the AI system, fulfilling the criteria for an AI Incident under the definition of injury or harm to a person caused by the use or malfunction of an AI system.

Why Nio Stock Rose Today | The Motley Fool

2021-08-24
The Motley Fool
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear from the description of the assisted-driving feature (Navigation on Pilot) that aids driving tasks. The fatal accident involving a vehicle using this feature constitutes harm to a person, fulfilling the criteria for an AI Incident. The company's response with a driver test is a complementary measure but does not negate the incident classification. The article focuses on the accident and its implications, not just the response or stock movement, confirming the event as an AI Incident.

Nio Stock Won't Rebound Soon, and Its Future Is Far From Settled

2021-08-26
InvestorPlace
Why's our monitor labelling this an incident or hazard?
The fatal autopilot crash mentioned in the article directly involves an AI system (autonomous driving technology) whose malfunction or failure led to injury or death, which qualifies as an AI Incident under the framework. Although the article's main focus is financial analysis, the underlying event of the fatal crash is an AI Incident due to the direct harm caused by the AI system's use.

After Fatal Crash, Nio Demands Owners to Take a Test Before Using NOP

2021-08-24
autoevolution
Why's our monitor labelling this an incident or hazard?
The NOP system is an AI-based advanced driver-assistance system that influences vehicle navigation and control. The fatal crash of a driver using NOP indicates direct harm to a person caused by the AI system's use. The company's requirement that owners pass a test before using the feature is a mitigation measure following the incident. Since the AI system's use directly led to injury and death, this qualifies as an AI Incident under the framework, specifically harm to a person resulting from the AI system's use.

Chinese auto-maker accused of altering data after fatal autonomous car accident

2021-08-24
theregister.com
Why's our monitor labelling this an incident or hazard?
The presence of a level-2 autonomous driving system qualifies as an AI system. The fatal collision while the AI system was engaged constitutes harm to a person. The alleged tampering with vehicle data by the manufacturer’s employees after the crash implicates the AI system's use and post-incident handling, which directly relates to the harm caused. Therefore, this event meets the criteria for an AI Incident due to the realized harm and the AI system's involvement in the event and subsequent data handling.

Police investigate claims EV maker Nio tampered with car data after crash · TechNode

2021-08-23
TechNode
Why's our monitor labelling this an incident or hazard?
The event describes a fatal car crash involving a vehicle with an active driver-assistance feature, which is an AI system. The investigation concerns alleged tampering with data from this AI system after the crash, data directly related to the harm (death) caused. Because the AI system was in use when the fatal harm occurred, the event fulfils the criteria for an AI Incident.

Nio Releases Assisted Driving Safety Tests to Car Owners - Pandaily

2021-08-25
Pandaily
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the NOP Pilot assisted driving system, which is an AI system involved in vehicle control. The event is a company initiative to educate users about the system's limitations and safe use after a fatal accident linked to the system. This initiative aims to reduce future harm by improving user understanding and safe operation. Since no new harm or plausible future harm is described here, and the main focus is on the company's response and safety education, this fits the definition of Complementary Information rather than an AI Incident or AI Hazard.