XPeng Assisted Driving System Fails to Detect Hazard, Causes Fatal Accident in Ningbo

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An XPeng vehicle in Ningbo, China, crashed when its lane-centering assisted driving system, while active, failed to detect a stalled vehicle ahead, resulting in injuries and a fatality. XPeng is cooperating with authorities in the investigation, an event that highlights the risks associated with AI-assisted driving systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (the driving assistance system) that was active during the accident. The collision caused a fatality, which is a direct harm to a person. The involvement of the AI system in the vehicle's operation and the resulting death qualifies this as an AI Incident under the definition of harm to health caused by AI system use.[AI generated]
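The rationales throughout this entry apply one recurring triage rule: an event involving an AI system is an AI Incident if harm has been realized, an AI Hazard if harm is plausible but not yet realized, and Complementary Information otherwise. A minimal sketch of that decision procedure, with the class name, fields, and labels purely illustrative (not the monitor's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Hypothetical summary of a monitored news event."""
    involves_ai_system: bool  # an AI system was developed, used, or malfunctioned
    harm_realized: bool       # death, injury, property damage, rights violation...
    harm_plausible: bool      # credible risk of imminent harm, e.g. a near miss

def classify(event: Event) -> str:
    """Triage rule as the rationales in this entry apply it (illustrative sketch)."""
    if not event.involves_ai_system:
        return "Unrelated"
    if event.harm_realized:
        return "AI Incident"
    if event.harm_plausible:
        return "AI Hazard"
    return "Complementary Information"
```

Under this rule the fatal Ningbo collision (AI system active, death occurred) classifies as an AI Incident, while the Xiaomi development-progress articles (AI system involved, no harm described) classify as Complementary Information.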
AI principles
Safety, Robustness & digital security, Accountability, Transparency & explainability, Human wellbeing

Industries
Mobility and autonomous vehicles, Consumer products

Affected stakeholders
Consumers

Harm types
Physical (death), Physical (injury), Economic/Property, Reputational, Psychological

Severity
AI incident

Business function:
Other

AI system task:
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

Latest Progress on Xiaomi's Car Project! Lei Jun: Autonomous Driving Aims for the Top Tier by 2024

2022-08-11
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving AI technology. However, it does not describe any incident or event where the AI system has caused harm or malfunctioned, nor does it indicate a plausible risk of harm occurring imminently. The information is primarily an update on development progress and strategic investment, which fits the definition of Complementary Information rather than an Incident or Hazard.
Popular Chinese Concept Stocks Mostly Closed Higher on Thursday; EV Stocks Extend Gains

2022-08-11
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the driving assistance system) that was active during the accident. The collision caused a fatality, which is a direct harm to a person. The involvement of the AI system in the vehicle's operation and the resulting death qualifies this as an AI Incident under the definition of harm to health caused by AI system use.
Assisted Driving Causes Trouble Again? XPeng P7 Reportedly Strikes and Kills a Person on a Ningbo Elevated Road; XPeng Responds

2022-08-12
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (driving assistance system using visual and radar fusion) whose malfunction (failure to recognize stationary obstacles and a person) directly led to a fatal accident. The harm is realized (death of a person), and the AI system's role is pivotal in the chain of events. Therefore, this qualifies as an AI Incident under the framework, as it caused injury or harm to a person due to AI system malfunction during use.
Baidu Publishes Patent for Sharing Autonomous Driving Decisions, Improving Inter-Vehicle Sharing Efficiency

2022-08-10
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The patent relates to an AI system involved in autonomous driving decision-making and sharing among vehicles. However, the event only reports the publication of a patent and does not describe any realized harm or incident resulting from the use or malfunction of the AI system. There is no indication of direct or indirect harm, nor a plausible immediate risk of harm from this patent publication alone. Therefore, this is a development in AI technology with no current or imminent harm, fitting the category of Complementary Information as it provides context on AI advancements and potential future improvements in autonomous vehicle coordination.
Xiaomi Discloses Autonomous Driving Progress: Initial R&D Investment of 3.3 Billion Yuan

2022-08-13
Guangming Online
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (autonomous driving technology) but does not report any actual or potential harm caused or plausibly caused by the AI system. It is a disclosure of progress and investment in AI technology, which is informative but does not meet the criteria for AI Incident or AI Hazard. Therefore, it is best classified as Complementary Information, as it provides context and updates on AI development without describing harm or risk of harm.
Chinese Stocks Brief: Fatal XPeng P7 Elevated-Road Collision Trends Online; Suspected Owner Speaks Out: Never Trust Assisted Driving

2022-08-11
The Wall Street Journal - China
Why's our monitor labelling this an incident or hazard?
The event describes a fatal accident involving a vehicle operating in assisted driving mode, which is an AI system application. The death of a person is a clear harm to health (a), and the AI system's use is implicated as a potential cause. Even though the investigation is ongoing, the direct link between the AI system's use and the fatality is sufficiently established to classify this as an AI Incident. The presence of the AI system in the assisted driving function and the resulting fatality meet the criteria for an AI Incident under the OECD framework.
Ministry of Transport: Autonomous and Highly Automated Vehicles in Commercial Transport Should Carry a Driver

2022-08-10
chinaz.com
Why's our monitor labelling this an incident or hazard?
The article discusses the development and use of AI systems in autonomous vehicles and the regulatory framework to ensure safety. However, it does not report any actual harm or incident caused by AI systems, nor does it describe a specific event where AI malfunction or misuse led to harm. Instead, it presents precautionary guidelines to mitigate potential risks. Therefore, this is best classified as Complementary Information, providing governance and safety context related to AI systems in autonomous driving.
XPeng "Racing Ahead", Assisted Driving "Out of Control"

2022-08-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, namely the advanced driver assistance systems (LCC and ACC) in the Xiaopeng P7 vehicle, which are AI-based systems designed to assist driving by perceiving the environment and controlling the vehicle. The accident occurred while these systems were active or expected to be active, and the failure to detect and warn about the hazard directly contributed to the fatal collision. This constitutes direct harm to a person caused by the malfunction or limitations of an AI system. Therefore, this event qualifies as an AI Incident under the OECD framework.
Traffic Police Respond to Fatal XPeng Collision: Not Yet Confirmed That the Assisted Driving Feature Caused the Crash

2022-08-11
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving a vehicle using an AI-based assisted driving system (LCC). The AI system was active during the incident, and the accident caused death, which is a direct harm to a person. Although the police have not confirmed the AI system caused the accident, the system's malfunction or failure to alert contributed to the harm. Therefore, this meets the criteria for an AI Incident due to direct harm linked to the AI system's use and malfunction.
Waymo Launches New Features to Improve the Ride Experience for Passengers with Disabilities

2022-08-10
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous vehicles to assist disabled passengers, which is explicitly described. However, the article does not report any harm or incident resulting from the AI system's development, use, or malfunction. Instead, it focuses on positive developments and improvements in accessibility. There is no indication of realized or potential harm, nor any legal or rights violations. Therefore, this is not an AI Incident or AI Hazard. The article provides complementary information about AI system deployment and societal benefits, fitting the definition of Complementary Information.
Why Is Apollo Always the One Getting Rear-Ended?

2022-08-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The events involve the use of an AI system—Baidu's Apollo autonomous driving system. The incidents are actual collisions where the AI system was involved in traffic accidents, though the fault was attributed to other drivers. The accidents caused property damage and potential safety risks, which qualifies as harm. Since the AI system's use directly relates to these harms (even if the AI was not at fault), and the article discusses the safety implications and limitations of the AI system, this constitutes an AI Incident. The article does not merely discuss potential risks or general AI developments but reports on real accidents involving AI systems causing harm (property damage and risk to persons).
Baidu "Apollo Go" Driverless Car in a Crash, Rear Wheel Knocked Off

2022-08-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event describes a traffic accident involving a fully autonomous vehicle operating without a safety driver, which implies the AI system's malfunction or failure contributed to the incident. The AI system's use directly led to harm (damage to vehicles), meeting the criteria for an AI Incident. Although no explicit mention of injuries is made, the damage to vehicles and the nature of the accident qualify as harm to property and possibly to persons. Therefore, this event is classified as an AI Incident.
Frightening Fatal Collision by XPeng P7 Owner: Experts' Verdict Points to a Problem with the AEB Automatic Emergency Braking

2022-08-13
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The vehicle's advanced driver assistance system, which includes AI components such as sensor fusion, radar processing, and autonomous emergency braking, failed to prevent a collision that resulted in a fatality. The AI system's malfunction or failure to act was a direct contributing factor to the harm. The description explicitly references the AI system's role and its failure, meeting the criteria for an AI Incident due to injury to a person caused by the AI system's malfunction during use.
New Carmakers, Stop Bragging About Intelligent Driving

2022-08-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (LCC, ACC, NOP, AEB) used in vehicles that have been involved in fatal accidents, with direct or indirect causation of harm to human life. The AI systems' malfunction or limitations in handling emergency situations, combined with user reliance, have led to injury and death, fulfilling the criteria for AI Incidents. Although investigations are ongoing, the harm has already occurred. The article also discusses misleading marketing claims but the primary classification is based on the realized harm from AI system use.
Frame-by-Frame Analysis of Xiaomi's Autonomous Driving Capabilities: Was Lei Jun's 3.3 Billion Yuan Well Spent?

2022-08-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article describes Xiaomi's autonomous driving system as an AI system with advanced capabilities and ongoing development. However, it does not mention any event where the AI system caused injury, property damage, rights violations, or other harms. Nor does it describe any near-miss or credible risk of harm that has materialized or is imminent. The article is primarily an informative update on the state of Xiaomi's AI-driven autonomous driving technology and its strategic context, which fits the definition of Complementary Information rather than an Incident or Hazard.
XPeng P7 on Assisted Driving Rear-Ends Vehicle Ahead, Throwing a Person into the Air; Surveillance Footage Leaked

2022-08-11
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically the Xiaopeng P7's advanced driver assistance system, which is an AI system as it performs real-time decision-making and control to assist driving. The accident occurred while the system was active, and the system failed to prevent the collision or provide warnings, which directly contributed to the harm caused (injury and possibly death). Therefore, this qualifies as an AI Incident because the AI system's malfunction or failure to act led directly to injury and harm to persons.
XPeng Salesperson Demos "Autonomous Driving" for a Customer: Rams the Car Ahead at 70 km/h, Airbags Deploy

2022-08-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The ACC system is an AI system that assists with vehicle speed and distance control. The sales representative's use of ACC in a test drive directly caused a collision, resulting in property damage and airbag deployment, which is a safety hazard. This constitutes an AI Incident because the AI system's use directly led to harm (property damage and potential injury). The event is not merely a hazard or complementary information, as the harm has already occurred due to the AI system's involvement.
Chongqing and Wuhan Roll Out New Robotaxi Policies, Matching California's Level of Openness

2022-08-09
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of AI systems (autonomous driving) in robotaxis. Although the article does not describe any realized harm or incident, the policy change enabling fully driverless commercial operation in complex urban environments plausibly increases the risk of AI-related incidents such as accidents or safety failures. Therefore, this constitutes an AI Hazard, as the development and use of these AI systems could plausibly lead to harm, even if no harm has yet occurred.
Fatal XPeng Assisted-Driving Crash: Who Is at Fault?

2022-08-11
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an assisted driving system (XPILOT 2.5) that was active during the fatal accident. The AI system's limitations in sensing and failure to detect driver distraction are directly linked to the incident causing death, fulfilling the criteria for an AI Incident. The article discusses the system's development, use, and malfunction leading to harm, and the harm (fatality) has occurred. Therefore, this is classified as an AI Incident.
Fatal XPeng Intelligent-Driving Crash: High-Speed Collision with a Stationary Vehicle in Good Lighting, with System Failures Inside and Out

2022-08-11
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the vehicle was operating in intelligent driving mode when it failed to detect a stationary vehicle on the highway, did not alert the driver, and did not engage emergency braking, resulting in a fatal collision. The AI system's malfunction (perception and driver monitoring failures) directly led to harm (death), fulfilling the definition of an AI Incident. The involvement of AI is clear from the description of the intelligent driving system, its sensors, and functions. The harm is realized and severe (fatality), and the AI system's failure is a direct contributing factor.
XPeng P7 Kills a Person on an Elevated Road; Suspected Owner Speaks Out: Never Trust Assisted Driving

2022-08-11
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NGP2.5 assisted driving feature, which is an AI system designed to assist driving tasks such as lane keeping and collision warning. The incident resulted in a fatality, which is a direct harm to a person. The driver indicates that the AI system failed to detect the hazard, which implies malfunction or failure to act. Therefore, the AI system's malfunction directly led to harm, meeting the criteria for an AI Incident.
Dissecting the XPeng P7 High-Speed Collision: Unable to Recognize Stationary Objects? Collision-Avoidance System Failure? Were Risk Warnings Adequate?

2022-08-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (XPILOT 2.5+ with ACC and LCC) used in an autonomous driving assistance context. The system's failure to detect and respond to a stationary obstacle (a person and vehicle) directly led to a fatal collision, fulfilling the harm criteria (injury or harm to persons). The article also discusses the failure of complementary AI safety features (DSM, FCW, AEB) to prevent the accident. The involvement of the AI system in the development, use, and malfunction stages is clear. The harm is realized and significant. Hence, the event meets the definition of an AI Incident rather than a hazard or complementary information.
Woman Lets Tesla Drive Itself; Child in the Car Marvels: Mom Drives Without Using Her Hands

2022-08-11
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Tesla's Level 2 automated driving assistance) that directly leads to a significant safety hazard. The driver's overreliance and distraction while the AI system is engaged create a real risk of injury or harm to persons, fulfilling the criteria for an AI Incident. The presence of a child and the driver's inattentiveness highlight the direct link to potential harm. The ongoing investigations and regulatory actions further support the seriousness of the incident.
Lei Jun: Xiaomi's Autonomous Driving Is 100% Developed In-House, Backed First by 3.3 Billion Yuan

2022-08-11
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article describes Xiaomi's development and testing of an AI system for autonomous driving, which is an AI system by definition. However, there is no mention of any harm or incident caused by this AI system. The article focuses on the development progress and investment, with no indication of realized or potential harm. Therefore, this event is best classified as Complementary Information, providing context on AI development and deployment in the automotive sector without reporting an incident or hazard.
Smart Vehicle Industry: Baidu Obtains Driverless Taxi Service Permit, ...

2022-08-12
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (autonomous driving AI in Baidu's driverless taxis and intelligent vehicle sensor suites). The use of AI is in commercial operation, but no harm or incident is reported. The article highlights policy changes enabling such use and the potential acceleration of intelligent vehicle adoption. Since no direct or indirect harm has occurred, and the article does not describe any near misses or plausible imminent harm, it does not qualify as an AI Incident or AI Hazard. The article mainly provides contextual and developmental information about AI in autonomous vehicles and related investment opportunities, fitting the definition of Complementary Information.
Traffic Police Respond to Fatal Collision by XPeng P7 Owner Using Assisted Driving on an Elevated Road: Still Under Investigation

2022-08-11
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, namely the assisted driving feature (LCC) of the Xiaopeng P7 car, which is an AI system that infers from sensor inputs to assist driving. The malfunction or failure of this AI system to detect a hazard and warn the driver directly contributed to a fatal accident, causing harm to a person. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to injury and death. The ongoing investigation does not negate the realized harm already reported.
Commentary on the Draft Autonomous Driving Policy: "Autonomous Vehicle Transport Safety...

2022-08-10
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The article centers on the publication and solicitation of public comments on a draft safety guideline for autonomous vehicles, describing regulatory and technological progress without reporting any realized harm or imminent risk. While autonomous driving systems involve AI, the content is about policy and industry development rather than an event causing or plausibly leading to harm. Therefore, it qualifies as Complementary Information, providing context and updates on AI governance and ecosystem evolution rather than describing an AI Incident or AI Hazard.
New Progress on Xiaomi's Car: First Batch of 140 Autonomous Driving Test Vehicles, Aiming for the Top Tier the Year After Next

2022-08-11
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
Xiaomi's autonomous driving technology clearly involves AI systems, as indicated by references to perception algorithms, high-precision mapping, and autonomous vehicle testing. The deployment of 140 test vehicles on public roads implies real-world use of AI systems. However, the article does not mention any accidents, injuries, rights violations, or other harms resulting from these AI systems. The event is about ongoing development and testing, which could plausibly lead to AI incidents in the future if failures or misuse occur. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
XPeng P7 Owner Hits a Person While Using Assisted Driving on an Elevated Bridge! Automaker Confirms Casualties

2022-08-11
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's LCC is an AI system designed to assist with lane centering but not full autonomous driving. The incident describes a direct collision caused by the system's failure to detect a stationary vehicle and pedestrian, leading to injury and death. The AI system's malfunction or inability to handle this scenario directly contributed to the harm. The involvement of the AI system in the development, use, and malfunction stages is clear, and the harm is realized. Therefore, this qualifies as an AI Incident under the framework.
Was Assisted Driving Behind the Fatal XPeng P7 Collision? Experts Say the Feature Demands Constant Attention

2022-08-13
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident where the AI-assisted driving system (XPILOT 2.5) failed to detect a stationary vehicle and did not alert the driver, who was momentarily distracted. This failure directly contributed to the collision causing death, which is a harm to a person. The AI system's involvement is clear and causal, meeting the definition of an AI Incident. Although the system is an L2-level assist and requires driver attention, the system's limitations and malfunction played a pivotal role in the harm. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Construction Begins on the Yumin South Road Smart Road in Jiading; L4 Autonomous Vehicle-Road Coordination Expected by Year-End

2022-08-09
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems (L4 autonomous driving and vehicle-road collaboration) but only describes the start of construction and future capabilities without any realized harm or direct risk. There is no indication of injury, rights violations, disruption, or other harms occurring or imminent. Therefore, this is best classified as an AI Hazard because the smart road and L4 autonomous driving system could plausibly lead to AI incidents in the future once operational, but no incident has yet occurred.
Riding the Autonomous Driving Wave! Inertial Navigation Systems Land at Several New Carmakers; These Listed Companies Are Already Positioned

2022-08-09
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the deployment and market prospects of inertial navigation systems in autonomous vehicles, which are AI-enabled systems. It mentions government policies and commercial pilot programs for autonomous driving, including fully driverless operations. However, it does not report any actual harm, malfunction, or misuse resulting from these AI systems. The content is primarily about technological progress, market potential, and regulatory developments, which fits the definition of Complementary Information. There is no direct or indirect harm described, nor a plausible immediate risk of harm from the information presented. Therefore, the event is best classified as Complementary Information rather than an AI Incident or AI Hazard.
500 Days into Xiaomi's Car Project, Industry Insiders Say: No Surprises in Its Autonomous Driving Technology

2022-08-12
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Xiaomi's autonomous driving AI) in development and testing phases. However, the article does not describe any realized harm or incident caused by the AI system. Instead, it reports on the progress, technical challenges, and skepticism from industry experts about Xiaomi's autonomous driving capabilities and goals. There is no indication of direct or indirect harm, nor a credible imminent risk of harm from the AI system at this stage. Therefore, the article is best classified as Complementary Information, providing context and updates on AI development and industry responses rather than reporting an AI Incident or AI Hazard.
Traffic Police Respond to Fatal XPeng Collision: Not Yet Confirmed That the Assisted Driving Feature Caused the Crash

2022-08-11
MyDrivers
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (intelligent driving assistance, LCC) during a fatal accident. The AI system's failure to provide warnings as expected and the driver's reliance on it contributed to the incident causing injury and death. Although the investigation is not conclusive, the AI system's involvement in the harm is direct or indirect. Therefore, this qualifies as an AI Incident due to injury and harm to persons linked to the AI system's use and malfunction.
Smart Vehicle Casualties Keep Mounting: Can Autonomous Driving Truly Free Our Hands?

2022-08-12
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of advanced driver-assistance and autonomous driving technologies. It reports actual traffic accidents where these AI systems were active or involved, leading to collisions and potential injuries or harm. The discussion of system limitations, sensor fusion challenges, and user overreliance further supports that the AI systems' malfunction or insufficient performance contributed to the incidents. Therefore, the event meets the criteria for an AI Incident due to direct or indirect harm caused by AI system use or malfunction.
Apollo Go Autonomous Vehicle Rear-Ended; Baidu Responds

2022-08-12
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as an autonomous driving vehicle operated by Baidu's Luobo Kuaipao. The accident occurred during normal testing, and the AI system was involved in the use phase. Although the harm was limited to property damage and no injuries occurred, the AI system's involvement in the accident is direct. Therefore, this qualifies as an AI Incident due to the realized harm (damage to the vehicle) caused during the use of an AI system.
XPeng's "Fire and Ice": G9 Orders Top 20,000 in 24 Hours While an LCC Accident Sparks a Liability Dispute

2022-08-12
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (LCC autonomous driving assistance) that directly led to a serious traffic accident causing injury. The AI system failed to detect a stopped vehicle and did not brake in time, contributing to the harm. The event involves the use and malfunction of the AI system, resulting in injury to a person, which fits the definition of an AI Incident. The discussion about legal responsibility and regulatory gaps further supports the significance of the incident. Although the article also mentions the successful sales of another AI-equipped vehicle model, this is background context and does not detract from the classification of the accident as an AI Incident.
New Progress for Xiaomi's Car Project: Is Whole-Vehicle Manufacturing the Next Focus?

2022-08-12
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
Xiaomi's autonomous driving project involves AI systems actively under development and testing, which could plausibly lead to future AI incidents if malfunctions or misuse occur. However, the article does not describe any actual harm or incidents caused by these AI systems. It mainly reports on progress, investments, and plans, which fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the event.
What Should You Do If You Collide with a Driverless Car?

2022-08-10
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article clearly describes real-world accidents involving AI systems (autonomous vehicles) that have caused harm to people (injury to a cyclist), property damage (vehicle damage), and traffic disruptions. The involvement of AI systems is explicit, and the harms have materialized, meeting the criteria for AI Incidents. The discussion of regulatory responses and safety investigations supports the classification but does not override the presence of incidents. Therefore, the event is classified as an AI Incident.
The "Hype and Reality" of Xiaomi's Car Project

2022-08-13
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article focuses on Xiaomi's development and strategic positioning in autonomous driving and car manufacturing, describing investments, technology capabilities, and market plans. It does not describe any event where an AI system caused harm or malfunctioned leading to harm, nor does it present a credible risk of future harm. The AI system involvement is present (autonomous driving AI), but only in a developmental and strategic context without harm or plausible harm. Therefore, this is Complementary Information as it provides context and updates on AI development and corporate strategy without reporting an AI Incident or AI Hazard.
Lei Jun: 3.3 Billion Yuan Initial R&D Investment in Xiaomi's Autonomous Driving, Aiming for the Top Tier in 2024

2022-08-11
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (autonomous driving technology) but does not report any harm or malfunction leading to injury, rights violations, or other harms. The article highlights ongoing testing and investment without any indication of accidents, failures, or risks materializing. Therefore, it does not qualify as an AI Incident. While there is potential future risk inherent in autonomous driving technology, the article does not emphasize or warn about plausible future harm or hazards. The main focus is on progress and strategic development, which fits the category of Complementary Information as it provides context and updates on AI ecosystem developments without reporting harm or credible risk of harm.
500 Days into Carmaking, Lei Jun Announces Xiaomi's Autonomous Driving Will Reach the Top Tier in 2024

2022-08-11
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article focuses on Xiaomi's development and testing of an autonomous driving AI system, highlighting investment and technical capabilities. There is no mention of any realized harm, malfunction, or misuse of the AI system. While autonomous driving AI systems have inherent risks, the article does not describe any event where harm has occurred or is imminent. Therefore, this is a report on AI system development and progress without direct or plausible harm, fitting the category of Complementary Information rather than an Incident or Hazard.
A Horrific Scene: XPeng Responds to the Fatal P7 Elevated-Road Collision, Expressing Grief and Sorrow for the Victim

2022-08-11
MyDrivers
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (LCC assisted driving) that failed to detect a hazard (a broken-down vehicle and a person), leading directly to a fatal injury. The AI system's malfunction and its use in the vehicle are central to the incident. This meets the definition of an AI Incident as the AI system's malfunction and use directly caused harm to a person.
The Automotive Cloud: Autonomous Driving's Hidden Battleground

2022-08-12
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the infrastructure and service ecosystem supporting AI for autonomous driving, emphasizing cloud computing's role in enabling large-scale data processing, model training, and simulation. It does not describe any event where AI system development, use, or malfunction has directly or indirectly caused harm (physical, legal, or societal). Nor does it identify a credible risk of imminent harm from these AI systems. The content is primarily informative and analytical, providing complementary information about AI ecosystem developments and industry responses. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
[Global Hot Companies Weekly] XPeng Responds to the P7 Assisted-Driving Accident; BYD's Blade Batteries Now Supplied to Tesla; First Fully Driverless Commercial Operation Licences Issued; First Domestic Oral COVID-19 Drug Priced at 270 Yuan per Bottle; Kuaishou Unveils Its B2B Business Brand

2022-08-12
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the assisted driving system in the Xpeng P7 vehicle. The accident caused injury and death, which is a direct harm to a person. The AI system's use is directly linked to the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to a person.

Woman lets her Tesla drive itself! Child in the car marvels: Mom can drive without her hands

2022-08-11
驱动之家
Why's our monitor labelling this an incident or hazard?
The Tesla Autopilot system is an AI system providing Level 2 driving assistance, which requires active driver supervision. The driver in the video is misusing the system by not paying attention and allowing the car to drive itself on a curve, which is unsafe and can lead to accidents. The presence of a child in the car heightens the potential harm. The article references past accidents and regulatory investigations linked to this AI system's use. Therefore, the event involves the use and misuse of an AI system that directly leads to a risk of injury or harm, fitting the definition of an AI Incident.

Spanish media: Volkswagen to buy in autonomous driving technology

2022-08-13
cankaoxiaoxi.com
Why's our monitor labelling this an incident or hazard?
The event describes Volkswagen purchasing AI-enabled autonomous driving technology to be deployed in their vehicles. While this involves AI systems and their future use, the article does not report any realized harm or incidents caused by these AI systems. The focus is on the acquisition and planned integration of AI technology, which could plausibly lead to future AI-related risks but does not describe any current harm or malfunction. Therefore, this qualifies as an AI Hazard, reflecting the plausible future risk associated with deploying autonomous driving AI systems.

Distracted while assisted driving was engaged: XPeng P7 rear-ends stalled car on elevated road, throwing a person into the air

2022-08-11
驱动之家
Why's our monitor labelling this an incident or hazard?
The event describes a real accident where an AI-based driving assistance system was active but failed to prevent a collision, leading to serious injury. The AI system's failure to warn or intervene directly contributed to the harm. The driver's distraction and reliance on the system further implicate the AI system's role. This meets the criteria for an AI Incident because the AI system's malfunction and use directly led to injury, fulfilling harm category (a).

Fatal XPeng P7 collision on elevated road! Purported owner speaks out: never trust assisted driving again

2022-08-11
驱动之家
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's NGP2.5 system is an AI-based advanced driver assistance system. The accident involved the vehicle failing to detect or respond to a person near a disabled vehicle, leading to a fatal collision. The driver's statement confirms reliance on the AI system and its failure to recognize the hazard. This constitutes direct harm caused by the AI system's malfunction or inadequate performance. The event meets the criteria for an AI Incident as it involves an AI system's use leading directly to injury or death.

XPeng P7 on assisted driving rear-ends car ahead, person thrown into the air! Surveillance footage leaked

2022-08-11
驱动之家
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's driving assistance system is an AI system as it performs real-time decision-making and control functions such as adaptive cruise control, lane keeping, and collision avoidance. The accident occurred while the system was active, and it failed to warn or prevent the collision, directly causing injury and damage. This meets the criteria for an AI Incident because the AI system's malfunction or failure to act led directly to harm to a person and property. The description confirms the AI system's involvement and the resulting harm, so it is not merely a hazard or complementary information.

XPeng P7 strikes and kills a person! Whose fault is it?

2022-08-11
驱动之家
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's lane-centering and adaptive cruise control functions are AI systems that process sensor data to assist driving. The accident occurred while these AI features were active, and their limitations in detecting static obstacles contributed to the collision. The driver's distraction and failure to intervene in time also played a role, but the AI system's inability to reliably detect and respond to the hazard was pivotal. The harm (death and injury) has materialized, meeting the criteria for an AI Incident. The article also discusses the broader context of risks and technical challenges in AI driving assistance, but the primary focus is the realized harm caused by the AI system's use and malfunction.

Xiaomi car's autonomous driving debut: full-scenario autonomous driving and automatic charging

2022-08-11
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems for autonomous driving and automated charging robots. However, the article does not mention any harm or incidents resulting from these AI systems, nor does it indicate any plausible immediate risk of harm. It is primarily an update on Xiaomi's AI technology progress and testing, without any realized or imminent harm. Therefore, it qualifies as Complementary Information, providing context and updates on AI system development and deployment without reporting an AI Incident or AI Hazard.

Lei Jun: Xiaomi's autonomous driving will be 100% developed in-house, with an initial 3.3 billion yuan investment

2022-08-11
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Xiaomi's development and testing of an AI system for autonomous driving in their cars. While no harm has been reported, the deployment and testing of autonomous driving AI systems on public roads inherently carry plausible risks of harm (e.g., accidents, injury) if the system malfunctions or fails. Therefore, this situation represents an AI Hazard due to the credible potential for future harm from the use of the AI system in real-world conditions.

Baidu "Luobo Kuaipao" robotaxi in a crash; response: it was rear-ended by the vehicle behind

2022-08-12
China News
Why's our monitor labelling this an incident or hazard?
The event describes a traffic accident involving an autonomous vehicle, which is an AI system. The accident was caused by another vehicle rear-ending the autonomous taxi, so the AI system was involved in the incident but was not at fault. There was property damage but no injury or other harm. Since the AI system's use directly led to property damage (harm to property), this qualifies as an AI Incident under the definition (d).

Fatal XPeng P7 collision on elevated road: who is to blame?

2022-08-11
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Xiaopeng's XPILOT 2.5 lane keeping and adaptive cruise control system) during the accident. The system's failure to detect and respond to a stationary obstacle on the road directly contributed to the fatal collision. The harm (death and injury) has occurred, fulfilling the criteria for an AI Incident. The driver's distraction and failure to take over in time are noted, but the AI system's limitations and malfunction are central to the incident. The article also discusses the broader context of risks and technical shortcomings in AI driving assistance, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

AEB automatic braking did not intervene: a Li Auto ONE rear-ends an engineering vehicle on the highway

2022-08-13
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as active (ACC and lane keeping assist) during the incident. The AI system did not engage automatic braking to avoid collision, leading to a rear-end crash causing harm (vehicle damage and airbag deployment). This qualifies as an AI Incident because the AI system's malfunction or failure to act directly led to physical harm and property damage. The presence of the AI system, its failure to intervene, and the resulting harm meet the criteria for an AI Incident rather than a hazard or complementary information.

Frightening: XPeng P7 owner hits and kills a person; experts' review points to a problem with the AEB automatic braking

2022-08-13
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the assisted driving system (LCC, ACC, NGP) in the Xiaopeng P7 vehicle. The system's failure to detect a stationary vehicle and pedestrian, and the malfunction of the active emergency braking system, directly led to a fatal injury, fulfilling the criteria for harm to a person. The AI system's malfunction is a direct cause of the incident, and the involvement of AI in the development and use phases is clear. Therefore, this is an AI Incident.

Was Lei Jun's 3.3 billion yuan well spent? A frame-by-frame analysis of Xiaomi's autonomous driving capabilities

2022-08-12
驱动之家
Why's our monitor labelling this an incident or hazard?
The article details Xiaomi's autonomous driving technology development and demonstrations, which involve AI systems for perception, decision-making, and control. However, it does not report any realized harm, injury, rights violations, or disruptions caused by these AI systems. It also does not describe any plausible future harm or risks stemming from the technology. Instead, it provides complementary information about the state of Xiaomi's AI system development, investments, and strategic focus. Therefore, the event is best classified as Complementary Information rather than an AI Incident or AI Hazard.

XPeng salesperson demos "autonomous driving" for a customer: slams into the car ahead at 70 km/h, airbags deployed

2022-08-12
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (adaptive cruise control, ACC) used in a real-world driving scenario. The system's use directly caused a collision with another vehicle, resulting in significant property damage and airbag deployment, which indicates a malfunction or misuse of the AI system. Although no injuries occurred, the harm to property and the risk to human safety are clear. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Baidu "Luobo Kuaipao" driverless car in a crash; rear wheel knocked off

2022-08-12
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an autonomous vehicle ('Luobo Kuaipao' unmanned taxi) involved in a collision causing damage to itself and another vehicle. The autonomous taxi operates without a safety driver, implying reliance on AI for driving decisions. The damage to vehicles constitutes harm to property, fulfilling the criteria for an AI Incident. The AI system's use and possible malfunction or failure to prevent the accident directly led to the harm. Hence, this event is classified as an AI Incident.

Fatal XPeng P7 crash with no automatic braking: digging into the accident and the alarming failure of the assistance system

2022-08-12
驱动之家
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's driver assistance system is an AI system that uses multiple sensors and AI algorithms for perception and decision-making. The system failed to detect a stationary vehicle and pedestrian, did not warn the driver, and did not automatically brake, leading to a fatal collision. The AI system's malfunction directly caused harm to a person, fulfilling the criteria for an AI Incident. The event involves the use and malfunction of the AI system, and the harm (death) has occurred, so it is not merely a hazard or complementary information.

Lei Jun posts Xiaomi car autonomous driving demo video: 500 days of all-out R&D covering all scenarios

2022-08-12
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Xiaomi's autonomous driving technology, which is an AI system performing complex real-time decision-making tasks such as navigation, obstacle avoidance, and automated charging. However, the event only showcases progress and demonstration without any reported harm or malfunction. There is no indication of injury, rights violations, or other harms caused or imminent. Therefore, this event represents a plausible future risk scenario where the AI system could lead to harm once deployed, but no harm has yet occurred. Hence, it qualifies as an AI Hazard rather than an Incident or Complementary Information.

Morning briefing | Lei Jun unveils Xiaomi autonomous driving for the first time / iPhone 14 may launch early / XPeng responds to assisted-driving accident

2022-08-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically Xiaopeng's autonomous driving assistance (XPilot with LCC). The accident caused direct harm to a person (fatality), fulfilling the criteria for an AI Incident. The AI system's use and possible failure to warn or prevent the collision are central to the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

XPeng car kills a pedestrian: should autonomous driving take the blame?

2022-08-13
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7 vehicle uses an AI system for assisted driving (LCC and NGP features). The fatal accident occurred while the AI-assisted driving feature was active, and the driver admitted to being distracted, suggesting overreliance on the AI system. The AI system's malfunction or limitations indirectly contributed to the harm (death). Prior similar incidents and complaints reinforce the pattern of AI-related safety issues. Therefore, this event qualifies as an AI Incident due to direct involvement of an AI system in causing injury and death.

An early look: these are the possibilities for the Xiaomi car in 2024

2022-08-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article focuses on Xiaomi's development and testing of autonomous driving AI systems and related technologies, describing progress and plans without any mention of harm, malfunction, or misuse leading to injury, rights violations, or other harms. It does not describe any event where AI caused or could plausibly cause harm. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it provides contextual and complementary information about AI development and industry progress, fitting the definition of Complementary Information.

Who should be held responsible for the fatal XPeng P7 crash?

2022-08-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the assisted driving features including LCC, ACC, sensors, and driver monitoring system) whose malfunction and failure to detect a stationary obstacle directly led to a fatal injury. The harm is realized (death of a person), and the AI system's role is pivotal in the incident. The discussion of the driver's distraction and the system's limitations further supports the classification as an AI Incident rather than a hazard or complementary information. Therefore, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's malfunction during its use.

500 days of Xiaomi car-making; industry insiders: no surprises in the autonomous driving technology

2022-08-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Xiaomi's autonomous driving AI system and its development and testing phases, confirming AI system involvement. However, it does not report any injury, property damage, rights violations, or other harms caused by the AI system. The criticisms and doubts are about the system's current performance and future prospects, which do not constitute an AI Incident or AI Hazard. The content mainly provides context, expert opinions, and industry analysis about Xiaomi's AI autonomous driving efforts, fitting the definition of Complementary Information rather than an Incident or Hazard.

After the XPeng crash, do you still dare to use assisted driving?

2022-08-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the Xiaopeng P7's driver assistance system, which failed to recognize a stationary vehicle and pedestrian, leading to a fatal crash. This is a direct harm to human life caused by the malfunction of an AI system in use. The article provides detailed evidence of the AI system's role in the incident and the resulting injury and death, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's failure is a contributing factor. Therefore, this event qualifies as an AI Incident.

XPeng P7 on assisted driving rear-ends car ahead; surveillance footage leaked

2022-08-11
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's driving assistance system, which includes AI-based features such as adaptive cruise control, lane keeping, automatic emergency braking, and forward collision warning, was engaged during the accident. The system failed to warn or prevent the collision, which directly caused injury (a person was hit and thrown) and property damage (vehicles pushed). This constitutes an AI Incident because the AI system's malfunction or failure to act directly led to harm to persons and property.

XPeng P7 owner in Ningbo reportedly struck and killed a person while using assisted driving on an elevated road! XPeng responds

2022-08-11
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The incident clearly involves an AI system (the assisted driving system) whose malfunction or failure to warn contributed to a fatal crash. The harm (death of a person) has occurred and is directly linked to the use and malfunction of the AI system. Therefore, this qualifies as an AI Incident under the definition of an event where the use or malfunction of an AI system has directly led to injury or harm to a person.

XPeng responds to owner's suspected assisted-driving rear-end collision: fully cooperating with the investigation

2022-08-11
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically the assisted driving features LCC and ACC, which use radar and AI algorithms to control vehicle speed and lane positioning. The collision caused injury and death, fulfilling the harm criteria. The AI system's inability to detect a stationary obstacle and the driver's distraction while relying on the system directly contributed to the incident. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use and limitations.

Traffic police respond to the fatal XPeng collision: the driver involved has been summoned

2022-08-11
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the assisted driving function) that may have contributed to a fatal accident, which is a direct harm to a person. Although the investigation is ongoing and causation is not yet confirmed, the presence of a serious harm and the AI system's potential role meet the criteria for an AI Incident. The event is not merely a hazard or complementary information because harm has occurred and AI involvement is plausible and under investigation.

Lei Jun unveils Xiaomi's autonomous driving technology for the first time: initial 3.3 billion yuan R&D investment, targeting the top tier by 2024

2022-08-11
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article describes the development and testing of an AI system (autonomous driving technology) but does not mention any realized harm or direct/indirect incidents caused by the AI system. Although the technology is in testing and aims to enter the market, there is no indication of plausible harm or hazards at this stage. Therefore, it does not meet the criteria for AI Incident or AI Hazard. It is not merely general product news but a disclosure of development progress, which fits best as Complementary Information enhancing understanding of the AI ecosystem and its evolution.

A "speeding" XPeng, an "out-of-control" assisted driving system

2022-08-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system: the LCC (Lane Centering Control) and ACC (Adaptive Cruise Control) driver assistance features, which are AI systems designed to assist driving by perceiving the environment and controlling the vehicle. The accident occurred while these systems were reportedly active or intended to be active, and the failure to timely detect and warn about the obstacle directly contributed to the collision and fatality. This meets the definition of an AI Incident because the AI system's malfunction or limitations led directly or indirectly to injury and death (harm to a person). The article also discusses the broader context of AI-assisted driving risks and regulatory challenges, but the core event is a realized harm caused by AI system use/malfunction, not just a potential hazard or complementary information.

Lei Jun unveils Xiaomi autonomous driving, aiming for the top tier by 2024 / iPhone 14 may launch early / XPeng responds to assisted-driving accident

2022-08-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Xiaopeng's XPilot LCC) actively in use at the time of a fatal accident. The AI system's failure to warn or prevent the collision directly contributed to the harm (death of a person). This meets the definition of an AI Incident as the AI system's malfunction or failure in use led directly to injury or harm to a person. The article also references a prior similar incident, reinforcing the significance of the harm caused by the AI system's malfunction. Other parts of the article about Xiaomi's autonomous driving development and product launches do not describe realized harm and thus are not incidents or hazards. Therefore, the classification focuses on the Xiaopeng accident as an AI Incident.

Fatal XPeng P7 crash! Why do assisted-driving accidents keep happening?

2022-08-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-based assisted driving system (XPilot 3.0) with multiple sensors and AI perception algorithms. The system failed to detect a stationary vehicle and did not provide warnings or automatic braking, directly leading to a fatal collision. This is a clear case where the AI system's malfunction caused injury and death, fulfilling the criteria for an AI Incident. The harm is realized and directly linked to the AI system's failure in perception and emergency braking functions.

500 days into Xiaomi's EV venture: Lei Jun announces the latest progress

2022-08-11
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article describes Xiaomi's ongoing development and testing of autonomous driving AI systems, which qualifies as an AI system. However, there is no indication of any harm, malfunction, or misuse resulting from this AI system at this stage. The information is primarily an update on progress and future plans, without any mention of realized or potential harm. Therefore, this is best classified as Complementary Information, providing context and updates on AI development without reporting an AI Incident or AI Hazard.

Lei Jun posts Xiaomi car autonomous driving demo video

2022-08-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous driving technology) in active development and demonstration. However, the article does not report any harm or incident resulting from the AI system's use or malfunction. It is a progress update and demonstration of technology capabilities without mention of accidents, injuries, rights violations, or other harms. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on AI system development and deployment progress.

Tesla suddenly accelerates into a residential-compound guardrail; owner: exactly what I feared

2022-08-12
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The incident involves a Tesla Model 3, which incorporates AI systems for driving assistance and control. The sudden unintended acceleration and loss of control, even without autopilot engaged, suggests a malfunction or failure in the AI or software system managing the vehicle's acceleration. This malfunction directly caused property damage and posed a risk of injury, fulfilling the criteria for an AI Incident due to harm caused by AI system malfunction in use.

Safeguarding "driverless driving" on its way to the future

2022-08-11
南方网
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving AI, as it discusses fully driverless vehicles and their commercial operation. However, it does not describe any event where the AI system's development, use, or malfunction has directly or indirectly caused harm (such as injury, rights violations, or property damage). Nor does it indicate any plausible imminent risk of harm. The focus is on policy support, technological progress, and positive outlook, which fits the definition of Complementary Information. Therefore, the event is best classified as Complementary Information rather than an Incident or Hazard.

2022-08-10
news.cn
Why's our monitor labelling this an incident or hazard?
The article focuses on the development, deployment, and regulatory framework of autonomous driving technologies, which involve AI systems. However, it does not describe any realized harm or incident caused by these AI systems, nor does it highlight any credible risk or hazard that could plausibly lead to harm. Instead, it provides context on the evolving AI ecosystem in autonomous driving, including company activities and government guidelines. Therefore, it fits best as Complementary Information, providing background and updates without reporting an AI Incident or AI Hazard.

Chongqing and Wuhan take the lead: fully driverless commercial autonomous driving operations begin trial runs

2022-08-10
news.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous driving AI at L4 level) in commercial operations without safety drivers, which is a significant step in AI deployment. Although no harm or accident is reported, the nature of fully driverless autonomous vehicles operating on public roads inherently carries plausible risks of injury, property damage, or disruption. The article focuses on the start of such operations and the regulatory environment, indicating a credible potential for future harm. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident, but no actual harm has yet occurred.

Watching videos, sending messages... Beijing keeps stepping up penalties for "distracted driving"

2022-08-11
The Paper
Why's our monitor labelling this an incident or hazard?
The upgraded monitoring devices use AI algorithms to detect distracted-driving behaviors, which are a direct cause of traffic accidents, injuries, and fatalities. The AI system is used to capture these behaviors and enforce penalties, placing it squarely in the use phase of harm prevention and enforcement. Because the article reports actual harm caused by distracted driving and describes the AI system's role in identifying such behavior, the monitor labels this an AI Incident on the grounds of the AI's direct involvement in addressing harms to people's health and safety.

[Editorial] Safety is the prerequisite for autonomous driving

2022-08-12
The Paper
Why's our monitor labelling this an incident or hazard?
The article explicitly references an accident involving a vehicle operating in assisted driving mode, which implies the involvement of an AI system (assisted driving/NGP). The accident resulted in a fatality, which constitutes harm to a person, so the use of an AI system directly or indirectly led to harm and the event qualifies as an AI Incident. The article, however, focuses mainly on broader safety, regulatory, and liability questions rather than on new details of the incident itself, using it as a case study of the challenges of AI system safety and legal responsibility in autonomous driving. Given the fatal accident linked to AI-assisted driving, the classification as an AI Incident is appropriate.

Lei Jun unveils Xiaomi's autonomous driving technology for the first time: full-stack in-house algorithms, targeting the top tier by 2024

2022-08-11
Techweb
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving technology with AI algorithms) in its development and testing phase. However, there is no indication of any injury, rights violation, disruption, or other harm caused or plausibly caused by the AI system. The article is primarily an update on the progress and strategic plans of Xiaomi's AI autonomous driving technology, which fits the definition of Complementary Information as it provides context and development updates without reporting any incident or hazard.

XPeng responds to the "Ningbo assisted-driving accident": fully cooperating with the investigation

2022-08-11
Techweb
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically the driving assistance system of the Xiaopeng P7, which is an AI system that influences vehicle control decisions. The accident caused injury and death, which constitutes harm to persons. The AI system's malfunction or failure to warn is directly linked to the harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to injury and death.

EV startups, stop bragging about intelligent driving

2022-08-12
Techweb
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions intelligent driving assistance systems (LCC, NOP, AEB) which are AI systems designed to assist vehicle control. Multiple accidents with fatalities have occurred while these systems were active, indicating direct or indirect causation of harm. The systems failed to provide adequate warnings or intervention, leading to deaths and injuries, which qualifies as harm to persons. The article also discusses the overpromising marketing that may have contributed to misuse or overreliance, further supporting indirect causation. Although investigations are ongoing, the harm has already occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

Lei Jun: Xiaomi's autonomous driving will be 100% developed in-house, with an initial 3.3 billion yuan investment

2022-08-12
Techweb
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Xiaomi's development and deployment of an AI system for autonomous driving in their vehicles, with ongoing testing on public roads. However, there is no mention of any harm, malfunction, or incident caused by the AI system. The announcement focuses on progress and investment without any indication of realized or potential harm. Therefore, this is not an AI Incident or AI Hazard. It is a general update on AI system development and deployment, which fits the definition of Complementary Information as it provides context and progress details about AI in the automotive sector without reporting harm or risk.

XPeng car kills a pedestrian: whose fault is it?

2022-08-11
huxiu.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the Xiaopeng XPILOT 2.5 driver assistance system, which includes AI components such as camera and radar-based perception and control algorithms. The system was active during the accident and failed to prevent the collision, partly due to technological limitations and the driver's distraction. The harm (death and injury) has occurred, and the AI system's malfunction or insufficient capability is a contributing factor. The article also discusses the broader context of AI-assisted driving risks and regulatory challenges, reinforcing the classification as an AI Incident rather than a hazard or complementary information. The presence of realized harm caused or contributed to by the AI system's use meets the criteria for an AI Incident.

2022-08-12
雪球
Why's our monitor labelling this an incident or hazard?
The article does not describe any specific AI incident or harm caused by AI systems, nor does it report a plausible future harm event. It mainly provides an analysis and clarification of the current technological and regulatory landscape of autonomous driving in China. There is no direct or indirect harm described, nor a hazard event. Therefore, it is best classified as Complementary Information, as it provides context and understanding about AI system deployment and policy environment without reporting an incident or hazard.

Still under investigation: XPeng responds to the Ningbo assisted-driving accident

2022-08-11
太平洋汽车网
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of an AI system (LCC lane centering assistance) that was active at the time of the accident. The system's failure to detect a stationary vehicle and warn the driver, combined with the driver's distraction, led to a fatal collision. This constitutes indirect harm caused by the AI system's malfunction or limitations. Therefore, this qualifies as an AI Incident due to injury and harm to a person caused directly or indirectly by the AI system's use.

Tesla suddenly accelerates; owner: exactly what I feared

2022-08-12
雪球
Why's our monitor labelling this an incident or hazard?
The Tesla vehicle likely uses AI systems for driving assistance and control. The sudden unintended acceleration causing a crash is a malfunction of the AI system or its integration with vehicle controls. This malfunction directly led to property damage and potential harm to people, fulfilling the criteria for an AI Incident. The driver was not using autopilot, indicating the issue may be related to AI components involved in vehicle control beyond autonomous driving features. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction.

膝盖中箭囧水柜: On the afternoon of August 10, 2022, Wang Chongkai (male, 32, Jiangbei District, Ningbo) was driving small passenger car 浙BD11503 (by the driver's own account, the assisted driving function was enabled at the time) along the airport … in Fangqiao Subdistrict, Fenghua District

2022-08-11
i.jandan.net
Why's our monitor labelling this an incident or hazard?
The assisted driving feature qualifies as an AI system because it provides automated driving assistance. The accident caused injury (death) and property damage, which are harms defined under AI Incident criteria. The AI system's use was a contributing factor, even if driver fatigue and illegal parking also played roles. Therefore, this event meets the definition of an AI Incident due to the direct involvement of an AI system in causing harm.

Fuyou Truck's Chen Guanling: We Will Open Commercial Operation Scenarios to Autonomous Driving Companies

2022-08-12
m.tech.china.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving technology for freight trucks, which fits the definition of an AI system. However, the content focuses on the promotion, development, and future commercial deployment of these AI systems without describing any direct or indirect harm caused by their use or malfunction. There is no mention of accidents, injuries, rights violations, or other harms occurring due to AI. Nor does it present a credible imminent risk or hazard from the AI systems described. The main narrative is about technological progress, industry collaboration, and strategic plans, which aligns with providing complementary information about AI ecosystem developments rather than reporting an incident or hazard. Therefore, the event is best classified as Complementary Information.

Jiakao Baodian's AI Coach, Developed by Mucang Technology: Making Driving Behavior More Standardized

2022-08-10
m.tech.china.com
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as an AI-powered driver monitoring and coaching tool that detects unsafe driving behaviors and intervenes to prevent accidents. The article focuses on the system's capabilities and upgrades to enhance safety, with no mention of any malfunction, misuse, or harm caused by the AI. Since the system is intended to reduce harm and no harm has occurred or is implied, this does not qualify as an AI Incident or AI Hazard. It is not merely general AI news but provides detailed information about an AI system's deployment and its safety role, which is complementary information about AI's positive impact and ongoing development in the ecosystem.

Encouraging Autonomous Vehicles to Operate Commercially in Certain Scenarios

2022-08-11
auto.cri.cn
Why's our monitor labelling this an incident or hazard?
The article centers on a policy guideline encouraging and regulating the use of autonomous driving vehicles in specific scenarios, emphasizing safety and prudence. It does not report any actual harm, malfunction, or misuse of AI systems leading to injury, rights violations, or other harms. The content is about future-oriented regulatory and technical guidance, representing a governance and industry development update rather than an incident or hazard. Therefore, it fits the definition of Complementary Information, as it provides context and policy response related to AI systems without describing a specific AI Incident or AI Hazard.

Fatal XPeng P7 Elevated-Road Collision Trends on Social Media! Suspected Owner Speaks Out: Never Trust Assisted Driving Again

2022-08-11
和讯网
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's assisted driving system is an AI system designed to aid driving tasks such as lane keeping and collision warning. The accident involved the AI system failing to detect a hazard (a person near a disabled vehicle) and prevent a collision, leading directly to a fatality. The event involves the use and malfunction of an AI system causing injury and death, which fits the definition of an AI Incident. The investigation and statements from the vehicle owner and company confirm the AI system's role in the incident.

XPeng Responds to Fatal P7 Elevated-Road Collision but Does Not Say Whether the Owner Had NGP Assisted Driving Enabled

2022-08-11
和讯网
Why's our monitor labelling this an incident or hazard?
The event describes a fatal accident involving a vehicle equipped with AI-assisted driving features. The AI system's failure to recognize the hazard and prevent the collision directly led to a person's death, fulfilling the criteria for harm to a person caused by AI system use or malfunction. The presence of the AI system is explicit (LCC and possibly NGP), and the harm is realized. Although the company has not confirmed whether NGP was active, the assisted driving system was engaged, and the accident occurred under its operation. Hence, this is an AI Incident.

Encouraging Autonomous Vehicles to Operate Commercially in Certain Scenarios

2022-08-10
和讯网
Why's our monitor labelling this an incident or hazard?
The article centers on a policy guideline encouraging and regulating the use of autonomous driving vehicles in specific scenarios. It involves AI systems (autonomous driving technology) and their intended use but does not describe any realized harm or incident resulting from AI system malfunction or misuse. Instead, it highlights safety precautions, regulatory frameworks, and the need for further technical validation and standards development. Therefore, it is best classified as Complementary Information, as it provides context and governance-related updates about AI systems without reporting an AI Incident or AI Hazard.

Autonomous Vehicle From Luobo Kuaipao Rear-Ended; Baidu Responds

2022-08-12
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (an autonomous driving car) in a traffic accident that damaged the vehicle, a form of harm to property even though no one was injured. The AI system was in use at the time; the collision was caused by a human driver striking the autonomous vehicle from behind. A direct event involving an AI system that led to harm meets the criteria for an AI Incident. It is not merely a potential hazard or complementary information, as the harm has occurred.

Ministry of Transport Plans to Encourage Autonomous Vehicles in Taxi Passenger Service

2022-08-09
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (autonomous driving vehicles) and their use in transportation. However, it does not describe any harm or incident resulting from these AI systems, nor does it indicate a plausible imminent harm. Instead, it reports on regulatory efforts, safety guidelines, pilot projects, and commercial deployments aimed at safely integrating autonomous vehicles into public transport. This fits the definition of Complementary Information, as it enhances understanding of AI deployment and governance without reporting a new incident or hazard.

XPeng P7 Hits Stalled Car on Elevated Road: Is the Assisted Driving Function "Taking the Blame"? These Situations Cannot Be Recognized, and the Driver Must Take Over Immediately

2022-08-11
和讯网
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's driver assistance system is an AI system designed to assist driving by detecting lane lines and controlling the vehicle accordingly. The accident occurred because the system did not recognize a stationary broken-down vehicle and a person behind it, leading to a fatal collision. This is a direct consequence of the AI system's malfunction or limitation in recognizing static obstacles, which is explicitly mentioned. The harm (death and injury) has occurred and is directly linked to the AI system's failure to alert or control the vehicle to avoid the collision. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction directly led to injury and death.

Driverless Driving in China: One Foot Already in the Future?

2022-08-10
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in autonomous vehicles being used commercially without safety drivers, which fits the definition of an AI system. The event involves the use and deployment of these AI systems in real-world conditions. Although no harm or incidents are reported, the commercial operation of fully driverless vehicles on public roads inherently carries plausible risks of harm (e.g., accidents, injury). Thus, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident in the future. It is not an AI Incident because no harm has yet occurred, nor is it merely Complementary Information or Unrelated, as the article focuses on the deployment and regulatory approval of AI systems with potential safety implications.

Morning Report | Lei Jun Unveils Xiaomi's Autonomous Driving for the First Time / iPhone 14 May Launch Early / XPeng Responds to Assisted Driving Accident

2022-08-12
爱范儿
Why's our monitor labelling this an incident or hazard?
The Xiaomi autonomous driving system and Xiaopeng's XPilot system both involve AI technologies for autonomous or assisted driving. The fatal collision involving the Xiaopeng P7 with LCC active directly caused injury and death, fulfilling the harm criteria for an AI Incident. The AI system's failure to detect or warn about the stationary vehicle and prevent the collision is a malfunction or limitation leading to harm. The article explicitly links the AI system's use to the accident and resulting fatality, meeting the definition of an AI Incident. Other parts of the article about product launches, research, or unrelated topics do not affect this classification.

Telling You in Advance: These Are the Possibilities for the Xiaomi Car in 2024

2022-08-12
爱范儿
Why's our monitor labelling this an incident or hazard?
The article focuses on Xiaomi's ongoing development and testing of AI systems for autonomous driving, describing technical progress and future plans without mentioning any realized harm, malfunction, or misuse of the AI systems. There is no indication of injury, rights violations, property damage, or other harms caused or plausibly caused by the AI. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it provides contextual information about AI development and corporate strategy, fitting the definition of Complementary Information.

Xiaomi Car-Making Update: First Batch of 140 Autonomous Driving Test Vehicles, Entering the First Tier the Year After Next

2022-08-12
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Xiaomi's autonomous driving technology, which qualifies as an AI system given its autonomous vehicle capabilities. The content focuses on development progress and testing, with no harm or accident yet linked to the system. Since no harm has occurred and the event only signals potential future deployment, it fits the definition of an AI Hazard: the autonomous driving system could plausibly lead to harm once deployed widely. It is not Complementary Information or Unrelated content, as the main focus is the AI system's development and testing progress and the potential future risk it carries.

XPeng Vehicle Sends Man Flying on Elevated Road; Traffic Police: The Driver Has Been Summoned

2022-08-11
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event describes a traffic accident where an AI system (the LCC assisted driving feature) was activated and may have contributed to the collision that caused injury or death. The AI system is explicitly mentioned as being in use, and the harm (person hit and injured/killed) has occurred. Although the investigation is ongoing and the exact cause is not confirmed, the AI system's involvement in the incident is direct and pivotal. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's use.

Another Tragedy! With This XPeng Crash, the Autonomous Driving Bubble Should Burst

2022-08-11
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (the LCC assisted driving feature) whose malfunction or limitations contributed to a fatal accident, causing direct harm to a person. The AI system's failure to detect a hazard and the driver's overreliance on the system are central to the incident. This meets the definition of an AI Incident as the AI system's use and malfunction directly led to injury and death. The article also references prior incidents and regulatory actions, but the primary focus is the fatal accident caused by the AI system's failure and misuse, not just complementary information or potential hazards.

"Luobo Kuaipao" Driverless Car in a Crash? Baidu Responds: Rear-Ended by the Vehicle Behind During Normal Testing

2022-08-12
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (the autonomous driving system of the 'Luobo Kuaipao' vehicle) actively in use during the accident. The harm (vehicle damage) is direct and material, resulting from the AI system's operation in a real traffic scenario. Although the collision was caused by a rear vehicle, the autonomous vehicle's involvement and the fact it was operating without a safety driver make this an AI Incident. The event meets the criteria of harm to property caused directly or indirectly by the AI system's use.

XPeng Vehicle Kills a Pedestrian: Whose Fault Is It?

2022-08-11
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: the Xiaopeng P7's lane keeping assist system (XPILOT 2.5), which uses AI-based perception and control to assist driving. The accident caused death and injury, fulfilling the harm criteria. The AI system's limitations in perception and driver monitoring, combined with the driver's distraction, directly led to the fatal collision. The article also references similar AI-assisted driving accidents, reinforcing the link between AI system use/malfunction and harm. Thus, this qualifies as an AI Incident under the OECD framework.

XPeng P7 Kills a Pedestrian While in Assisted Driving Mode: Who Should Bear Responsibility?

2022-08-12
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the LCC assisted driving system) that was active during a fatal accident. The AI system's failure to detect a stationary obstacle and to alert or slow the vehicle contributed indirectly to the death of a person. The harm (fatal injury) has occurred, and the AI system's malfunction or limitation is a contributing factor. This fits the definition of an AI Incident, as the AI system's use directly or indirectly led to harm to a person. The event is not merely a potential risk or a complementary update but a realized harm involving AI.

XPeng P7 Hits Stalled Car on Elevated Road: Is the Assisted Driving Function "Taking the Blame"?

2022-08-11
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's driver assistance system (XPILOT with LCC and ACC) is an AI system designed to assist driving by detecting lane lines and controlling the vehicle's steering and speed. The accident occurred because the AI system did not recognize a stationary vehicle and a person on the road, failing to alert the driver or reduce speed, which directly led to a fatal collision. The involvement of the AI system in the development, use, and malfunction phases is clear. The harm (death and injury) is directly linked to the AI system's failure, meeting the criteria for an AI Incident. The article also discusses the system's known limitations and prior similar incidents, reinforcing the classification.

NBD Commentary | Autonomous Driving Still Has Homework to Do Before Commercial Rollout

2022-08-10
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous driving technology. It addresses the use and development of these AI systems and the associated risks and regulatory challenges. However, it does not report any actual harm or incidents caused by AI systems, nor does it describe a specific event where harm occurred or was narrowly avoided. Instead, it focuses on the potential risks, regulatory gaps, and the need for further work before full commercialization. Therefore, the event is best classified as an AI Hazard, as the use and deployment of autonomous driving AI systems could plausibly lead to harm if not properly managed, but no specific incident of harm is reported.

500 Days Into Car-Making, Xiaomi Discloses Its Autonomous Driving Progress for the First Time

2022-08-12
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article focuses on Xiaomi's development and testing of autonomous driving AI technology, which qualifies as an AI system. However, it does not report any realized harm or incidents resulting from the use or malfunction of this AI system. Nor does it present a credible or imminent risk of harm that could plausibly lead to an AI Incident. Instead, it provides an update on the progress and plans of Xiaomi's AI autonomous driving efforts, which fits the definition of Complementary Information as it enhances understanding of the AI ecosystem and ongoing developments without describing an incident or hazard.

Lei Jun on His Car-Making Goal: Enter the Industry's First Tier by 2024, With 140 Autonomous Driving Test Vehicles Planned in Phase One

2022-08-11
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article describes Xiaomi's development and testing of autonomous driving technology, which involves AI systems for vehicle navigation and control. However, there is no mention of any harm, malfunction, or risk that has materialized or is imminent. The information is about progress and plans, without indication of incidents or hazards. Therefore, this is complementary information about AI development and deployment.

Xiaomi Car-Making Update: First Batch of 140 Autonomous Driving Test Vehicles, Entering the First Tier the Year After Next

2022-08-11
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article describes Xiaomi's development and testing of autonomous driving AI systems, which are AI systems by definition. However, there is no mention of any harm or incident caused by these systems yet. The event is about ongoing development and testing, with potential future risks inherent in autonomous driving technology, but no realized harm is reported. Therefore, this qualifies as an AI Hazard because the autonomous vehicles could plausibly lead to incidents in the future, but no incident has occurred yet.

No Steering Wheel, Brake, or Accelerator Pedals: The Robobus Market May Reach the Hundred-Billion-Yuan Scale

2022-08-10
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article describes AI systems (L4 autonomous driving technology) and their intended use (Robobus replacing human drivers). However, it does not report any realized harm or incidents caused by these AI systems. Instead, it discusses future commercialization, technological development, and market potential, which aligns with providing context and updates about AI ecosystem developments. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Chongqing and Wuhan Take the Lead in Launching Driverless Commercial Pilot Operations

2022-08-09
财经网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving vehicles) and their use in commercial pilot operations, but there is no indication of any realized harm or incident caused by these AI systems. The article primarily discusses regulatory and policy frameworks enabling the deployment of autonomous vehicles, as well as industry developments and future plans. This fits the definition of Complementary Information, as it provides context and updates on AI system deployment and governance without reporting any AI Incident or AI Hazard.

XPeng Responds to the Fatal Ningbo Elevated-Road Collision: Fully Cooperating With the Investigation and Continuing to Follow Up

2022-08-11
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (assisted driving features such as LCC and possibly NGP) whose failure to detect a hazard on the road indirectly led to a fatal accident. The AI system's malfunction or limitation in perception contributed to the harm. This fits the definition of an AI Incident because the AI system's use directly or indirectly led to injury or harm to a person. Although the driver was reportedly distracted, the AI system's failure to recognize the hazard is a contributing factor. Therefore, this is classified as an AI Incident.

China Youth Daily: Who Will Keep a Steady Hand on the "Steering Wheel" of Autonomous Driving?

2022-08-11
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of L4 autonomous driving technology used in commercial driverless vehicles. It describes the use and deployment of these AI systems on public roads without safety drivers, which inherently carries risks of accidents or safety incidents. Although no actual harm or accident is reported, the article discusses the potential for such harms and the regulatory measures being introduced to mitigate these risks. This fits the definition of an AI Hazard, where the development and use of AI systems could plausibly lead to harm. It is not an AI Incident because no realized harm or accident is described. It is not Complementary Information because the article's main focus is on the deployment and regulatory environment of autonomous driving, highlighting potential risks rather than just updates or responses to past incidents. It is not Unrelated because the event clearly involves AI systems and their societal implications.

Xiaomi's Autonomous Driving Technology Unveiled for the First Time: 3.3 Billion Yuan First-Phase R&D Investment

2022-08-12
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system, specifically autonomous driving technology with AI algorithms for perception, decision-making, and control. The disclosed testing and development progress indicate active use and advancement of AI systems. However, there is no indication of any realized harm, malfunction, or legal or ethical violations resulting from the AI system's use. There is also no explicit or implicit suggestion of plausible future harm or hazards. Therefore, the event does not qualify as an AI Incident or AI Hazard. Instead, it is a complementary information update about the AI ecosystem and technological progress in autonomous driving.

XPeng Rear-End Collision Causes Casualties; Traffic Police: Cannot Yet Confirm the Assisted Driving Function Was the Cause | 潇湘晨报网

2022-08-11
xxcb.cn
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (the LCC driver assistance feature) that could have contributed to the harm (injury and death) but there is no confirmation yet that the AI system caused or contributed to the accident. Since harm has occurred, and the AI system's involvement is plausible but unconfirmed, this event is best classified as an AI Incident with ongoing investigation. The presence of injury and death linked to a vehicle with AI-assisted driving features meets the criteria for an AI Incident, even if causation is not yet established, because the AI system's use is directly related to the event and harm.

XPeng Responds to the Fatal Collision: Fully Cooperating With the Investigation

2022-08-11
早报
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (assisted driving features) whose malfunction or failure to detect hazards directly led to a fatal accident, causing harm to a person. This fits the definition of an AI Incident as the AI system's use and malfunction directly caused injury and death. The involvement of the AI system is clear, and the harm is realized, not just potential. Therefore, this is classified as an AI Incident.

XPeng P7's Fatal Crash in Ningbo: Why Did the Intelligent Driver Assistance Fail to Avoid a Stationary Vehicle?

2022-08-12
杭州网
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's intelligent driver assistance system (XPILOT 2.5) is an AI system that uses sensors and algorithms to assist driving. The accident occurred while the system was active, and it failed to detect a stationary vehicle and pedestrian, causing a fatal collision. The failure to recognize static obstacles due to sensor and algorithmic limitations is a malfunction of the AI system. This malfunction directly led to injury and death, fulfilling the criteria for an AI Incident under the OECD framework. The event is not merely a hazard or complementary information, as the harm has already occurred and is directly linked to the AI system's use and malfunction.

Fatal "Elevated-Road Collision" While in Assisted Driving Mode: XPeng Responds

2022-08-11
华商网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the LCC driver assistance feature) actively engaged during the accident. The system failed to recognize a hazard and did not provide a warning, which, along with driver distraction, led to a fatal collision. This meets the criteria for an AI Incident because the AI system's malfunction indirectly caused harm to a person (death). The description clearly links the AI system's use and its limitations to the incident, fulfilling the definition of an AI Incident.

On the enormous effort regions across China are now pouring into commercial autonomous driving tests, Gu Dasong argues that the task at this stage should still focus on "technical validation" rather than rushing to discuss commercial applications. [Full text]

2022-08-10
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on policy guidance and expert commentary regarding autonomous driving technology development and commercialization. It does not describe any realized harm or incident caused by AI systems, nor does it present a credible imminent risk of harm. The focus is on safety guidelines, regulatory frameworks, and the need for technical validation before commercial use. Therefore, it qualifies as Complementary Information, providing important context and updates on AI governance and development without reporting an AI Incident or AI Hazard.

XPeng Responds to Suspected Assisted Driving Accident: Fully Cooperating With the Investigation and Assisting With Follow-Up Matters

2022-08-11
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event describes a collision involving a vehicle with a suspected active assisted driving feature, which is an AI system that influences vehicle control. The accident caused injury and death, fulfilling the harm criteria. The AI system's use is directly linked to the incident, making it an AI Incident. The company's response is complementary but does not change the classification of the event itself.

Baidu Maps X Apollo "Autonomous-Driving-Grade Navigation" Experience Launches in Beijing's Yizhuang

2022-08-12
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Baidu Maps integrated with Apollo autonomous driving AI) used in navigation and traffic management. The article focuses on the rollout and benefits of this AI-enabled navigation system, emphasizing improved safety, efficiency, and user experience. There is no mention or implication of any injury, rights violation, infrastructure disruption, or other harms caused or plausibly caused by the AI system. The event does not describe any incident or hazard but rather an advancement and deployment of AI technology with positive outcomes. Hence, it fits the definition of Complementary Information, as it provides supporting data and context about AI system use and societal impact without reporting harm or risk.

Baidu's Luobo Kuaipao Responds to Test Vehicle Accident: Rear-Ended by the Vehicle Behind During Normal Testing

2022-08-12
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system, specifically an autonomous test vehicle operated by Baidu's Luobo Kuaipao service. The accident occurred while the AI system was driving and resulted in a traffic collision that damaged property, even though no one was injured. The AI system was not at fault; the distracted driver of the rear vehicle caused the collision. Because the AI system was operating and directly involved in the harmful event, the case qualifies as an AI Incident.

Breaking! XPeng P7 Kills Pedestrian on Elevated Road, Latest Response! Star Fund Managers' Top Holdings Revealed! Ge Lan, Xie Zhiyu, and Zhu Shaoxing Add to These Stocks

2022-08-11
金融界网
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's LCC system is an AI-based driver assistance system that failed to recognize a stationary vehicle and pedestrian, leading to a fatal collision. The system's inability to detect certain static obstacles is explicitly mentioned, and the driver was using the system at the time. The harm (death) is directly linked to the AI system's malfunction or limitations. This meets the definition of an AI Incident because the AI system's use and malfunction directly led to injury and death (harm to a person). The article also discusses prior similar incidents and regulatory scrutiny of similar AI driving systems, reinforcing the classification.

Baidu "Luobo Kuaipao" Driverless Taxi in a Crash; Response: Rear-Ended by the Vehicle Behind, No Injuries

2022-08-12
金融界网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous taxi) that was operating autonomously when it was involved in a traffic accident. The accident caused damage to the vehicle (harm to property) but no injuries. The AI system's use is directly linked to the incident since the vehicle was in autonomous mode. Although the accident was caused by another vehicle, the AI system's involvement in the event and the resulting property damage meet the criteria for an AI Incident. There is no indication that this is merely a potential hazard or complementary information, and the harm is realized, not just plausible.

Do Autonomous Driving Companies Have to Build Cars?

2022-08-11
金融界网
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving AI for heavy trucks. However, it does not describe any realized harm, injury, rights violation, or disruption caused by these AI systems. Instead, it focuses on the development progress, technical challenges, and potential benefits of AI-enabled autonomous trucks. There is no indication of an AI incident or hazard event occurring or imminent. The article serves as complementary information about AI system development and industry trends, rather than reporting an incident or hazard.

Xiaomi Holds Fall Launch Event: First Humanoid Bionic Robot CyberOne Unveiled, Latest Progress on Xiaomi's Autonomous Driving Disclosed for the First Time

2022-08-12
金融界网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of autonomous driving technology and a humanoid robot with AI capabilities. However, it does not describe any harm or malfunction caused by these systems. The content is mainly about product announcements, technological progress, and future plans. Since no harm has occurred and no credible immediate risk is described, this does not qualify as an AI Incident or AI Hazard. Instead, it provides complementary information about AI developments and corporate strategy, fitting the definition of Complementary Information.

500 Days Into Xiaomi's Car Project, Industry Insiders: No Surprises in Its Autonomous Driving Technology

2022-08-12
金融界网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Xiaomi's autonomous driving technology) in development and testing phases. The article reports on the system's current capabilities, limitations, and industry opinions but does not describe any realized harm or incidents caused by the AI system. There is no indication that the AI system has malfunctioned or led to injury, rights violations, property damage, or other harms. While there are concerns about future capabilities and safety, the article does not present a credible or imminent risk of harm that would qualify as an AI Hazard. Instead, it provides complementary information about the state of Xiaomi's AI autonomous driving efforts and industry perspectives.

Lei Jun Unveils Xiaomi's Autonomous Driving Technology: Full-Stack Self-Developed Algorithms, Targeting the First Tier by 2024

2022-08-11
金融界网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an autonomous driving technology with self-developed algorithms. The article details ongoing use and testing but does not report any actual harm or incidents. However, autonomous driving systems have inherent risks that could plausibly lead to injury or other harms if failures occur. Since the article focuses on development progress and testing without harm, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Lei Jun: Xiaomi's Cars Will Use Full-Stack Self-Developed Algorithms for Autonomous Driving

2022-08-11
China Finance Online
Why's our monitor labelling this an incident or hazard?
The content focuses on Xiaomi's development and future deployment of AI-based autonomous driving systems without any indication of harm, malfunction, or misuse. There is no report of injury, rights violations, or other harms caused or plausibly caused by the AI system at this stage. The article is primarily an update on the company's AI development progress and plans, which fits the definition of Complementary Information rather than an Incident or Hazard.

小鹏P7撞人致死,疑似辅助驾驶识别出错,小鹏汽车称将配合调查

2022-08-11
金融界网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the LCC assisted driving feature) that failed to detect a hazard, leading to a fatal collision. The AI system's malfunction directly contributed to the harm (death of a person). This fits the definition of an AI Incident because the AI system's malfunction has directly led to injury or harm to a person. The manufacturer's confirmation and ongoing investigation further support the classification as an AI Incident rather than a hazard or complementary information.

Beijing's First Smart Mixer Trucks Enter Service; JD Allianz Insurance Underwrites Them, Easing the Insurance Challenge for Heavy Commercial Vehicles

2022-08-12
金融界网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (automatic braking and blind spot monitoring) deployed in heavy commercial vehicles that are actively used on roads. These systems are intended to reduce traffic accidents, a significant source of harm to persons and property. The monitor classifies this as an AI Incident because the systems' use directly affects the prevention of harm and the management of risks associated with accidents involving heavy vehicles.

It's Official! Autonomous Driving Goes Commercial; JMC, Qingling, and Qianchen Poised to Leapfrog the Competition

2022-08-10
163.com
Why's our monitor labelling this an incident or hazard?
The article describes the development and regulatory encouragement of autonomous driving technology for commercial use, which involves AI systems. However, it does not report any actual harm, malfunction, or misuse resulting from these AI systems. Nor does it describe any credible or imminent risk of harm. Instead, it focuses on the regulatory framework, technological advancements, and industry perspectives. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is not a routine product launch or feature update but provides important contextual information about AI deployment and governance. Hence, it fits best as Complementary Information, enhancing understanding of the AI ecosystem and its governance without reporting new harm or risk.

Baidu 'Luobo Kuaipao' Robotaxi in Accident; Response: Rear-Ended by the Vehicle Behind, No Injuries

2022-08-12
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an autonomous driving system used in Baidu's 'Luobo Kuaipao' unmanned taxi service. The incident was caused by a rear vehicle hitting the autonomous taxi, leading to property damage but no injuries. Since the AI system was in use and involved in the accident, and there was realized harm to property, this qualifies as an AI Incident. The harm is direct property damage caused during the use of the AI system. There is no indication that the AI malfunctioned or caused the accident, but the AI system's involvement in the event and the resulting harm to property meet the criteria for an AI Incident.

XPeng and Tesla Vehicles Crash on the Same Day

2022-08-12
163.com
Why's our monitor labelling this an incident or hazard?
Both Tesla and XPeng vehicles use AI-powered assisted driving systems. The accidents, including a serious injury caused while the driver was using assisted driving, demonstrate direct harm linked to the use of AI systems. The involvement of AI in the vehicles' operation and the resulting physical harm to people meet the criteria for an AI Incident under the framework, as the AI system's use has directly or indirectly led to injury and harm to persons.

XPeng Responds to the Fatal Ningbo Elevated-Road Accident: Deeply Saddened, Will Fully Cooperate with the Investigation

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (LCC lane centering and ACC adaptive cruise control) that was active during the accident. The system failed to recognize a stationary vehicle and personnel ahead, leading to a fatal collision. The harm (death) is directly linked to the AI system's malfunction or limitation in hazard detection. Although the driver was distracted, the AI system's failure to detect the hazard played a pivotal role. This meets the criteria for an AI Incident as the AI system's use directly led to injury and death.

Ministry of Transport Plans to Encourage Autonomous Vehicles in Taxi Passenger Services

2022-08-09
163.com
Why's our monitor labelling this an incident or hazard?
The article centers on the Ministry of Transport's proposed regulatory framework and encouragement for autonomous vehicle deployment in passenger transport. It involves AI systems (autonomous driving) but does not report any actual harm, malfunction, or misuse leading to injury, rights violations, or other harms. It also does not describe a credible risk of imminent harm or hazard from these systems. Instead, it provides complementary information about policy, safety requirements, and ongoing pilot projects, which helps understand the evolving AI ecosystem in transportation. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

XPeng Vehicle Hits and Kills a Person on a Ningbo Elevated Road — Has Assisted Driving Caused Trouble Again?

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—an advanced driving assistance system using visual and radar fusion for perception. The system's failure to detect a stationary vehicle and a person directly led to a fatal accident, constituting injury and harm to a person. The description includes details about the AI system's limitations and the driver's partial distraction, but the AI system's malfunction is a pivotal factor. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and malfunction directly caused harm to a person.

Traffic Police Respond to Fatal XPeng Accident: Cannot Yet Confirm the Assisted-Driving Function Was the Cause

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (assisted driving) potentially linked to a fatal accident, which constitutes harm. However, since the investigation has not confirmed that the AI system caused or contributed to the harm, the event currently represents a plausible risk rather than a confirmed incident. Therefore, it fits the definition of an AI Hazard, as the AI system's malfunction or use could plausibly lead to harm, but this is not yet established.

XPeng P7 Owner Hits a Person While Using Assisted Driving on an Elevated Bridge; Automaker Confirms Casualties

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the LCC driver assistance feature) whose malfunction contributed to a fatal collision. The system failed to recognize a hazard and did not provide a warning, which directly led to injury and death. The harm is realized and significant, involving injury and fatality. The AI system's role is pivotal as it was intended to assist driving but failed to prevent the accident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Xiaomi's First Batch of 140 Autonomous Test Vehicles Begins Testing; Lei Jun: Industry First Tier the Year After Next

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving technology, which qualifies as AI systems under the definitions. However, there is no indication of any harm, malfunction, or misuse that has occurred or is occurring. The testing is ongoing, and the article focuses on development progress and future goals rather than any realized or imminent harm. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and updates on AI system development and testing in the automotive sector without reporting any harm or risk of harm.

Baidu 'Luobo Kuaipao' in an Accident? Response: Rear-Ended Due to Distracted Driving by the Vehicle Behind

2022-08-12
163.com
Why's our monitor labelling this an incident or hazard?
The Baidu 'Luobo Kuaipao' vehicle, a test vehicle from Baidu's autonomous driving service, operates an AI system for autonomous or assisted driving. However, the accident was caused by a distracted human driver rear-ending the vehicle, not by any malfunction or failure of the AI system itself. No harm resulted from the AI system's development, use, or malfunction, so this is neither an AI Incident nor an AI Hazard. It is a factual report about an accident involving an AI system, without AI fault or plausible future harm from the system, and is therefore classified as Complementary Information, providing context about the system's operation and incident response.

Luobo Kuaipao Robotaxi in Another 'Accident'; Baidu Responds: It Was Rear-Ended

2022-08-12
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an autonomous driving system used in a commercial robotaxi service. The incident is a traffic accident in which the autonomous vehicle was rear-ended by another vehicle. Although the AI system was not at fault, it was in operation and directly involved in a harmful event (a traffic collision), which is why the monitor classifies this as an AI Incident even though no injuries occurred.

Lei Jun Delivers: 100 Million Yuan per Feature! A Frame-by-Frame Look at Xiaomi's Autonomous Driving Performance

2022-08-13
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Xiaomi's autonomous driving technology) and its development and use in testing scenarios. However, the article does not report any injury, property damage, rights violations, or other harms caused by the AI system. It mainly presents a demo video and analysis of the system's capabilities and limitations, along with investment and team information. There is no indication that the AI system has directly or indirectly led to harm, nor that it poses an immediate plausible risk of harm. Therefore, this is not an AI Incident or AI Hazard. Instead, it provides complementary information about the AI system's development progress and industry positioning, which fits the definition of Complementary Information.

XPeng P7 Hits a Stalled Vehicle on an Elevated Road — Is the Assisted-Driving Function Taking the Blame? These Situations Cannot Be Recognized and Require the Driver to Take Over Immediately

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's assisted driving system is an AI system designed to assist with vehicle control. The accident occurred while the system was active and failed to detect a stationary vehicle and pedestrian, leading to a fatal collision. This is a direct harm caused by the malfunction or limitation of the AI system's perception capabilities. The event involves injury and death (harm to persons), which meets the definition of an AI Incident. The article also references prior similar incidents and official investigations, confirming the AI system's role in the harm. Therefore, this event is classified as an AI Incident.

Fatal XPeng Intelligent-Driving Crash: High-Speed Collision with a Stationary Vehicle in Good Lighting, System Failure

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the intelligent driving system was active and failed to detect a stationary road maintenance vehicle, did not alert the driver, and did not apply braking, leading to a fatal collision. The AI system's failure to perform its safety functions directly caused harm to a person, fulfilling the criteria for an AI Incident. The involvement of multiple sensors and the AI perception stack is detailed, confirming the AI system's central role in the incident. The harm is realized and severe (death), and the AI system's malfunction is the pivotal factor.

What Has Xiaomi's Car Project Achieved in 500 Days? Lei Jun: Beyond Expectations

2022-08-12
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving technology) in development and testing phases. However, there is no indication of any injury, rights violation, disruption, or other harm caused or plausibly caused by the AI system. The article is primarily an update on Xiaomi's AI development progress and capabilities, without reporting any incident or credible risk of harm. Therefore, it qualifies as Complementary Information, providing context and updates on AI system development and testing without describing an AI Incident or AI Hazard.

Report: XPeng P7 Owner Using Assisted Driving on an Elevated Road Hits and Kills a Person! XPeng Responds

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the assisted driving system) whose use directly led to a fatal accident, fulfilling the criteria for an AI Incident. The AI system's malfunction or failure to warn contributed to the collision and resulting death, which is a clear harm to a person. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Netizen Reports XPeng Salesperson Demonstrating G3 Intelligent Driving to a Customer Rear-Ends the Car Ahead at 70 km/h

2022-08-13
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the adaptive cruise control (ACC), which is part of the vehicle's intelligent driving features. The accident occurred because the AI system failed to prevent a rear-end collision at 70 km/h, indicating a malfunction or misuse during its operation. This directly caused harm to property (vehicle damage) and posed a risk to human safety, fulfilling the criteria for an AI Incident. The involvement of the AI system in causing the collision is clear and direct, and the harm is realized (collision and damage).

'Autonomous Driving' Accidents Keep Happening — Who Bears the Cost of Technological Progress?

2022-08-12
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems in vehicles (automatic driving and assisted driving features) that malfunctioned or failed to perform as intended, resulting in fatal accidents and injuries. The harms are direct and severe (loss of life, injury). The involvement of AI systems in these incidents is clear, as the accidents occurred while the AI driving assistance was active and failed to detect hazards. Regulatory scrutiny and public criticism further confirm the significance of these harms. Therefore, this qualifies as multiple AI Incidents under the framework, as the AI systems' use directly led to harm to persons.

XPeng Owner Using Assisted Driving Rear-Ends Vehicle, Killing One; Expert: The AEB Function May Have Failed

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the intelligent driving assistance system with LCC and AEB functions) whose malfunction or failure to act contributed directly to a fatal accident. The harm (death) has occurred, and the AI system's failure to perform as intended is a pivotal factor. Therefore, this qualifies as an AI Incident under the framework, as it involves the use and malfunction of an AI system leading directly to injury or harm to a person.

XPeng Stumbles Repeatedly — Can Assisted Driving Escape the Blame?

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the LCC assisted driving mode) that was active and failed to detect a stationary vehicle, leading to a fatal collision. This is a direct malfunction of an AI system causing injury and death, which fits the definition of an AI Incident. The article discusses the AI system's limitations and the accident's circumstances, confirming the AI system's role in the harm. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Autonomous Driving on the Eve of Commercialization: Legislation Is on the Way

2022-08-13
网易车讯
Why's our monitor labelling this an incident or hazard?
The article centers on legislative and policy progress regarding autonomous driving technology, which is an AI system. However, it does not describe any realized harm or incident caused by AI, nor does it report a near miss or plausible future harm event. Instead, it provides complementary information about the evolving legal and regulatory landscape, public acceptance concerns, and the challenges of assigning liability in accidents involving autonomous vehicles. Therefore, it fits the definition of Complementary Information as it enhances understanding of AI ecosystem developments and governance responses without describing a new AI Incident or AI Hazard.

An Early Look: The Possibilities for Xiaomi's 2024 Car

2022-08-13
163.com
Why's our monitor labelling this an incident or hazard?
The article focuses on Xiaomi's ongoing development and testing of AI systems for autonomous driving and related technologies. While it involves AI systems and their use, there is no indication of any harm, malfunction, or violation resulting from these AI systems. The discussion is about progress, potential, and challenges, without any direct or indirect harm occurring or plausible harm imminently expected. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it provides contextual and developmental information about AI in automotive applications, fitting the definition of Complementary Information.

XPeng Responds to Fatal P7 Assisted-Driving Accident on an Elevated Road: First Principle Is to Support the Affected Party and Cooperate Fully with the Investigation

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the LCC driving assistance feature) whose use and failure to prevent the collision directly caused a fatality. The system's malfunction or insufficient alerting, combined with driver distraction, led to the death of a person, which is a clear harm to health. Therefore, this qualifies as an AI Incident under the definition of an event where the use or malfunction of an AI system has directly led to injury or harm to a person.

XPeng P7 Hits a Stalled Vehicle on an Elevated Road — Is the Assisted-Driving Function Taking the Blame?

2022-08-13
网易车讯
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's driver assistance system is an AI system designed to assist driving by detecting lane lines and controlling the vehicle's position. The system's failure to recognize a stationary vehicle and a person on the road, combined with the driver's distraction, led to a fatal collision. This constitutes direct harm to a person caused by the AI system's malfunction or limitation. The article provides sufficient evidence of realized harm (death and injury) linked to the AI system's use and failure, meeting the criteria for an AI Incident. The discussion of regulatory actions and warnings further supports the significance of the incident.

XPeng P7 Owner Using Assisted Driving Rear-Ends and Kills a Person; XPeng Responds: Fully Cooperating with the Investigation

2022-08-11
163.com
Why's our monitor labelling this an incident or hazard?
The incident explicitly involves an AI system (the intelligent driving assistance feature) whose malfunction directly caused a fatal collision. The vehicle did not recognize the stationary hazard and did not decelerate, leading to the death of a person. This meets the criteria for an AI Incident because the AI system's malfunction directly led to injury and death, which is harm to a person. The manufacturer's response and ongoing investigation do not change the classification of the event as an AI Incident.

Chongqing and Wuhan Issue New Robotaxi Policies, Matching California's Level of Openness

2022-08-09
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving vehicles) and their commercial deployment, but no harm or malfunction is reported. The article describes regulatory approvals and technological capabilities, which could plausibly lead to future AI incidents if problems arise, but currently no harm has occurred. Therefore, it fits the definition of Complementary Information, as it provides important context and updates on AI system deployment and governance without describing an AI Incident or AI Hazard.

500 Days After Announcing Its Car Project, Xiaomi's Vehicle Makes Its 'Debut'

2022-08-12
163.com
Why's our monitor labelling this an incident or hazard?
The article focuses on Xiaomi's progress and ambitions in AI-driven autonomous vehicles and robotics, describing features and testing plans without any mention of harm, malfunction, or risks materializing. There is no indication of direct or indirect harm caused by the AI system, nor credible warnings of plausible future harm. Therefore, it does not meet the criteria for AI Incident or AI Hazard. It is best classified as Complementary Information as it provides contextual updates on AI development and deployment plans.

Hard-Tech Investment Watch | Policies Encourage Enterprise Robotics Applications; CATL Plans a €7.34 Billion Plant in Hungary

2022-08-13
China Finance Online
Why's our monitor labelling this an incident or hazard?
The article mainly provides updates on policies encouraging AI and robotics applications, investment rounds in AI-related companies, and industrial projects. There is no mention of any incident or hazard involving AI systems causing or potentially causing harm. The information serves to contextualize the AI ecosystem and ongoing developments rather than reporting on an AI Incident or AI Hazard. Therefore, it fits the category of Complementary Information as it enhances understanding of the AI landscape without describing a specific harm or risk event.

DMS: Who Is It Monitoring?

2022-08-11
China Finance Online
Why's our monitor labelling this an incident or hazard?
The DMS is an AI system monitoring driver behavior to prevent fatigue-related accidents. The article reports real cases where the AI system misclassifies driver states, causing false alarms and user dissatisfaction. These misclassifications represent malfunctions of the AI system that have directly led to harm in terms of user trust and potential safety risks (e.g., drivers turning off safety features). Therefore, this event meets the criteria for an AI Incident due to the AI system's malfunction causing realized harm, even if physical injury is not reported. The article also discusses responses and improvements, but the primary focus is on the incident of misclassification and its consequences.

XPeng Responds to Fatal P7 Assisted-Driving Accident

2022-08-12
China Finance Online
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's driver assistance system is an AI system that uses sensor fusion and autonomous decision-making to assist driving. The failure to warn and prevent collision while the system was active directly caused a fatal accident, meeting the criteria for an AI Incident due to injury and death caused by AI system malfunction during use. The event is not merely a potential hazard or complementary information but a realized harm linked to AI system malfunction.

Morning Briefing: First Domestic Oral COVID-19 Drug Priced at 270 Yuan per Bottle, Already Shipped to Henan, Xinjiang, Hainan, and Elsewhere

2022-08-12
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the assisted driving function in the Xpeng P7 car) whose use directly led to a collision causing injury. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person. The article describes the harm as having occurred and the ongoing investigation, confirming realized harm rather than potential harm. Therefore, the event is classified as an AI Incident.

Danger! XPeng P7 Kills a Person on an Elevated Road — Who Is Responsible for This Accident? Is Autonomous Driving Really 'Autonomous'?

2022-08-11
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (the assisted driving system with LCC and ACC) whose malfunction or limitation in perception directly led to a fatal accident (harm to a person). The AI system failed to detect a stationary vehicle and a person, causing a collision and death. The article also references previous similar incidents and regulatory context, but the core event is a realized harm caused by the AI system's use. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Fully Driverless Commercial Autonomous Driving Operations Begin Trials; Chongqing and Wuhan Take the Lead

2022-08-09
中国经济网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving AI) being used in commercial operations without safety drivers, which is a significant development in AI deployment. However, the article does not describe any realized harm, injury, rights violations, or disruptions caused by these AI systems. It mainly discusses the initiation of these services, regulatory support, and technological advancements. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it provides complementary information about AI ecosystem developments, policy, and deployment progress, which fits the definition of Complementary Information.

Apollo Fully Powers Baidu Maps; All-New Version Debuts in Beijing's High-Level Autonomous Driving Demonstration Zone

2022-08-12
中国经济网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving and intelligent navigation technologies) actively used in real-world settings, improving traffic management and user experience. However, there is no indication of any injury, rights violation, disruption, or other harm caused or plausibly caused by these AI systems. The article focuses on the benefits and deployment progress rather than any negative outcomes or risks. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is not merely general AI news but provides complementary information about AI deployment and its societal impact, fitting the definition of Complementary Information.

Driverless Pilot Programs Must Still Hold the Safety Line

2022-08-11
中国经济网
Why's our monitor labelling this an incident or hazard?
The article centers on the development and regulatory environment of autonomous driving systems, which are AI systems, and the potential safety risks associated with their deployment. However, it does not describe any realized harm or accident caused by these AI systems, nor does it report a near-miss or credible imminent risk event. The focus is on the cautious progression and safety considerations in pilot programs and regulatory guidelines. Therefore, it is best classified as Complementary Information, as it provides context and updates on AI system deployment and governance without reporting a new AI Incident or AI Hazard.

China Economic Net Commentary: Driverless Pilot Programs Must Still Hold the Safety Line

2022-08-11
中国经济网
Why's our monitor labelling this an incident or hazard?
The article centers on the development and regulatory environment of autonomous driving AI systems and their pilot programs. It acknowledges potential safety risks and the immaturity of the technology but does not describe any realized harm or direct incident caused by AI. The discussion is forward-looking and policy-oriented, focusing on ensuring safety and managing risks rather than reporting an actual AI-related harm event. Therefore, it fits the definition of Complementary Information, as it provides context, updates, and governance perspectives on AI systems without describing a specific AI Incident or AI Hazard.

Tragedy! XPeng P7 Hits and Kills a Person — Was Assisted Driving at Fault? The Response Is In

2022-08-11
证券时报网
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's LCC system is an AI-based assisted driving feature that helps keep the vehicle centered in the lane. The accident occurred while this system was active, and the system failed to recognize a hazard, contributing to the collision and fatality. The driver’s chat messages indicate reliance on the system and distraction, which aligns with indirect harm caused by the AI system's limitations and the user's misuse. This fits the definition of an AI Incident because the AI system's use and malfunction directly led to injury and death. The event is not merely a potential hazard or complementary information but a realized harm involving an AI system.

Road Testing Underway! Xiaomi Car Progress: 3.3 Billion Yuan Invested in the Autonomous Driving Project

2022-08-11
internet.cnmo.com
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system—specifically, an autonomous driving AI system. However, the article does not describe any realized harm or incidents caused by the AI system. Instead, it details ongoing development, investment, and testing phases, which could plausibly lead to future AI incidents but currently do not report any harm or malfunction. Therefore, this is best classified as Complementary Information, providing context and updates on AI system development and deployment without reporting an incident or hazard.

Hamburg, Germany Expands Autonomous Driving Testing into Logistics

2022-08-12
新民网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous vehicles and delivery robots with AI-based computer vision and decision-making algorithms) in development and testing phases. However, there is no report of any injury, rights violation, disruption, or other harm caused by these AI systems. The article mainly provides information about ongoing research, testing expansions, and regulatory changes facilitating autonomous vehicle testing. Therefore, it does not describe an AI Incident or AI Hazard but rather provides complementary information about AI developments and governance in autonomous transport.

XPeng Responds to Fatal Ningbo Elevated-Road Accident: Fully Cooperating with the Investigation

2022-08-11
青岛新闻
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the assisted driving system LCC) that failed to detect a hazard, leading to a fatal collision. The harm is realized (death of a person), and the AI system's malfunction or limitation is a contributing factor. This fits the definition of an AI Incident because the AI system's use directly led to injury or harm to a person. Although driver distraction is mentioned, the AI system's failure to detect the hazard is central to the incident. Therefore, this event qualifies as an AI Incident.

2022-08-11
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system, specifically an autonomous driving AI system under development and testing by Xiaomi. However, it does not report any realized harm or incidents caused by the AI system. Instead, it focuses on the development progress, investment, team composition, and future ambitions. There is no indication of any injury, rights violation, property damage, or other harms caused or occurring due to the AI system. The article also does not highlight any credible or imminent risk of harm from the AI system at this stage. Therefore, the event is best classified as Complementary Information, providing context and updates on AI system development and ecosystem evolution without describing an AI Incident or AI Hazard.

2022-08-12
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
An AI system (the autonomous taxi) was involved in the incident, specifically during its use phase (testing). The accident was caused by a human driver rear-ending the autonomous vehicle, not by a malfunction or failure of the AI system itself. Since the AI system did not cause or contribute to harm, and no injury or other harm resulted from the AI system's malfunction or use, this does not qualify as an AI Incident. There is no indication of plausible future harm from the AI system in this context beyond normal traffic risks. Therefore, this event is best classified as Complementary Information, providing context on an AI system's involvement in a traffic accident without the AI system being at fault or causing harm.

2022-08-12
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an automatic lane centering assistance feature (LCC) used in Xpeng vehicles. The system's failure to detect a pedestrian and prevent the collision directly caused injury, fulfilling the criteria for an AI Incident (harm to a person). The discussion about legal responsibility and regulatory gaps further supports the significance of the incident. Although the driver admitted distraction, the AI system's failure to act timely is a contributing factor. Hence, this is not merely a hazard or complementary information but a realized AI Incident.

XPeng Responds to Assisted-Driving Accident: Fully Cooperating with the Accident Investigation

2022-08-11
China News
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (the assisted driving system with lane centering control) that was active during the accident and failed to detect a hazard, leading to a collision with injuries and fatalities. The AI system's malfunction or failure to act is directly linked to the harm caused. Therefore, this qualifies as an AI Incident under the definition of harm to persons caused directly or indirectly by the AI system's malfunction or use.

XPeng responds to fatal pedestrian collision on Ningbo elevated road: fully cooperating with the investigation and following up

2022-08-11
China News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, namely the assisted driving system (LCC and possibly NGP) in the Xiaopeng vehicle. The accident resulted in a fatality, which is a direct harm to a person caused by the AI system's failure to detect obstacles and assist properly. The involvement of the AI system in the accident's causation meets the criteria for an AI Incident, as the AI system's malfunction or limitation directly led to injury and death. Therefore, this event qualifies as an AI Incident.

Huayi Technology (688071) recently announced via its official WeChat account that it has formally launched an autonomous driving test base project, mainly covering R&D, testing, and validation services related to automotive autonomous driving.

2022-08-10
证券之星
Why's our monitor labelling this an incident or hazard?
The article reports on the initiation of a testing facility for autonomous driving technologies, which involves AI systems for vehicle autonomy and safety. However, it does not describe any realized harm or incidents caused by these AI systems, nor does it indicate any immediate risk or hazard resulting from the project. The information is about the development and expansion of AI-related testing infrastructure, which provides context to the AI ecosystem but does not itself constitute an incident or hazard. Therefore, this is best classified as Complementary Information.

Guangzhou's open test roads for intelligent connected vehicles up 57.6% from the end of last year

2022-08-12
China News
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems in intelligent connected vehicles and autonomous driving. However, the article does not report any harm or incidents resulting from these AI systems. Instead, it highlights growth, safety records, and regulatory advancements. There is no indication of realized or potential harm that would qualify as an AI Incident or AI Hazard. The content primarily provides contextual and supportive information about AI deployment and governance in the automotive sector, fitting the definition of Complementary Information.

Who will keep a steady hand on the "steering wheel" of autonomous driving?

2022-08-11
China News
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving AI controlling vehicles without human drivers. It discusses the use and commercialization of these AI systems on public roads, including policies and safety guidelines to manage risks. However, it does not describe any actual accident, injury, or violation caused by these AI systems. Instead, it focuses on regulatory frameworks, pilot programs, and the potential for future harm if safety is not properly managed. This fits the definition of an AI Hazard, as the development and use of these autonomous driving AI systems could plausibly lead to incidents involving injury or property damage, but no such incident is reported here. It is not Complementary Information because the article is not primarily about responses to a past incident, nor is it Unrelated since it clearly involves AI systems and their societal impact.

Autonomous driving: don't just focus on making money

2022-08-11
China News
Why's our monitor labelling this an incident or hazard?
The article centers on the regulatory framework and public discourse about autonomous driving AI systems, emphasizing safety and cautious commercial rollout. It does not describe any realized harm or direct incident caused by AI systems, nor does it report a near-miss or credible imminent risk event. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it provides context and updates on governance and societal responses to AI in autonomous vehicles, fitting the definition of Complementary Information.

On August 11, Xiaomi Group held Lei Jun's annual speech and new product launch event, unveiling seven products including phones, watches, earphones, tablets, a washer-dryer, and a range hood.

2022-08-12
证券之星
Why's our monitor labelling this an incident or hazard?
The event describes the development and deployment progress of AI systems (autonomous driving and humanoid robots) but does not report any direct or indirect harm caused by these systems. The article focuses on product announcements, R&D progress, and investment plans, which are typical of Complementary Information. There is no mention of accidents, rights violations, or other harms linked to the AI systems. Therefore, this is not an AI Incident or AI Hazard but Complementary Information that enhances understanding of the AI ecosystem and ongoing developments.

Cailian Press, August 12 (reporter Xu Hao) — Another serious traffic accident caused by XPeng's autonomous driving technology has triggered industry discussion about how liability is determined when driver-assistance technology is involved in an accident.

2022-08-12
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Xpeng's AI-assisted driving system (LCC) during the accident, which failed to prevent the collision. This constitutes direct involvement of an AI system leading to harm (injury to a pedestrian). The discussion about legal responsibility further confirms the AI system's role in the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction directly led to harm to a person.

Lei Jun reveals latest progress on Xiaomi's car: autonomous driving is the first breakthrough area, with 140 test vehicles already planned

2022-08-11
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving technology) in development and testing phases, but there is no indication of any realized harm or incident caused by the AI system. The article focuses on progress and plans rather than any harm or risk. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides supporting context about AI development and testing in the automotive sector.

Baidu's autonomous driving is advancing rapidly and has overtaken Tesla in one stroke, with intelligent transportation playing a crucial role behind the scenes.

2022-08-10
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in autonomous vehicles (Robotaxi) operating without safety drivers on public roads, which is a direct use of AI systems influencing physical environments. The AI systems are actively used in commercial service, implying realized deployment rather than hypothetical or future potential. Although no specific harm or accident is reported, the use of AI in fully driverless vehicles on public roads inherently carries risk of injury or harm to people, which is a recognized harm category. The article also mentions intelligent traffic systems that coordinate with autonomous vehicles, further embedding AI in critical infrastructure management. Given the direct involvement of AI systems in a safety-critical application with plausible risk of harm, this event meets the criteria for an AI Incident. The absence of reported accidents does not negate the classification because the deployment itself in a real-world environment with potential for harm is sufficient. The article is not merely about research, policy, or general AI news, but about operational AI systems impacting public safety and infrastructure.

Cailian Press, August 9 (editor Liu Yue) — The Ministry of Transport yesterday issued a draft for public comment encouraging the use of autonomous vehicles for taxi passenger services in relatively controllable scenarios; conditionally and highly automated vehicles engaged in transport operations should carry a driver.

2022-08-09
证券之星
Why's our monitor labelling this an incident or hazard?
The article primarily provides complementary information about the regulatory encouragement for autonomous vehicle deployment, technological advancements in inertial navigation systems, market forecasts, and industry perspectives. There is no mention of any actual harm, malfunction, or misuse of AI systems leading to injury, rights violations, or other harms. The discussion of risks is speculative and framed as potential challenges rather than imminent hazards. Therefore, the content fits the definition of Complementary Information, as it enhances understanding of the AI ecosystem and ongoing developments without reporting a new AI Incident or AI Hazard.

Government authorities in Chongqing and Wuhan recently took the lead in issuing pilot policies for fully driverless commercial autonomous driving and granted Baidu the country's first driverless demonstration operation permits, allowing autonomous vehicles without in-car safety operators to provide commercial services on public roads.

2022-08-10
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of autonomous driving technology, which is explicitly mentioned. However, the article does not describe any realized harm or incidents caused by these AI systems. Instead, it reports on regulatory approvals, market growth, and technology deployment plans, which are forward-looking and contextual. There is no indication of any accident, malfunction, or harm resulting from the AI systems. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information that provides context on AI ecosystem developments and governance.

The Ministry of Transport recently released a draft for public comment of the Guidelines for Transport Safety Services of Autonomous Vehicles (Trial). The draft defines the scope of autonomous vehicle transport services and clarifies specific application scenarios.

2022-08-10
证券之星
Why's our monitor labelling this an incident or hazard?
The article primarily provides information about policy development, regulatory guidelines, pilot projects, and industry investment related to autonomous driving AI systems. It does not describe any realized harm, injury, rights violations, or disruptions caused by AI systems, nor does it report any near-miss or plausible future harm events. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it is best classified as Complementary Information because it offers important context on governance, safety standards, and the evolving AI ecosystem in autonomous driving.

21 New Auto | Another assisted-driving accident? XPeng P7 crashes into stalled vehicle at high speed in fatal accident

2022-08-11
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-based assisted driving system (LCC) during the accident, which failed to detect a stationary vehicle and did not warn the driver, leading to a fatal collision. The AI system's malfunction or limitations are directly linked to the harm (death of a person). The involvement of AI in the development and use phases, the direct causation of harm, and the detailed discussion of system limitations and prior incidents confirm this as an AI Incident rather than a hazard or complementary information. The harm is realized and directly connected to the AI system's failure.

Who will keep a steady hand on the "steering wheel" of autonomous driving?

2022-08-11
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in the form of autonomous driving vehicles operating without human safety drivers on public roads. It details the regulatory and safety measures being introduced to mitigate risks but does not report any actual harm or accident caused by these AI systems. The potential for harm is clearly recognized, given the safety concerns and the need for insurance and emergency response protocols. Since the AI systems' use could plausibly lead to injury, property damage, or disruption if failures occur, this qualifies as an AI Hazard. There is no indication of realized harm or incident yet, so it is not an AI Incident. The article is not merely complementary information because it focuses on the deployment and regulatory challenges of AI systems that could lead to harm, rather than just updates or responses to past incidents.

Driverless taxis are here! Would you dare to ride one? Will the once "unimaginable" give rise to a series of great companies?

2022-08-09
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems in the form of fully autonomous driverless taxis operating commercially in designated zones. This clearly involves AI systems in use. However, there is no indication of any injury, property damage, rights violations, or other harms caused by these AI systems at this stage. The article focuses on the introduction and potential future impact of the technology, as well as investment considerations, without describing any realized or imminent harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual and forward-looking information about AI deployment and market implications, fitting the definition of Complementary Information.

Fatal crash involving XPeng's intelligent driving: high-speed collision with a stationary vehicle in good lighting, with the system failing inside and out

2022-08-11
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the Xiaopeng P7's intelligent driving system (XPilot 3.0)—which was active and responsible for vehicle control at the time of the accident. The system failed to detect a stationary vehicle and did not trigger any safety measures such as automatic emergency braking or driver alerts. This malfunction directly caused a fatal collision, resulting in loss of life, which is a clear harm to a person. The involvement of the AI system in the development, use, and malfunction stages is evident, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident under the OECD framework.

XPeng vehicle strikes and kills a pedestrian: whose fault is it?

2022-08-12
新浪财经
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident where an AI-assisted driving system was active and involved. The AI system's perception and control functions were in use, but the driver was distracted and did not take over control when necessary, leading to a collision causing death. The AI system's technological limitations and the driver's misunderstanding of the system's capabilities are central to the incident. This fits the definition of an AI Incident as the AI system's use and malfunction (or limitations) directly and indirectly led to harm (death). The article also references similar past fatal incidents involving AI-assisted driving systems, reinforcing the classification. The presence of the AI system is explicit, the harm is realized, and the AI system's role is pivotal in the chain of events.

Fatal crash involving an XPeng P7! Why do assisted-driving accidents keep happening?

2022-08-12
新浪财经
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's assisted driving system is an AI system that uses sensor fusion, including cameras and millimeter-wave radar, and AI algorithms for perception and control. The accident was caused by the system's failure to detect a stationary vehicle and warn or brake, resulting in a fatal collision. This constitutes direct harm to a person caused by the malfunction and use of an AI system. Therefore, this event meets the criteria for an AI Incident due to injury and death caused by the AI system's malfunction and use.

After the XPeng collision, do you still dare to use assisted driving?

2022-08-12
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the driver assistance system (XPILOT 2.5) in the Xiaopeng P7 vehicle, which failed to recognize a stationary vehicle and a person, leading to a fatal collision. This is a direct harm to human life caused by the malfunction of an AI system in use. The article details the AI system's sensor and perception limitations and the resulting accident, fulfilling the criteria for an AI Incident due to injury or harm to a person caused by the AI system's malfunction during its use.

1,300 riders tried Jiangxinzhou's autonomous taxis in six months: how far away is widespread autonomous driving?

2022-08-10
xdkb.net
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (L4 autonomous vehicles) in use and development. However, it does not report any realized harm or incidents caused by these systems, nor does it highlight any credible risk of imminent harm. Instead, it focuses on the progress, regulatory environment, and industry outlook for autonomous driving technology. This fits the definition of Complementary Information, as it provides supporting context and updates on AI system deployment and governance without describing a new incident or hazard.

XPeng P7 rear-end collision causes casualties; driver was distracted after enabling assisted driving

2022-08-12
police.news.sohu.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the Xiaopeng P7's lane-centering assistance (LCC) and adaptive cruise control (ACC) features, which are AI-based driver assistance systems. The accident caused human injury and death, fulfilling the harm criteria. The AI system failed to detect a stationary vehicle and did not warn or brake automatically, which directly contributed to the collision. The driver's distraction while relying on the system further implicates the AI system's role. Therefore, this is an AI Incident due to the AI system's malfunction and use leading to fatal harm.

XPeng vehicle strikes and kills a pedestrian: whose fault is it?

2022-08-12
police.news.sohu.com
Why's our monitor labelling this an incident or hazard?
The Xiaopeng P7's lane keeping and adaptive cruise control system is an AI system that was active during the accident. The system's inability to detect the stationary vehicle and the driver's failure to intervene led directly to the fatal collision. The article clearly links the AI system's use and its technical limitations to the incident causing death and injury, fulfilling the criteria for an AI Incident. The involvement is direct and the harm is realized, not just potential. Therefore, this event qualifies as an AI Incident.

On August 11, an XPeng P7 traveling in the leftmost lane of an elevated road in Ningbo collided with a stalled vehicle ahead. A person standing at the rear of the stalled vehicle was struck and killed. Chat records attributed to the driver involved have also circulated: "The speed limit there was 80, and I had set it to 80 km/h with LCC (Lane Centering Control) on. There had always been a warning before, but for some reason there wasn't one this time, and I happened to be distracted at that moment." XPeng responded: "We have verified that on the afternoon of August 10, a vehicle driven by an owner in Ningbo collided with a person who was inspecting a vehicle breakdown ahead, resulting in casualties. At present..."

2022-08-12
dingjinkun.blog.caixin.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the Lane Centering Control (LCC) driver assistance system in the Xpeng vehicle. The system's failure to warn or prevent collision directly led to a fatal injury, constituting harm to a person. The incident is under legal investigation for product safety and misleading advertising, indicating the AI system's role in causing harm. Therefore, this qualifies as an AI Incident because the AI system's malfunction and use directly caused injury and death.

Lei Jun: Xiaomi's first-phase autonomous driving R&D investment is 3.3 billion yuan, aiming to enter the industry's first tier by 2024 (Source: Beijing Daily client)

2022-08-12
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article focuses on Xiaomi's research and development efforts in autonomous driving AI technology, including testing and team building, without mentioning any harm, malfunction, or misuse. There is no indication of injury, rights violations, property damage, or other harms caused or plausibly caused by the AI system at this stage. Therefore, it does not qualify as an AI Incident or AI Hazard. Instead, it is a general update on AI system development and deployment plans, which fits the definition of Complementary Information.

The Ministry of Transport issues a major document: autonomous driving now has rules to follow

2022-08-12
auto.cctv.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems, specifically autonomous driving vehicles, which are AI-powered systems making real-time decisions to navigate and operate vehicles. The guideline addresses the use and regulation of these AI systems to prevent harm and ensure safety. However, the article does not report any actual harm or incident caused by autonomous vehicles, nor does it describe a specific event where harm occurred or was narrowly avoided. Instead, it focuses on policy and regulatory measures to manage potential risks and promote safe deployment. Therefore, this is best classified as Complementary Information, as it provides governance and societal response context to the AI ecosystem without describing a new AI Incident or AI Hazard.

Exclusively securing licenses for fully driverless commercial operation, Baidu's auto...

2022-08-10
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous driving technology) in fully driverless commercial operations transporting passengers, which directly relates to AI systems influencing physical environments and human safety. The autonomous vehicles are operating without safety drivers, meaning the AI system's decisions directly affect passenger and public safety. The deployment and operation of these systems in real-world conditions constitute the use of AI systems. Since the article describes active commercial operation, the potential for harm to people (injury or health harm) is realized and ongoing, making this an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or future possibilities but reports on actual deployment and operation, which meets the criteria for an AI Incident.

A 200-yuan fine and 3 points! A heavy-handed crackdown to make more people drive attentively

2022-08-11
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (electronic police cameras with upgraded AI capabilities) to detect distracted driving. The use of AI here is in enforcement to reduce traffic accidents caused by distracted driving, which is a significant safety hazard. However, the article does not report any harm caused by the AI system itself, nor does it describe any malfunction or misuse leading to harm. Instead, it describes a governance and enforcement measure using AI to improve road safety. Therefore, this is complementary information about AI deployment and its societal impact rather than an incident or hazard.

XPeng responds to assisted driving accident: will fully cooperate with the investigation

2022-08-11
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of an advanced driver assistance system (LCC) that failed to recognize a hazard, directly contributing to a traffic accident with injury or death. This meets the criteria for an AI Incident because the AI system's malfunction or failure to act led to harm to persons. The manufacturer's response and investigation do not negate the incident classification but provide context for ongoing assessment.

Beijing cracks down on "distracted driving"! Playing with a touchscreen while driving crosses the safety red line

2022-08-11
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of optimized algorithmic logic and enforcement equipment to detect distracted driving, which involves AI systems for image recognition and tracking. However, the article does not describe any harm caused by the AI system or any malfunction. The harm discussed relates to distracted driving itself, which is a general road safety issue, not caused by AI. The AI system is used as a tool to improve enforcement accuracy and evidence collection. This fits the definition of Complementary Information, as it provides context on AI's application in traffic management and enforcement responses to a known safety issue, without reporting new AI-related harm or plausible future harm.

Will commercial driverless operations create more chances to get on board?

2022-08-09
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically fully autonomous driving systems operating commercially without safety drivers. However, it does not report any direct or indirect harm caused by these systems, nor does it describe any incident or malfunction. While the deployment of such systems could plausibly lead to future harms, the article does not highlight any immediate risks or hazards. Instead, it mainly provides information about policy support and the progress of autonomous vehicle commercialization, which fits the definition of Complementary Information as it enhances understanding of the AI ecosystem and governance responses without reporting new incidents or hazards.

"Green-light freedom" and smart lane changes... autonomous-driving-grade navigation launches in Yizhuang

2022-08-12
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous driving and vehicle-road collaboration technologies) in a real-world setting, providing navigation and driving assistance that directly influence driver behavior and traffic management. However, the article does not report any harm or incidents resulting from these AI systems; instead, it highlights benefits and improvements in the traffic experience. Therefore, this is not an AI Incident or AI Hazard. It is a report on the deployment and positive impact of AI technology, fitting the definition of Complementary Information: it provides context and updates on AI system use and societal impact without describing harm or plausible future harm.

Beijing and two other regions make it official! Has the "driverless" era arrived? The safety bottom line must be upheld

2022-08-12
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically autonomous driving systems, and discusses their development, use, and regulatory environment. However, it does not report any actual harm or accident caused by these AI systems, nor does it describe a specific event where harm was narrowly avoided. Instead, it focuses on the current status, safety challenges, and regulatory measures, which fits the definition of Complementary Information. It provides context and updates on AI deployment and governance without describing a new AI Incident or AI Hazard.

Fatal crash involving an XPeng P7: why do assisted-driving accidents keep happening? - cnBeta.COM

2022-08-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the XPilot 3.0 assisted driving system with AI-powered perception components (cameras, radars, neural network-based image recognition). The system's malfunction in failing to detect a stationary vehicle and not alerting the driver directly caused a fatal collision. This meets the definition of an AI Incident because the AI system's use and malfunction directly led to injury and death. The detailed technical explanation of sensor and algorithm limitations confirms the AI system's pivotal role in the harm. Therefore, this is classified as an AI Incident.

Assisted driving is not autonomous driving: that cannot be emphasized enough

2022-08-13
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a driver assistance system) whose use indirectly led to a car accident, constituting harm to a person. The accident is a realized harm linked to the AI system's use and the driver's overreliance on it, so this qualifies as an AI Incident. The article also discusses the broader context of AI system capabilities and marketing, but it does not focus on new hazards or on complementary information such as policy responses. Hence, the classification is AI Incident.

Guorong Securities: commercial operation of intelligent driving is starting to land; recommends seizing investment opportunities from three angles

2022-08-12
China Finance Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in autonomous vehicles being used commercially, including fully driverless taxis operating on public roads, which qualifies as AI system involvement. The deployment and use of these systems could plausibly lead to harms such as injury or disruption if failures occur, making this an AI Hazard. However, since no actual harm or incident is reported, and the focus is on policy, commercialization, and investment opportunities, it does not meet the criteria for an AI Incident. It is not merely complementary information because the core subject is the deployment and operational use of AI systems with potential risks, not just updates or responses to past incidents. Therefore, the classification is AI Hazard.

As little as 1.6 yuan per ride! Autonomous driving fleet begins paid operation in Xiangjiang New Area

2022-08-13
华声在线
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous driving Robotaxi) in active use, but there is no indication of any injury, rights violation, disruption, or other harm caused or plausibly caused by the AI system. The article focuses on the service's launch, operational details, and infrastructure, which fits the definition of Complementary Information, as it provides context and updates on AI deployment without reporting any incident or hazard. Therefore, the classification is Complementary Information.

After the XPeng collision, do you still dare to use assisted driving?

2022-08-13
police.news.sohu.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the Xiaopeng P7's driver assistance system (XPILOT 2.5), which uses AI-based sensors and algorithms to perceive the environment and assist driving. The system's failure to detect a stationary vehicle and a person directly led to a fatal collision, causing harm to a person (death). This meets the definition of an AI Incident because the AI system's malfunction (failure to recognize static obstacles) directly caused injury and death. The article also references previous similar incidents involving AI driver assistance systems, reinforcing the pattern of harm linked to AI system limitations. Hence, the classification as AI Incident is appropriate.