Fatal Xiaomi SU7 AI-Driven Crash in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Xiaomi SU7 electric vehicle crashed on an expressway in Anhui province on March 29 while its AI-powered driving assistance (autopilot) was active. The collision killed three people. Xiaomi is cooperating with local authorities to investigate the incident.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly mentioned as the autonomous driving technology in the Xiaomi vehicle. The accident and resulting deaths are directly linked to the use and malfunction of this AI system, fulfilling the criteria for an AI Incident due to injury and harm to persons. The description clearly states the AI system was active and its failure to prevent the crash led to fatalities, which is direct harm caused by the AI system's use.[AI generated]
AI principles
Safety
Robustness & digital security
Accountability

Industries
Mobility and autonomous vehicles

Affected stakeholders
Consumers

Harm types
Physical (death)

Severity
AI incident

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard


Fatal accident involving a Xiaomi vehicle raises questions about autonomous driving technology

2025-04-02
SIC Notícias
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the autonomous driving technology in the Xiaomi vehicle. The accident and resulting deaths are directly linked to the use and malfunction of this AI system, fulfilling the criteria for an AI Incident due to injury and harm to persons. The description clearly states the AI system was active and its failure to prevent the crash led to fatalities, which is direct harm caused by the AI system's use.

Xiaomi shares drop to 6-week low after fatal SU7 EV crash

2025-04-02
Investing.com
Why's our monitor labelling this an incident or hazard?
The SU7 electric vehicle was operating in an autonomous driving mode (Navigate on Autopilot), which is an AI system that makes real-time driving decisions. The fatal crash resulting in three deaths is a direct harm caused by the AI system's operation. This meets the definition of an AI Incident because the AI system's use directly led to injury and death. The company's cooperation with investigations and the market reaction further confirm the seriousness of the incident.

China: concerns after a fatal accident involving an autonomous Xiaomi car

2025-04-02
Le Figaro
Why's our monitor labelling this an incident or hazard?
The vehicle's autonomous driving feature qualifies as an AI system. The accident caused direct harm (fatalities), and the AI system's use is implicated in the incident. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use.

Xiaomi Car With Driver Assistance Crashes On China Expressway, 3 Reported Dead

2025-04-02
News18
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was involved in an accident with three fatalities, and the advanced driver assistance system was active less than 20 minutes before the crash. The system issued alerts about the driver not holding the steering wheel and obstacles on the road, indicating its involvement in the event. The crash into concrete fencing causing deaths is a direct harm linked to the AI system's use and possible malfunction or failure to prevent the accident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to injury and harm to persons.

Xiaomi shares under pressure after accident involving autonomous vehicle

2025-04-02
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (the Navigation on Autopilot feature) in the vehicle at the time of the accident. The AI system's malfunction or failure to prevent the collision directly led to the deaths of the passengers, constituting injury or harm to persons. This meets the criteria for an AI Incident, as the AI system's use directly caused significant harm. The market and regulatory responses are complementary context but do not change the classification.

Xiaomi SU7's fatal crash puts Chinese autonomous EVs under scrutiny

2025-04-03
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of the car's autonomous driving features at the time of the crash, indicating the involvement of an AI system. The crash caused direct harm (three deaths), which is a clear injury to persons. The AI system's inability to prevent the collision despite warnings suggests a malfunction or failure in the AI system's operation. Therefore, this event meets the definition of an AI Incident, as the AI system's use directly led to significant harm.

Fatal car crash in Anhui raises public concern over use of smart driving

2025-04-03
chinadaily.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the vehicle's autonomous driving system. The fatal crash and resulting deaths constitute injury or harm to persons, fulfilling the criteria for an AI Incident. The AI system's use and possible malfunction or limitations in the autopilot function directly led to the harm. Therefore, this is classified as an AI Incident.

Fatal car crash raises alarm bells on assisted driving tech

2025-04-02
The Standard
Why's our monitor labelling this an incident or hazard?
The vehicle was operating in autonomous mode, which involves AI systems for assisted driving. The crash caused fatalities, which is a direct harm to persons. The AI system's failure to prevent the crash or properly manage the handover to the driver is a direct contributing factor to the harm. Hence, this qualifies as an AI Incident under the definition of an event where AI system use or malfunction has directly led to injury or harm to people.

Fatal accident after driver-assistance alert in China

2025-04-01
onvista
Why's our monitor labelling this an incident or hazard?
The vehicle's driving assistant system is an AI system as it performs autonomous navigation and decision-making tasks. The accident caused fatal injuries, which is a direct harm to persons. The AI system was active and involved in the sequence of events leading to the crash, including issuing an alarm and switching to manual mode. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (fatalities).

Xiaomi cooperates with police after fatal SU7 EV accident

2025-04-01
Wion
Why's our monitor labelling this an incident or hazard?
The SU7 vehicle uses intelligent-assisted driving technology, which qualifies as an AI system under the framework. The fatal accident directly involves the AI system's operation, as the vehicle was in assisted driving mode when the crash occurred. The harm (fatality) has materialized, and the AI system's involvement is pivotal to the incident and investigation. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Automotive industry: fatal accident after driver-assistance alert in China

2025-04-01
News.de
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOA driving assistant) whose use directly led to a fatal accident causing injury and death, which qualifies as harm to persons. The AI system's malfunction or failure to prevent the crash is central to the incident. Therefore, this is an AI Incident as per the definitions, since the AI system's use directly led to harm (fatal injuries).

Xiaomi SU7 electric vehicle crash in Anhui leaves 3 dead

2025-04-01
SHINE
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's Navigate on Autopilot mode is an AI system that was active during the accident. The crash caused fatalities, which constitutes harm to persons. The AI system's detection and deceleration actions, combined with driver intervention, are part of the chain of events leading to the harm. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system in a fatal accident.

Xiaomi's electric vehicle accident in China impacts stock

2025-04-01
Investing.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of smart driving software in the electric vehicle accident that caused fatalities. Smart driving software is an AI system that influences vehicle operation and safety. The deaths of three individuals constitute injury or harm to persons, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm directly linked to AI system use.

Fatal accident in an electric Xiaomi SU7: the brand explains why the car did not brake

2025-04-02
Frandroid
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's NOA system is a semi-autonomous AI system that assists driving, including automatic emergency braking. The accident occurred while this AI system was active, and its limitations in obstacle detection contributed to the fatal crash. The driver's partial intervention and the system's failure to detect the concrete pole led to the collision and deaths. Therefore, this qualifies as an AI Incident because the AI system's use and its malfunction (or limitations) led to injury and death, fulfilling the criteria for harm to persons.

Xiaomi car with driving assistance crashes on expressway in China; 3 dead

2025-04-01
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Xiaomi's advanced driver assistance system, an AI system designed to assist driving but requiring driver attention. The crash, which caused three fatalities, occurred while the system was engaged and after warnings were issued, indicating the AI system's involvement in the incident. The harm (fatalities) is directly linked to the use and possible malfunction or misuse of the AI system. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to injury and death.

Fatal accident involving Xiaomi's first electric car after Autopilot use

2025-04-01
winfuture.de
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's Autopilot is an AI system for autonomous driving. The accident occurred while the Autopilot was active, and despite warnings and a human attempt to regain control, the crash happened causing fatalities. This constitutes direct harm caused by the use and possible malfunction or limitations of the AI system. Therefore, this event qualifies as an AI Incident due to injury and death resulting from the AI system's involvement in vehicle operation.

Xiaomi cooperates with police after fatal accident in China

2025-04-01
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOA driving assistant) whose use directly led to a fatal accident causing harm to people, fulfilling the criteria for an AI Incident. The system's failure to prevent the crash despite switching to manual mode indicates malfunction or insufficient performance. The harm (deaths and property damage) is clearly linked to the AI system's operation during the event, making this an AI Incident rather than a hazard or complementary information.

Three women die in accident involving Xiaomi electric sports car in China

2025-04-01
Die Presse
Why's our monitor labelling this an incident or hazard?
The event describes a fatal accident involving an AI system (the driving assistant/autopilot) whose malfunction or failure to prevent the crash directly led to harm (death of three people). The AI system was active and then switched off due to a construction zone, but the vehicle still crashed, indicating a malfunction or insufficient safety fallback. This meets the criteria for an AI Incident because the AI system's use and malfunction directly led to injury and death (harm to persons).

Automotive: Xiaomi in turmoil after the fatal crash of one of its cars in China

2025-04-02
Les Echos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Xiaomi's level 3 autonomous driving system at the time of the fatal crash, which directly led to the deaths of three individuals. The AI system's involvement is central to the event, as the vehicle was in autonomous mode and the investigation is focused on whether the AI system malfunctioned. This constitutes an AI Incident because the AI system's use and possible malfunction directly caused harm to people (fatal injuries).

Xiaomi Electric Car Crashes Using Autopilot

2025-04-03
Research Snipers
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 electric car's autopilot system is an AI system as it performs autonomous driving functions based on camera inputs. The accident occurred while the AI system was active, and despite warnings, the vehicle crashed causing fatal injuries. This constitutes direct harm to people caused by the use and malfunction or failure of an AI system. Therefore, this event qualifies as an AI Incident under the OECD framework because the AI system's use directly led to injury and death.

Xiaomi Shares Slide After SU7 Sedan With Intelligent Assisted Driving Crashes, Three Dead

2025-04-01
ZeroHedge
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 sedan was operating in an intelligent assisted driving mode, which is an AI system controlling or assisting vehicle navigation. The crash caused three fatalities, which is a direct harm to human life. The AI system's malfunction or limitations in handling the road conditions (construction and lane closure) contributed to the accident. The event involves the use and possible malfunction of an AI system leading to injury and death, fitting the definition of an AI Incident. The company's cooperation with authorities and data submission further confirms the AI system's central role in the incident.

Xiaomi car crash involving self-driving feature sparks concerns in China

2025-04-01
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the Xiaomi car's self-driving system was turned on and operating when the crash occurred, and that the system alerted the driver two seconds before impact. The crash caused fatalities, which is a direct harm to persons. The AI system's malfunction or failure to prevent the accident is a contributing factor. This meets the definition of an AI Incident because the AI system's use directly led to harm to people.

Xiaomi SU7 EV Crash Sparks Safety Concerns, Shares Drop Over 4%

2025-04-01
EconoTimes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of smart driving software, which can be reasonably inferred to be an AI system given its role in autonomous or advanced driver-assistance functions. The fatal crash causing three deaths constitutes direct harm to persons. The AI system's performance and reliability are under scrutiny, indicating its role in the incident. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use or malfunction.

Xiaomi SU7 EV involved in fatal crash, leaving three dead: Accident raises questions about smart driving systems

2025-04-01
mint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the advanced driver assistance system was engaged before the crash and issued warnings, indicating AI system involvement in the vehicle's operation. The crash resulted in three fatalities, which is a direct harm to persons. The AI system's malfunction or failure to prevent the accident, or the overreliance on it by the driver, directly contributed to the harm. This fits the definition of an AI Incident, as the AI system's use has directly led to injury and death.

Assisted driving tech in focus after fatal electric car crash in China

2025-04-02
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the vehicle was using Xiaomi's autonomous driving feature when it crashed into a guardrail, resulting in fatalities. The AI system warned of obstacles and attempted to decelerate but was unable to prevent the collision. This shows direct involvement of the AI system's use and malfunction in causing harm to people. The harm is realized and severe (fatalities), and the AI system's role is pivotal. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Fatal Xiaomi crash raises questions about assisted driving tech in China

2025-04-02
The Manila Times
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating in an AI-assisted driving mode at the time of the crash, indicating the involvement of an AI system. The crash caused fatal injuries to three people, which is a direct harm to health. The AI system's role is pivotal as it was controlling the vehicle and issued warnings before the accident, suggesting that the AI's performance or interaction with the driver contributed to the incident. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the fatal harm caused.

Xiaomi car crash sparks concerns, BYD beats Tesla again: 7 EV reads

2025-04-02
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 electric vehicle's autonomous driving feature is an AI system involved in the incident. The crash resulted in three fatalities, which is a direct harm to human health caused by the AI system's use or malfunction. This meets the definition of an AI Incident as the AI system's involvement directly led to injury and death. The article explicitly links the autonomous driving feature to the accident, confirming AI system involvement and realized harm.

Three dead in Xiaomi EV crash with driver assistant activated

2025-04-02
Luxemburger Wort
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the driver assistance software active before and during the crash. The crash caused direct harm (three fatalities), fulfilling the criteria for an AI Incident. The AI system's warnings and its role in vehicle control are central to the incident. Therefore, this is classified as an AI Incident due to the direct link between the AI system's use and the resulting harm.

Fatal accident involving a Xiaomi vehicle raises questions about autonomous driving technology

2025-04-02
SAPO
Why's our monitor labelling this an incident or hazard?
The vehicle's autonomous driving system was active and issued a warning before returning control to the driver, but the collision still occurred, indicating a malfunction or failure in the AI system's operation or its interaction with the driver. The harm is direct and severe (three fatalities), meeting the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm caused by the AI system's use.

Deadly Crash Casts Shadow Over Xiaomi's EV Ambitions

2025-04-01
caixinglobal.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of Xiaomi's AI-powered assisted driving system during the fatal crash, which led to the deaths of three university students. This meets the criteria for an AI Incident because the AI system's use directly led to harm to persons. The involvement of the AI system in the vehicle's operation at the time of the crash establishes a direct link to the harm caused.

Fatal accident involving Xiaomi's first electric car after Autopilot use

2025-04-01
m.winfuture.de
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's 'Navigate on Autopilot' is an AI system for autonomous driving. The accident occurred while this AI system was in use, and despite warnings, the vehicle crashed fatally. This directly led to injury and death, fulfilling the criteria for an AI Incident as the AI system's use and possible malfunction contributed to the harm. Therefore, this event is classified as an AI Incident.

Fatal accident in the Xiaomi SU7: was the Autopilot to blame?

2025-04-01
AUTO BILD
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's autopilot system is an AI system as it performs autonomous navigation and decision-making. The accident occurred while the AI system was active and involved in the vehicle's operation. The fatal injuries to the passengers constitute harm to persons. The AI system's role in switching modes and issuing warnings, as well as the insufficient braking, suggests its involvement in the chain of events leading to the harm. Therefore, this qualifies as an AI Incident due to direct harm caused during the use of an AI system in a critical safety context.

Xiaomi Driverless Technology in Focus After Fatal Electric Car Crash

2025-04-01
The Athletic
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Xiaomi's autonomous driving AI system during the fatal crash, which directly caused harm (three deaths). The AI system's inability to prevent the collision despite warnings indicates a malfunction or failure in the AI's operation. This meets the criteria for an AI Incident, as the AI system's use directly led to injury and death. The involvement is not merely potential or future harm, but realized harm, so it is not an AI Hazard. It is not Complementary Information or Unrelated because the core event is the fatal crash linked to the AI system's operation.

Xiaomi shares fall 5.5% after SU7 EV accident claims three lives

2025-04-01
Dimsum Daily
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of Xiaomi's intelligent driving system (Navigate on Autopilot) engaged before the crash, with warnings issued by the system and driver alerts. The accident caused three fatalities, which is a direct harm to persons. The AI system's operation and its interaction with the driver are central to the event, indicating that the AI system's use and possible malfunction or limitations contributed to the harm. Hence, this is an AI Incident as per the definitions provided.

Xiaomi: investor confidence shaken after accident

2025-04-01
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically the autonomous driving technology in the Xiaomi electric vehicle. The accident occurred while the vehicle was operating in an AI-driven mode, and the failure of this system to prevent the crash constitutes a malfunction leading to harm. The article indicates direct harm resulting from the AI system's malfunction, fulfilling the criteria for an AI Incident. The impact on investor confidence and regulatory concerns further underscore the significance of the incident.

Xiaomi car crash kills three, sparks probe over driver assistance tech

2025-04-01
Business Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Xiaomi's advanced driver assistance system being active before the crash, with warnings issued to the driver who failed to maintain control. The crash caused three deaths, constituting injury or harm to persons. The AI system's use and possible malfunction or misuse directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm involving an AI system.

Xiaomi electric car accident in China causes company's shares to fall

2025-04-01
today.az
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions an AI system (driver assistance system) in use at the time of the accident, which led to fatalities and property damage. The system's warnings and subsequent failure to prevent the collision indicate its direct involvement in the harm. The presence of fatalities and destruction of the vehicle meets the criteria for injury and harm to persons and harm to property. Hence, this is an AI Incident rather than a hazard or complementary information.

Xiaomi car with driver assistance crashes, three reported dead

2025-04-01
The Business Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Xiaomi's advanced driver assistance AI system was active less than 20 minutes before the crash and issued warnings that the driver ignored or responded too late. The crash caused three deaths, which is a clear harm to persons. The AI system's involvement in the accident is direct, as it was controlling or assisting vehicle operation and failed to prevent the crash. This meets the definition of an AI Incident, as the AI system's use directly led to injury and death. The event is not merely a potential hazard or complementary information but a realized harm linked to AI system use.

Fatal Crash in China Puts Assisted Driving Tech Under Scrutiny

2025-04-02
DNyuz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Xiaomi's autonomous driving AI system during the crash, which directly led to fatal injuries. The AI system warned of obstacles and attempted to decelerate but failed to prevent the collision. This constitutes an AI Incident because the AI system's use and possible malfunction directly caused harm to people. The involvement of the AI system in the vehicle's operation and the resulting fatalities meet the criteria for an AI Incident under the OECD framework.

Xiaomi EV Crash in China Raises Concerns About Smart Driving Software

2025-04-02
Bloomberg.com
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 electric vehicle uses smart driving software, which qualifies as an AI system due to its autonomous or semi-autonomous driving capabilities. The accident caused deaths, which is a direct harm to people. Since the AI system's malfunction or failure is implicated in the crash, this constitutes an AI Incident under the definition of harm caused by the use or malfunction of an AI system.

Assisted driving tech in focus after fatal electric car crash in China

2025-04-02
The Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the assisted-driving Navigate On Autopilot feature) whose use directly led to a fatal crash causing injury and death. The AI system was active, detected obstacles, and attempted to respond but was unable to prevent the collision. This meets the criteria for an AI Incident as the AI system's malfunction or limitations directly contributed to harm to persons. The presence of the AI system is clear, the harm is realized, and the causal link is direct.

Three reported dead after Xiaomi car with driver assistance crashes

2025-04-02
The Japan Times
Why's our monitor labelling this an incident or hazard?
The vehicle's advanced driver assistance system qualifies as an AI system due to its autonomous or semi-autonomous driving capabilities. The crash causing fatalities directly links the AI system's use to harm to persons, fulfilling the criteria for an AI Incident. The report indicates the harm has occurred, not just a potential risk, so it is not merely a hazard. The involvement of the AI system in the accident and resulting deaths justifies classification as an AI Incident.

Fatal Xiaomi crash raises questions about assisted driving tech in...

2025-04-02
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Xiaomi's Navigate On Autopilot assisted driving mode) that was in use immediately before and during the fatal crash. The AI system's malfunction or failure to prevent the accident, despite detecting an obstacle and warning the driver, directly contributed to the deaths of three individuals, which is a clear harm to human health. The incident is not merely a potential risk but a realized harm caused by the AI system's use, thus qualifying as an AI Incident rather than a hazard or complementary information. The investigation and public scrutiny further emphasize the significance of the AI system's role in the incident.

Fatal Xiaomi crash raises questions about assisted driving tech in China

2025-04-02
Yahoo News
Why's our monitor labelling this an incident or hazard?
The vehicle was operating in an AI-assisted driving mode, which is an AI system as it infers from input (road conditions, obstacles) to generate outputs (warnings, control handover) influencing the vehicle's operation. The fatal crash and resulting deaths are direct harms caused during the use of this AI system. The company's investigation and public scrutiny further confirm the AI system's pivotal role in the incident. Hence, this is an AI Incident due to direct harm to persons caused by the AI system's use and malfunction or limitations.

China's Xiaomi says it is cooperating with police after fatal EV...

2025-04-01
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The SU7 EV's Navigate on Autopilot mode is an AI system that assists driving. The fatal accident occurred while this AI system was engaged, and although a human driver attempted to intervene, the collision still happened. This shows the AI system's malfunction or failure to prevent harm, directly leading to injury or death. Therefore, this qualifies as an AI Incident due to injury/harm to persons caused directly or indirectly by the AI system's use.

Xiaomi's Electric Vehicle Accident Raises Questions on Smart Driving Technology

2025-04-01
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 electric vehicle was operating in autopilot mode, an AI system for smart driving, when it crashed into a cement pole causing fatalities. The AI system's failure to safely navigate and prevent the accident directly caused harm to human life, which is a clear injury to persons. This meets the definition of an AI Incident as the AI system's malfunction directly led to harm. The lack of advanced LiDAR technology is noted as a contributing factor to the system's limitations. Therefore, this event is classified as an AI Incident.

China's Xiaomi says it is cooperating with police after fatal EV accident

2025-04-01
1470 & 100.3 WMBD
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the vehicle was in an AI-assisted driving mode (Navigate on Autopilot) before the fatal accident, indicating the involvement of an AI system. The accident caused death, which is a direct harm to persons. Xiaomi's cooperation with police and provision of system data further confirms the AI system's role in the event. Hence, this is an AI Incident as the AI system's use directly led to harm.

China's Xiaomi says it is cooperating with police after fatal EV accident

2025-04-01
The Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-based assisted driving system (Navigate on Autopilot) in the vehicle involved in a fatal accident. The AI system was active and controlling the vehicle before the driver intervened and the collision occurred. This directly links the AI system's use to harm to human life, fulfilling the criteria for an AI Incident. The involvement of the AI system in the accident and resulting fatalities meets the definition of an AI Incident as it caused injury or harm to persons.

Autopilot disengaged: Three women died in a crash in China

2025-04-01
Kleine Zeitung
Why's our monitor labelling this an incident or hazard?
The Xiaomi vehicle's Navigation on Autopilot is an AI system managing driving tasks. The incident resulted in the death of three people, constituting harm to persons. The AI system was active and then switched to manual mode, but the crash still occurred, indicating a malfunction or failure in the AI system's operation or handover process. This meets the criteria for an AI Incident as the AI system's use directly or indirectly led to injury or death. The lack of information on door opening may also suggest further AI-related safety issues. Therefore, this event is classified as an AI Incident.

Xiaomi's Stock Plunges After Chinese Tech Giant Shares Details About Fatal SU7 Car Crash

2025-04-01
yicaiglobal.com
Why's our monitor labelling this an incident or hazard?
The SU7's Navigate on Autopilot mode is an AI system that infers from sensor inputs to control vehicle behavior. The crash, which caused three fatalities, occurred while the AI system was engaged, indicating the AI system's involvement in the incident. The event is a clear AI Incident because the AI system's use directly led to injury and death. The company's cooperation with authorities and investigation does not negate the fact that harm occurred due to the AI system's operation.

China's Xiaomi says cooperating with police over fatal EV accident

2025-04-01
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article describes a fatal accident involving a Xiaomi electric vehicle operating in an AI-assisted driving mode (Navigate on Autopilot). The AI system was active and controlling the vehicle before the crash, which led to a fatality. The company's cooperation with police and provision of system data further confirms the AI system's involvement. This meets the criteria for an AI Incident because the AI system's use directly led to harm to a person.

Der Tag: EV driving assistant suddenly switches off - three women die in a crash in China

2025-04-01
n-tv.de
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (the NOA driving assistant) whose use directly led to a fatal accident causing injury and death (harm to persons). The AI system was active and then switched modes shortly before the crash, indicating its involvement in the chain of events leading to harm. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use/malfunction and the resulting fatalities.

Fatal accident after driver-assist alert in China

2025-04-01
finanzen.ch
Why's our monitor labelling this an incident or hazard?
The Xiaomi car's Navigation on Autopilot system is an AI system, as it performs autonomous driving assistance functions. The accident occurred while the AI system was active; it issued an alert, but the vehicle still crashed, causing fatalities. This is direct harm to persons caused by the use, and possible malfunction or failure, of the AI system. Therefore, this event meets the criteria for an AI Incident due to injury and death linked to the AI system's operation.

Traffic accident in China: Xiaomi to cooperate with police after fatal crash following driver-assist alert

2025-04-01
Der Spiegel magazine
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI-based driving assistance system that attempted to alert the driver and switch to manual mode but failed to prevent a collision causing a fatal accident. This constitutes direct harm to a person caused by the AI system's malfunction or insufficient intervention. Therefore, it qualifies as an AI Incident under the definition of injury or harm to a person resulting from the use or malfunction of an AI system.

China: Fatal accident during autonomous driving

2025-04-01
BRF Nachrichten
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly, as the car was autonomously driving and switched modes due to an alert. The fatal crash and resulting deaths are direct harms caused by the AI system's operation or malfunction. The incident meets the criteria for an AI Incident because the AI system's use directly led to injury and death. The discussion about doors not opening further indicates potential safety failures related to the AI system or vehicle design. Hence, this is not merely a hazard or complementary information but a confirmed AI Incident.

Alert at 97 km/h in China: EV with driver assistance crashes into roadworks - three dead

2025-04-01
n-tv.de
Why's our monitor labelling this an incident or hazard?
The vehicle's AI driving assistance system was actively engaged and attempted to alert and switch control before the crash, indicating AI involvement in the event. The crash caused direct harm to human life (three fatalities), fulfilling the criteria for an AI Incident. The AI system's failure to prevent the accident or adequately manage the situation is central to the harm. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use/malfunction and the fatal outcome.

Xiaomi's Shares Fall After Fatal Car Accident Involving One of Its EVs

2025-04-01
Morningstar, Inc.
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the advanced driver-assistance system in Xiaomi's EV—operating at the time of a fatal accident causing three deaths. The system was in use and its outputs (notifications, speed adjustments) were part of the sequence leading to the crash. The harm (fatalities) has occurred, and the AI system's role is pivotal in the incident. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Xiaomi shares fall sharply: Three dead in EV crash in China

2025-04-01
finanzen.at
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system in the form of the autonomous or assisted driving system in the Xiaomi electric vehicle, which switched modes and failed to avoid a fatal collision. The crash caused direct harm to people (three deaths), fulfilling the criteria for an AI Incident. The involvement of the AI system is central to the event, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident.

Fatal accident after driver-assist alert in China

2025-04-01
Nau
Why's our monitor labelling this an incident or hazard?
The event describes a fatal accident involving an AI system (the driving assistant NOA) whose use directly led to harm (three deaths). The AI system was active and gave an alert before the crash, indicating its operational role. The crash and resulting fatalities constitute injury or harm to persons, fulfilling the criteria for an AI Incident. Although the precise malfunction or failure cause is still being investigated, the AI system's involvement in the accident and harm is explicit and direct.

Did the autopilot fail? Xiaomi EV crashes at a roadworks site

2025-04-02
CHIP
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's Navigation on Autopilot system is an AI system as it involves autonomous driving assistance using cameras, radar, ultrasound, and lidar sensors to make real-time driving decisions. The accident occurred while the AI system was active or transitioning, and the collision caused fatal injuries, which qualifies as direct harm to persons. The event is a clear AI Incident because the AI system's use and possible malfunction or failure contributed directly to the fatal crash. The investigation into the cause and the discussion about the system's performance further support this classification.

Traffic accident in China: Xiaomi to cooperate with police after fatal crash following driver-assist alert

2025-04-01
manager-magazin.de
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (the driving assistant NOA) whose use directly contributed to a fatal accident causing harm to people (three deaths). The AI system's malfunction or failure to prevent the crash, despite issuing an alert, is a direct factor in the harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to injury and death.

Xiaomi shares: Crash shakes investor confidence

2025-04-01
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the intelligent driving assistant in the Xiaomi SU7 electric vehicle, which was active at the time of the crash. The crash is a direct harm event involving the AI system's use and possible malfunction or failure to prevent the accident. The harm includes physical risk to persons and damage to property. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm and disruption, and the event is not merely a potential hazard or complementary information.

Fatal Xiaomi Crash Sparks Debate Over Assisted Driving Tech in China

2025-04-02
Daily Observer
Why's our monitor labelling this an incident or hazard?
The vehicle was operating in an AI-driven autonomous mode at the time of the crash, and the accident resulted in fatalities. The AI system's involvement in controlling the vehicle and issuing warnings before the crash indicates that the AI system's use and potential malfunction or failure to prevent the accident directly contributed to the harm. Therefore, this qualifies as an AI Incident due to injury and harm to persons caused by the AI system's operation.

Xiaomi car crash sparks concerns, BYD beats Tesla again: 7 EV reads

2025-04-02
Today Headline
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (the autonomous driving feature) whose malfunction or failure directly caused injury and death, fulfilling the criteria for an AI Incident. The involvement of the AI system in the crash is explicit, and the harm (loss of life) is clearly stated. Therefore, this event qualifies as an AI Incident.

Families Demand Answers After Fatal Xiaomi SU7 Crash

2025-04-02
yicaiglobal.com
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating in Navigation Assisted Driving mode, which involves AI systems for perception and decision-making. The article details how the AI system detected a barrier but failed to trigger timely emergency braking, leading to a high-speed crash. The malfunction of AI algorithms and emergency response features (e.g., door locks failing to open) contributed to the fatalities. The involvement of AI in the vehicle's operation and the direct link to fatal injuries meet the criteria for an AI Incident. The article also discusses flaws in the AI perception algorithm and safety design, confirming the AI system's role in causing harm.

China: Concerns after a fatal accident involving an autonomous Xiaomi car

2025-04-02
rtl.fr
Why's our monitor labelling this an incident or hazard?
The vehicle was operating in autonomous mode, which involves an AI system controlling navigation and driving decisions. The crash directly caused the deaths of three individuals, constituting injury or harm to persons. The AI system's malfunction or failure to adequately prevent the collision is a direct contributing factor. Hence, this event meets the criteria for an AI Incident as defined by the framework.

Fatal accident raises doubts about Xiaomi's autonomous driving technology

2025-04-02
Correio da Manha
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the autonomous driving technology in the Xiaomi vehicle. The system was active and made decisions (detecting obstacle, issuing warning, returning control) immediately before the fatal crash. The crash caused direct harm (death of three people), fulfilling the criteria for an AI Incident. The AI system's malfunction or failure to adequately prevent the accident is a direct contributing factor to the harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Fatal accident raises doubts about Xiaomi's autonomous driving

2025-04-02
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the autonomous driving technology in the Xiaomi vehicle. The use of this AI system directly led to a fatal accident causing injury and death, fulfilling the criteria for an AI Incident. The system's malfunction or failure to prevent the crash, despite issuing a warning and returning control to the driver, is central to the harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Three students die after their autonomous electric vehicle hits a concrete barrier in China

2025-04-02
La Provence
Why's our monitor labelling this an incident or hazard?
The vehicle was operating in autonomous mode (Navigate On Autopilot) when it crashed, directly linking the AI system's use to the fatal accident. The harm is realized (three deaths), and the AI system's failure or malfunction is a contributing factor. This meets the definition of an AI Incident as the AI system's use has directly led to injury and death. The investigation and public concern further support the classification as an incident rather than a hazard or complementary information.

China: Concerns after a fatal accident involving an autonomous Xiaomi car

2025-04-02
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The vehicle was operating in an autonomous driving mode, which is an AI system that makes real-time decisions about navigation and control. The accident directly caused fatalities, which is a clear harm to persons. The AI system's detection and warning, followed by the crash, indicate a malfunction or failure in the AI system's operation or its interaction with the driver. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction directly led to injury and death.

China: Concerns after the crash of an autonomous Xiaomi car that killed three students

2025-04-02
leparisien.fr
Why's our monitor labelling this an incident or hazard?
The vehicle's autonomous driving feature qualifies as an AI system because it performs real-time decision-making for navigation and control. The accident caused direct harm (deaths), fulfilling the criteria for an AI Incident. Xiaomi's cooperation with police indicates the AI system's role is under investigation, but the fatal outcome is clear and directly linked to the AI system's use.

First Fatal Xiaomi SU7 Crash Sparks Questions About Self-Driving Tech And Locked Doors

2025-04-02
Carscoops
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's Navigate on Autopilot system is an AI system controlling vehicle navigation and braking. The crash occurred while the AI system was engaged and issuing warnings, indicating its involvement in the event. The fatalities constitute injury or harm to persons, fulfilling the harm criterion. The possible failure of the doors to unlock post-impact further indicates malfunction or safety issues related to the AI system or vehicle design. Hence, the event meets the definition of an AI Incident as the AI system's use and possible malfunction directly led to fatal harm.

Xiaomi crash sparks concern over assisted driving

2025-04-02
Taipei Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the vehicle was in Xiaomi's Navigate On Autopilot assisted driving mode before the crash, indicating the involvement of an AI system. The crash caused fatalities and property damage (vehicle fire), which are direct harms to persons and property. The AI system's failure to prevent the crash or adequately hand control to the driver contributed to the incident. Hence, this is an AI Incident as the AI system's use directly led to harm.

Chinese driverless car chief breaks silence after first fatal crash

2025-04-02
Newsweek
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOA intelligent-assisted driving mode) that was in use during the crash. The system's operation and interaction with the driver were central to the incident. The crash caused direct harm to human life, fulfilling the criteria for an AI Incident. The description indicates the AI system's role in the accident, including its detection of driver inattention and the vehicle's behavior leading up to the collision. Therefore, this is classified as an AI Incident due to the direct harm caused and the AI system's involvement in the fatal crash.

Xiaomi EV crash sparks doubt about assisted driving tech

2025-04-02
newagebd.net
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (the assisted driving mode) whose use directly led to a fatal accident, causing injury and death (harm to persons). The AI system's malfunction or limitations in handling the roadwork obstacle and subsequent crash are central to the incident. Therefore, this qualifies as an AI Incident because the AI system's use directly contributed to harm (fatalities) and raises serious safety concerns.

Xiaomi SU7 Crashes With ADAS On, Three Reported Dead

2025-04-02
DSF.my | Drive Safe & Fast
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ADAS with Navigation Assisted Driving) that was active and issuing alerts before the crash. The system's use and the driver's failure to comply with safety requirements led directly to a fatal accident causing harm to people. This meets the definition of an AI Incident because the AI system's use and malfunction (or limitations) directly led to injury and death. The involvement of the AI system is explicit, and the harm is realized and severe.

Fatal Xiaomi SU7 accident raises fears about autonomous driving technology

2025-04-02
Pplware
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's "Navigation in Autopilot" system is an AI system providing advanced driver assistance with semi-autonomous capabilities. The accident involved the use of this AI system, which detected an obstacle and issued warnings but ultimately failed to prevent a fatal collision. The harm is direct and severe, involving loss of life and property damage. The event meets the criteria for an AI Incident because the AI system's use directly led to injury and death, fulfilling the harm to persons criterion. The investigation and public concern further confirm the AI system's pivotal role in the incident.

Xiaomi's first car had a fatal accident: here is the brand's response

2025-04-02
4gnews
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's autonomous driving system qualifies as an AI system because it performs real-time decision-making and control of the vehicle. The accident caused direct harm to human life, fulfilling the criteria for an AI Incident. The AI system's involvement is explicit, as the autopilot mode was active and the system attempted to respond to obstacles but could not prevent the fatal crash. Therefore, this event is classified as an AI Incident due to the direct causal link between the AI system's use and the resulting fatalities.

CATL Distances Itself From Xiaomi Crash

2025-04-02
caixinglobal.com
Why's our monitor labelling this an incident or hazard?
The assisted driving mode qualifies as an AI system because it involves autonomous or semi-autonomous vehicle control. The crash resulted in fatalities, which is harm to persons. However, the article does not provide evidence that the AI system malfunctioned or caused the crash; it only states the crash occurred while in assisted driving mode. CATL's statement distances its battery from the incident, suggesting no direct link to the battery's role. Without clear indication that the AI system's development, use, or malfunction led to the harm, this event does not meet the criteria for an AI Incident. It also does not present a plausible future harm scenario beyond the incident itself, nor does it provide updates or governance responses. Therefore, it is best classified as Complementary Information, providing context and clarification about the incident and involved parties.

China: A giant faces criticism after a tragic accident

2025-04-02
La Nouvelle Tribune
Why's our monitor labelling this an incident or hazard?
The vehicle's autonomous driving system is explicitly mentioned and was actively controlling the vehicle at the time of the accident. The AI system's malfunction or failure to prevent the collision directly led to fatal injuries and property damage (vehicle fire). This fits the definition of an AI Incident because the AI system's use and malfunction directly caused harm to people. The involvement of the AI system is clear and central to the event, and the harm is realized (fatalities).

Xiaomi EV has fatal accident in China: SU7 was on autopilot

2025-04-02
Business Insider
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's 'Navigate on Autopilot' mode is an AI system that autonomously controls driving functions. The accident involved the use of this AI system, which failed to prevent a fatal collision despite recognizing obstacles and issuing warnings. The deaths of three individuals constitute injury or harm to persons, fulfilling the criteria for an AI Incident. The AI system's malfunction or insufficient performance directly led to this harm, making this event an AI Incident rather than a hazard or complementary information.

Assisted driving tech in China in crosshairs after fatal Xiaomi crash

2025-04-02
Daily Sabah
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Xiaomi's assisted driving AI system immediately before the fatal crash. The AI system's malfunction or failure to adequately manage the driving situation on a highway with roadworks directly led to the deaths of three college students. The harm is realized and significant, involving injury and loss of life. The AI system's role is pivotal as it was controlling the vehicle at the time, and questions about its performance and safety are central to the incident. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

On autopilot: Xiaomi EV has fatal accident in China

2025-04-02
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Navigate on Autopilot) in the vehicle at the time of the accident. The AI system detected an obstacle and applied braking, but the collision still occurred, leading to fatalities. This indicates a malfunction or failure in the AI system's operation contributing directly to harm (death of three people). The involvement of the AI system in the development, use, or malfunction leading to injury meets the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm caused by the AI system's use.

The vehicle hits a concrete barrier on the highway and catches fire: unable to open the doors to escape, three students die in an autonomous electric car

2025-04-02
lindependant.fr
Why's our monitor labelling this an incident or hazard?
The vehicle was operating in an AI-driven autonomous mode at the time of the accident, and the system's detection and warning preceded the crash. The fatal injuries and fire are direct harms linked to the AI system's operation or failure. This meets the criteria for an AI Incident because the AI system's use directly led to harm to persons. The investigation and public concern further confirm the AI system's pivotal role in the incident.

Xiaomi EV Crash In China Raises Safety Concerns Over Assisted Driving Mode

2025-04-02
ndtv.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Xiaomi's Navigate On Autopilot assisted driving mode) that was active immediately before and during the crash. The AI system detected an obstacle and handed control to the driver, but the vehicle still collided with a barrier at high speed, causing fatalities and property damage. This constitutes direct harm to persons and property caused by the AI system's use or malfunction. Therefore, this qualifies as an AI Incident under the OECD framework.

Fatal Xiaomi crash raises questions about assisted driving tech in China

2025-04-02
newsR
Why's our monitor labelling this an incident or hazard?
The vehicle was in autonomous mode, indicating the involvement of an AI system controlling the car's driving functions. The crash caused fatalities, which is a direct harm to persons. Therefore, this qualifies as an AI Incident because the AI system's use directly led to injury and death. The ongoing police investigation further supports the seriousness of the incident.

Fatal accident raises doubts about Xiaomi's autonomous driving technology

2025-04-02
Rádio Renascença
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the autonomous driving technology in the Xiaomi vehicle. The use of this AI system directly led to a fatal accident causing harm to people, fulfilling the criteria for an AI Incident. The system was in use (Navigation Autopilot mode) and despite issuing a warning and returning control to the driver, the crash occurred. This is a clear case of harm caused directly or indirectly by the AI system's use. Therefore, the event qualifies as an AI Incident rather than a hazard or complementary information.

Xiaomi under pressure: Fatal EV accident sends shares down!

2025-04-03
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Navigate on Autopilot') in the electric vehicle at the time of the fatal accident. The AI system's malfunction or failure to prevent the crash directly led to injury and death, fulfilling the criteria for harm to a person. The event involves the use of an AI system and has resulted in realized harm, making it an AI Incident rather than a hazard or complementary information.

Xiaomi crash in China raises questions about autopilot

2025-04-02
electrive.com
Why's our monitor labelling this an incident or hazard?
The Xiaomi electric car was using an AI-based autopilot system (NOA) that was active before the crash. The system switched off and the driver took over, but the vehicle still crashed into a barrier at high speed, resulting in three fatalities. The AI system's malfunction or limitations in handling the roadworks situation contributed directly to the incident. This fits the definition of an AI Incident because the AI system's use directly led to harm to persons. The ongoing police investigation and company statements confirm the AI system's role in the event.

Police investigate fatal accident involving semi-autonomous car in China

2025-04-02
unternehmen-heute.de
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the semi-autonomous driving system) whose malfunction or failure to adequately manage the driving situation directly caused a fatal accident, resulting in injury and death. This fits the definition of an AI Incident, as the AI system's use and malfunction led directly to harm to persons. The investigation against the manufacturer further supports the significance of the AI system's role in the incident.

Xiaomi EV Collision in China Sparks Worries Over Smart Driving Technology

2025-04-04
Thailand Business News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a crash involving Xiaomi's EV with smart driving software, resulting in injuries. Smart driving software qualifies as an AI system due to its autonomous or advanced driver-assistance capabilities. The harm (injuries) directly stems from the use of this AI system, fulfilling the criteria for an AI Incident. The discussion about regulatory implications and safety standards further supports the significance of the AI system's role in the incident.

Did autopilot fail? Xiaomi SU7 crash kills three, founder pledges full cooperation

2025-04-03
HT Auto
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's Autopilot mode is an AI system that was active during the crash. The system's inability to detect certain obstacles and prevent the collision directly contributed to the fatal accident, causing injury and death. This meets the definition of an AI Incident because the AI system's malfunction and use directly led to harm to people. The company's acknowledgment of system limitations and the ongoing investigation further support this classification.

Occupants of a Xiaomi SU7 die in the vehicle's crash - is the autopilot to blame?

2025-04-03
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's autonomous driving system is an AI system as it performs real-time navigation and decision-making. The event involves the use of this AI system, which directly preceded and contributed to the fatal crash. The deaths of the occupants represent injury or harm to persons caused directly or indirectly by the AI system's operation. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to people.

Fatal highway accident sparks smart driving safety debate on social media; expert suggests to reduce reliance

2025-04-03
en.people.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 EV was operating in NOA intelligent assisted driving mode, an AI system that assists driving tasks but is not fully autonomous. The fatal accident and resulting deaths directly involve the AI system's use, fulfilling the criteria for an AI Incident. The discussion about overreliance and misleading marketing further supports the AI system's role in the harm. Although the exact cause is under investigation, the AI system's operation at the time of the accident and the resulting fatalities constitute direct harm. The event is not merely a potential hazard or complementary information but a realized incident involving AI.

Assisted Driving Tech in Focus After Fatal Electric Car Crash in China

2025-04-01
DNyuz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Xiaomi's autonomous driving feature during the crash, indicating the presence of an AI system. The AI system's malfunction or limitation (failure to avoid the collision despite warnings and deceleration) directly led to a fatal accident causing loss of life. This meets the criteria for an AI Incident as the AI system's use directly caused harm to persons. The involvement is not speculative or potential but realized harm. Hence, the classification is AI Incident.

3 killed in Xiaomi EV crash while car was in autonomous mode - VnExpress International

2025-04-02
VnExpress International
Why's our monitor labelling this an incident or hazard?
The vehicle was operating in an AI-assisted driving mode, which qualifies as an AI system. The crash caused fatalities, which is a direct harm to persons. The AI system's failure to prevent the crash or properly manage the handover to the driver is a direct or indirect cause of the harm. The event is not merely a potential hazard or complementary information but a realized incident with serious consequences. Hence, the classification is AI Incident.

Xiaomi self-driving system under fire following fatal SU7 crash

2025-04-02
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The event describes a fatal accident involving an AI system (the autonomous driving feature) that was active at the time of the crash, leading to the deaths of three people. This meets the criteria for an AI Incident because the AI system's use directly or indirectly led to harm to persons. The presence of the AI system is explicit, and the harm is realized, not just potential. The company's cooperation with police and the handing over of system data further confirm the AI system's involvement. Hence, the classification as AI Incident is appropriate.
China: a fatal accident implicates a Xiaomi autonomous car

2025-04-02
Linfo.re
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving system) whose use directly led to fatal injuries, fulfilling the criteria for an AI Incident. The autonomous driving system's malfunction or failure to prevent the crash is central to the harm. The presence of the AI system is explicit, the harm is realized (fatalities), and the AI system's role is pivotal in the incident. Hence, the classification as AI Incident is appropriate.
Accident involving Xiaomi-made EV in China; local media report three dead, shares down 5.5%

2025-04-01
Bloomberg.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (ADAS) in the Xiaomi EV, which was active and issuing warnings before the accident. The accident resulted in three deaths, constituting injury or harm to persons. The AI system's warnings and the driver's response are central to the event, indicating the AI system's role in the chain of events leading to harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly or indirectly led to harm.
Xiaomi EV in fatal accident, crashes into a pillar in driver-assistance mode: 'cooperating with police'

2025-04-01
Newsweek日本版
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-based intelligent driving assistance system (Navigate on Autopilot) during the accident, which directly led to a fatal collision. The system's malfunction or failure to prevent the collision is implicated, and the harm (death) has occurred. This fits the definition of an AI Incident, as the AI system's use directly led to injury or harm to a person. The involvement of AI in the vehicle's operation and the resulting fatality meet the criteria for classification as an AI Incident.
Xiaomi EV in expressway accident, three dead, shares briefly fall more than 5% -- Chinese media (April 1, 2025) - エキサイトニュース

2025-04-01
エキサイトニュース
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an advanced driver assistance system with autonomous driving capabilities (NOA). The system was active during the accident and issued warnings, but the collision still happened, leading to three deaths. This is a direct harm to human life caused by the use and malfunction or failure of the AI system. The presence of the AI system and its role in the accident is clear and central. Hence, this is an AI Incident due to direct harm to persons caused by the AI system's operation.
Xiaomi EV in expressway accident, three dead, shares briefly fall more than 5% -- Chinese media (April 1, 2025) | BIGLOBEニュース

2025-04-01
BIGLOBEニュース
Why's our monitor labelling this an incident or hazard?
The event describes a fatal traffic accident involving Xiaomi's EV equipped with an AI-based advanced driver assistance system operating in an automated driving mode. The AI system's operation and warnings are detailed, and the accident resulted in three deaths, constituting injury or harm to persons. The AI system's role is pivotal as it was controlling the vehicle and issuing warnings prior to the crash. Hence, this is an AI Incident due to direct harm caused during AI system use.
Xiaomi EV expressway accident kills three; vehicle was driving with driver-assistance function engaged

2025-04-01
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the vehicle was using a driver assistance function when the crash occurred. Such functions generally involve AI systems that assist with driving tasks. The accident resulted in the death of three people, which is a direct harm to persons. Since the AI system was in use and likely contributed to the circumstances leading to the crash, this event meets the criteria for an AI Incident due to direct harm caused during AI system use.
EV crash involving major Chinese IT firm causes stir: three university students dead, and some question the manufacturer's responsibility (April 1, 2025) | BIGLOBEニュース

2025-04-01
BIGLOBEニュース
Why's our monitor labelling this an incident or hazard?
The vehicle was operating in autonomous driving mode, which involves an AI system controlling the vehicle. The accident directly caused the deaths of three people, which is injury or harm to persons. The AI system's delayed alert to switch to manual control and possible failure in safety mechanisms (door opening during fire) indicate malfunction or inadequate performance of the AI system. Therefore, this event is an AI Incident due to direct harm caused by the AI system's use and possible malfunction.
Xiaomi EV in fatal accident, crashes into a pillar in driver-assistance mode... 'cooperating with police'

2025-04-02
Newsweek日本版
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an intelligent driving assistance mode with AI components (LiDAR-based detection and autopilot navigation). The fatal collision occurred while the vehicle was operating in this mode, indicating the AI system's involvement in the incident. The harm is a death resulting from the accident, fulfilling the criteria for injury or harm to a person. Xiaomi's cooperation with the police and submission of system data further confirms the AI system's role in the event. Hence, this is classified as an AI Incident.
EV manufactured by China's Xiaomi crashes; three university students dead | 日テレNEWS NNN

2025-04-02
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
The Xiaomi EV's AI-based driving assistance system detected roadwork and attempted to warn the driver and reduce speed automatically but failed to prevent the collision with the central divider. This failure directly contributed to the fatal accident resulting in three deaths. The AI system's malfunction or inability to control the vehicle in this critical situation is a direct cause of harm. The event meets the criteria for an AI Incident because the AI system's use and malfunction have directly led to injury and death, which is harm to persons as defined. The involvement of the AI system is explicit and central to the incident, and the harm is realized, not just potential.
Xiaomi EV accident that killed three female university students draws strong interest in CEO Lei Jun's response -- China (April 3, 2025) - エキサイトニュース

2025-04-03
エキサイトニュース
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the ADAS NOA) actively engaged during the accident. The system issued warnings about driver inattentiveness and obstacle detection, but the collision still occurred, resulting in fatalities. This shows the AI system's involvement in the incident, either through malfunction, insufficient intervention, or overreliance by the driver. The harm (death of three persons) is direct and significant. The CEO's response and public discussion are complementary but do not change the classification. Hence, this is an AI Incident as the AI system's use directly led to fatal harm.
Electric vehicle in driver-assistance mode crashes into a guardrail on a Chinese expressway and bursts into flames | カラパイア

2025-04-04
カラパイア
Why's our monitor labelling this an incident or hazard?
The vehicle's AI-based driver assistance system (NOA) was actively controlling the car and detecting obstacles but failed to prevent the collision. The driver attempted to take manual control but was unable to avoid the crash. The AI system's performance and possible malfunction (e.g., delayed obstacle detection, lack of higher-grade sensors) directly contributed to the incident. The resulting crash and fire caused fatalities, and the suspected door lock malfunction further exacerbated harm. This fits the definition of an AI Incident because the AI system's use and possible malfunction directly led to injury and death (harm to persons).
Xiaomi falls 5.5% on the stock market after a fatal accident involving its SU7 car

2025-04-01
Business Insider España
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the activation of an advanced driver assistance system (an AI system) in the Xiaomi SU7 electric car shortly before a fatal accident that caused three deaths. The AI system's malfunction or failure to prevent the accident directly led to injury and loss of life, fulfilling the criteria for an AI Incident. The harm is realized and significant, and the AI system's role is pivotal in the chain of events leading to the incident.
Xiaomi's electric sedan suffers its first fatal accident: impact on the company and the market | RPP Noticias

2025-04-01
RPP noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (the driving assistance system "Navigate on Autopilot") that was active at the time of the accident. The system's operation and the driver's interaction with it are central to the incident. The accident caused direct harm to human life (three fatalities), fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a discussion of AI technology but a concrete case where AI system use has led to fatal harm. Therefore, it qualifies as an AI Incident rather than an AI Hazard or Complementary Information.
Xiaomi announces it is cooperating with the police after a fatal accident and fire involving its electric car

2025-04-01
Airbag
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOA assisted driving system) whose use directly led to a fatal accident causing harm to people (three deaths). The AI system's malfunction or failure to prevent the collision, despite detecting an obstacle and requesting driver takeover, is a contributing factor. The harm is realized and significant (fatal injuries). Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to injury and death. The event is not merely a hazard or complementary information, but a clear incident involving AI-related harm.
Xiaomi is in trouble over a fatal accident involving its SU7 in autopilot mode

2025-04-03
Revista Merca2.0
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's autopilot system is an AI system involved in autonomous driving. The accident occurred while the system was active, and despite alerts and partial intervention by the driver, the vehicle crashed fatally. This directly caused injury and death, which is a clear harm to persons. The AI system's failure or malfunction is a direct contributing factor to the incident. Therefore, this qualifies as an AI Incident under the OECD framework.

Redmi Turbo 4 Pro Launch Reportedly Delayed After Fatal Xiaomi SU7 Crash ~ My Mobile India

2025-04-07
My Mobile India
Why's our monitor labelling this an incident or hazard?
The event describes a fatal accident involving an AI system (autonomous driving technology with automatic emergency braking) whose malfunction or limitation directly contributed to harm (fatalities). The AI system's failure to recognize an obstacle and prevent the crash is a direct cause of harm to people, fitting the definition of an AI Incident. The investigation and public response are complementary information but do not change the primary classification of the event as an AI Incident.

China calls for smart driving vigilance after fatal Xiaomi crash

2025-04-08
https://www.bangkokpost.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (advanced driver-assistance system with Navigate on Autopilot) whose use directly led to a fatal crash causing injury and death, which fits the definition of an AI Incident. The harm is realized (fatalities), and the AI system's malfunction or failure to handle roadwork obstacles is a contributing factor. The warnings and investigation are responses to this incident but do not change the classification. Therefore, this is an AI Incident.

China calls for smart driving vigilance after fatal Xiaomi crash

2025-04-08
The Star
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 electric vehicle was operating with its advanced driver-assistance AI system engaged when it crashed into concrete fencing and caught fire, killing three occupants. The AI system's failure to safely navigate roadworks and obstacles directly caused the fatal accident. The involvement of an AI system in causing injury and death meets the criteria for an AI Incident under the OECD framework. The warnings from authorities and social media attention further confirm the AI system's pivotal role in the harm.

Xiaomi EV Crash: System Error Blamed - News Directory 3

2025-04-06
News Directory 3
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating in an AI-enabled autonomous driving mode when it failed to prevent a collision with a barrier, resulting in a fatal accident with three deaths. The AI system's failure to properly apply brakes and the transition from autonomous to manual mode with insufficient driver response are central to the harm. The vehicle's safety features, including emergency door releases, also failed to prevent harm. This meets the criteria for an AI Incident as the AI system's malfunction and use directly led to injury and death, fulfilling harm to persons under the definitions.

Five key facts about a dramatic Xiaomi EV crash that kills three · TechNode

2025-04-08
TechNode
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as an assisted driving feature with autonomous emergency braking capabilities. The system was in use and failed to prevent a fatal crash, directly leading to injury and death, which qualifies as harm to persons. The driver's distraction and the system's limitations in detecting certain obstacles contributed to the incident. Therefore, this is an AI Incident because the AI system's use and malfunction directly led to significant harm (fatalities).

China: Fatal accident involving Xiaomi SU7 electric vehicle claims three lives, raising concerns over autonomous driving technology and EV safety - Business & Human Rights Resource Centre

2025-04-08
Business & Human Rights Resource Centre
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 electric vehicle involved in the fatal crash includes autonomous driving technology and AI-based safety features like AEB. The failure of the AI system to detect the water barrier obstacle and the unclear operation of emergency door unlocking contributed to the harm and fatalities. The event involves direct harm to persons caused or contributed to by the AI system's malfunction or limitations. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Xiaomi reportedly delays launch plan for its first SUV after fatal crash - 自由財經

2025-04-23
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically the autonomous driving technology in the Xiaomi electric SUV. The fatal accident directly caused harm to people, fulfilling the criteria for an AI Incident. The AI system's malfunction or failure (e.g., inability to prevent the crash or ensure occupant safety) is a contributing factor to the harm. Therefore, this is classified as an AI Incident.
After the fatal SU7 accident, Xiaomi reportedly delays the launch of its first SUV

2025-04-23
早报
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the intelligent assisted driving system active during the fatal crash. The crash caused direct injury and death to people, fulfilling the harm criteria for an AI Incident. The AI system's malfunction or failure to adequately assist driving directly led to the harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Xiaomi reportedly delays launch of its first electric SUV; annual investor conference postponed to June

2025-04-23
ET Net
Why's our monitor labelling this an incident or hazard?
The electric SUV likely incorporates AI systems for driving assistance or autonomous features, which are common in modern electric vehicles. The fatal accident involving the vehicle suggests that the AI system's use or malfunction has directly or indirectly led to harm (death), fitting the definition of an AI Incident. The article explicitly links the delay to the fatal accident, indicating realized harm rather than just potential risk.
Xiaomi denies YU7 launch delay: 'still June to July this year, as Lei Jun announced'; shares continue to rise more than 7%

2025-04-23
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The intelligent assisted driving system is an AI system as it performs autonomous or semi-autonomous driving functions. The fatal crash involving the SU7 vehicle using this system directly caused harm to people (three deaths). Therefore, this event meets the criteria for an AI Incident due to the AI system's malfunction leading to injury and death. The article's denial of YU7 launch delay and stock price movement are complementary details but do not change the classification. The regulatory response mentioned is also complementary information but secondary to the incident. Hence, the primary classification is AI Incident.
Expert suggests self-driving cars should also have to pass a driving test

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article does not describe any actual harm or incident caused by AI systems, nor does it report a specific event where AI malfunction or misuse led to harm. Instead, it focuses on proposed safety measures and regulatory standards for autonomous driving AI systems to prevent future harm. Therefore, it is best classified as Complementary Information, providing context and governance-related recommendations rather than reporting an AI Incident or Hazard.
Beijing enacts autonomous vehicle regulations

2025-04-03
爱范儿
Why's our monitor labelling this an incident or hazard?
The article discusses the enactment of a legal framework for autonomous vehicles, which are AI systems capable of operating vehicles without human intervention. However, it does not report any harm, malfunction, or misuse of these AI systems, nor does it describe any potential imminent harm. Instead, it provides information about governance and regulatory measures, which is complementary information enhancing understanding of AI ecosystem developments.
Warnings to use smart driving cautiously posted along the Xiaomi SU7 crash section as families await investigation results

2025-04-05
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the intelligent driving feature of the Xiaomi SU7 vehicle. The use of this AI system directly led to a fatal accident causing injury and death, which fits the definition of an AI Incident due to harm to persons. The article discusses the accident's consequences and ongoing investigation, confirming realized harm rather than potential harm. Therefore, this is classified as an AI Incident.
Expressways in many regions post holiday warnings to 'use smart driving cautiously' as warnings are stepped up amid heavy traffic

2025-04-06
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (intelligent driving assistance) and their use, but no direct or indirect harm has occurred. The warnings are intended to prevent accidents or issues, indicating a plausible risk but no realized incident. Therefore, this qualifies as an AI Hazard because the use of AI systems could plausibly lead to harm if not used cautiously, but no harm has yet been reported.
Lu Weibing responds to a question about Lei Jun's recent situation with a heart emoji

2025-04-06
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent assisted driving) that was active during the accident. The AI system detected obstacles and attempted to reduce speed, but the collision still occurred, indicating a malfunction or failure in the AI system's operation. This has led to an accident, which constitutes harm to persons or property. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system in an event causing harm and ongoing investigation.
Officials respond to 'owner misusing assisted driving' incident: the vehicle still requires manual control

2025-04-05
中关村在线
Why's our monitor labelling this an incident or hazard?
The assisted driving system qualifies as an AI system under the definition, as it provides automated driving assistance requiring human oversight. The driver's misuse (use of the system as full autonomy) represents use-related risk. Although no actual harm is reported, the event clearly indicates a plausible risk of harm (e.g., accidents) due to misuse of the AI system. Therefore, this event fits the definition of an AI Hazard, as the misuse of the AI system could plausibly lead to injury or harm to persons. The official response and discussion of regulatory and safety issues provide context but do not change the classification.
Complex expressway conditions: a road section in Anhui warns against using intelligent driving assistance

2025-04-05
中关村在线
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of intelligent driving assistance (Level 2 automation), which are explicitly mentioned. However, the event is about warnings and advisories to drivers to be cautious or avoid using these systems in complex road conditions. There is no report of any actual harm, accident, or malfunction caused by the AI system. The article also references official guidance reinforcing the limitations of current AI driving assistance. This fits the definition of Complementary Information, as it provides supporting context and safety recommendations related to AI system use, without describing a new AI Incident or AI Hazard.
Traffic-chasing marketing must not make the public lose its judgment about life safety

2025-04-04
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA intelligent assisted driving system, which is an AI-based partial autonomous driving technology. The system was in use at the time of the accident and its malfunction or limitations contributed indirectly to the fatal crash. The harm is direct and severe: loss of human life. The article also discusses the broader context of misleading marketing leading to overreliance on AI driving assistance, which is a contributing factor to the incident. Therefore, this qualifies as an AI Incident because the AI system's use directly led to injury and death, fulfilling the criteria for harm to persons under the AI Incident definition.
Xiaomi SU7 crash sparks heated debate: is intelligent driving really safe?

2025-04-04
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, namely the intelligent driving assistance system (NOA, AEB, L2+ autonomous driving features) that uses AI for perception, decision-making, and control. The fatal accident is directly linked to the use and possible malfunction or misuse of these AI systems, causing harm to a person (death). The article also discusses systemic issues such as misleading marketing and lack of user awareness, which contribute indirectly to harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and malfunction have directly or indirectly led to injury or harm to a person.
Netizens say Anhui expressways warn drivers to use assisted driving cautiously; Deshang Expressway Chizhou section confirms the notices were issued

2025-04-05
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (intelligent driving assistance) and their use on highways. However, the content focuses on warnings and advisories to drivers to avoid using these AI systems in complex or construction-affected road segments. There is no indication that any harm has occurred or that the AI systems malfunctioned or caused incidents. The warnings suggest a plausible risk of harm if AI-assisted driving is used in these conditions, but no actual incident is reported. Therefore, this qualifies as an AI Hazard, as it plausibly could lead to harm if ignored, but no harm has yet materialized.
Intelligent driving amid the storm: how automakers' 'distorted' marketing breeds cognitive confusion and bias

2025-04-03
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (L2-level intelligent driving assistance system) that was active during a fatal crash causing the death of three people. The AI system detected obstacles but failed to prevent the collision, and the driver had to take over within a very short time frame. The system's emergency braking did not perform adequately, contributing to the accident. The article discusses the AI system's malfunction and the resulting harm (fatalities), which meets the criteria for an AI Incident. The discussion of misleading marketing and regulatory gaps further supports the classification as an incident rather than a hazard or complementary information, as the harm has already occurred and is directly linked to the AI system's use and malfunction.
A cautionary tale from the Xiaomi accident: who is the accomplice that devours lives?

2025-04-03
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Xiaomi's intelligent driving system—which is used in the vehicle. The fatal accident is linked to the AI system's limitations and misleading marketing that led users to overtrust the system. The AI system's malfunction or insufficient capability to detect obstacles and the overreliance by the driver directly or indirectly caused harm (death). This fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to injury or harm to a person. The article also discusses systemic issues in AI system marketing and safety education, reinforcing the incident classification rather than a mere hazard or complementary information.
After the Xiaomi SU7 fire, Anhui expressway signs advise: turn off smart driving | use smart driving cautiously | warning signs | 大纪元

2025-04-05
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article centers on the response to a prior AI-related incident (the Xiaomi SU7 explosion linked to intelligent driving systems) by installing warning signs and advisories on highways. This is a governance and societal response to an AI Incident that presumably occurred earlier. The article itself does not report a new AI Incident or AI Hazard but provides complementary information about measures taken to mitigate risks and raise awareness. Therefore, it fits the definition of Complementary Information, as it updates on responses to a past AI Incident rather than describing a new incident or hazard.
Xiaomi's original sin: Lei Jun is too famous - TMTPost official website

2025-04-04
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Xiaomi's NOA intelligent driving system at the time of the crash, which was operating at high speed and failed to prevent the collision with road barriers. The AI system detected obstacles and initiated deceleration but could not avoid the crash. The incident resulted in three fatalities, which is a direct harm to human life caused by the AI system's use and its limitations or malfunction. The involvement of the AI system in the development, use, and malfunction phases is clear. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
New road signs on Anhui expressways warn: complex road conditions, do not use intelligent driving assistance

2025-04-04
驱动之家
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of intelligent driver assistance systems used on highways. While no actual accident or harm has been reported, the warnings on road signs indicate a credible risk that misuse or overreliance on these AI systems could lead to safety incidents. Therefore, this situation represents an AI Hazard, as the AI system's use could plausibly lead to harm, but no harm has yet occurred or been reported.
Blogger reflects on the Xiaomi SU7 fire: novice drivers should never use smart driving; you may not even manage to hit the brakes

2025-04-04
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system—intelligent driving features in vehicles—that has been used and has indirectly contributed to accidents causing harm to people. The article explicitly mentions the limitations of current AI driving assistance (L2-L3), the overtrust by users, and misleading marketing that raises user expectations beyond the system's actual capabilities. These factors have led to real harm (accidents and potential injuries). Hence, it meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm to persons.
Anhui expressway operator responds to revised smart-driving warning wording: no notice received; possibly adjusted by the units responsible for that road section

2025-04-05
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of intelligent assisted driving (L0 to L2 level) and their use on highways. However, there is no indication that any harm has occurred or that the AI system malfunctioned. The warnings and clarifications are precautionary and informational, emphasizing safe use rather than reporting an incident or hazard. Therefore, this is complementary information providing context and updates about AI system use and safety messaging, not an incident or hazard.
Expert: roadwork information should be pushed via map services to every smart-driving vehicle

2025-04-03
驱动之家
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous driving (smart driving systems) and their interaction with traffic management data. However, it only suggests improvements and standards to mitigate risks and ensure safety. There is no indication of any realized harm, malfunction, or misuse of AI systems. Therefore, it does not describe an AI Incident or AI Hazard but rather provides complementary information about governance and safety practices related to AI in autonomous vehicles.
Anhui expressway operator responds to the warnings urging cautious use of assisted driving: safety first

2025-04-06
驱动之家
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of intelligent or assisted driving systems. However, it only reports on warnings and recommendations to drivers to use these systems cautiously due to potential safety risks. There is no indication that any harm has occurred or that an incident has taken place. The event is about potential risks and safety advice, which fits the definition of an AI Hazard, as the use or misuse of these AI systems could plausibly lead to harm, but no harm has yet been reported.

Ministry of Emergency Management comments on the Xiaomi SU7 accident: smart-driving vehicles currently on the market are at most L2

2025-04-04
MyDrivers
Why's our monitor labelling this an incident or hazard?
The article references AI systems in the form of intelligent driving assistance (L2 level) and discusses risks related to their use, including driver overreliance and potential safety hazards. However, it does not describe a new AI incident or hazard itself but rather a government communication responding to a prior accident and emphasizing caution and proper use. Therefore, it fits the definition of Complementary Information as it provides societal and governance response and context to AI-related safety issues without reporting a new harm or plausible future harm event.

Man plays with his phone in both hands after engaging assisted driving; stopped and warned by traffic police

2025-04-06
MyDrivers
Why's our monitor labelling this an incident or hazard?
The event describes a driver misusing an AI-assisted driving system by not maintaining proper control and attention, which is a misuse of the AI system's intended use. Although no actual harm occurred, the situation creates a credible risk of traffic accidents or injury. The AI system's involvement is clear (assisted driving), and the misuse could plausibly lead to harm, fitting the definition of an AI Hazard rather than an Incident. The police warning and safety education further support that harm was averted but plausible.

SAIC Volkswagen's Fu Qiang: safety is not a luxury, it is the bottom line

2025-04-03
MyDrivers
Why's our monitor labelling this an incident or hazard?
The article discusses the development and cautious deployment of an AI-based intelligent driving system (L2+ autonomous driving) by an automaker. While it involves AI system development and use, there is no indication of any harm or incident caused by the AI system. The focus is on safety measures and planned future deployment with safety certification. Therefore, this is not an AI Incident or AI Hazard but rather complementary information about AI system development and governance in the automotive sector.

Treating assisted driving as fully autonomous! Driver takes both hands off the wheel and even leaves the seat; netizens call for heavy penalties

2025-04-04
MyDrivers
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (L2 level assisted driving system) and describes misuse of this system by a driver, leading to dangerous behavior that threatens public safety. The AI system's involvement is in its use, and the misuse has directly led to a significant risk of injury or harm to people, fulfilling the criteria for an AI Incident. The authorities' response and public discussion further confirm the recognition of harm potential. Therefore, this event is classified as an AI Incident.

Xiaomi reportedly beta-testing a "Safety Score" feature that evaluates driving behavior to reduce accident risk

2025-04-02
MyDrivers
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system that analyzes driving data to generate safety scores and recommendations. However, it does not report any actual harm or incidents caused by the AI system. Instead, it describes the development and use of AI aimed at improving safety and reducing risk. There is no indication of malfunction or misuse leading to harm, nor any credible risk of harm. Therefore, this event is best classified as Complementary Information, as it provides context on AI deployment in automotive safety without reporting an incident or hazard.

Eye of the storm | The safety deficit behind the smart-driving frenzy: who pays for automakers' "technology myth"?

2025-04-05
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving assistance systems including AEB and NOA) whose malfunction and misleading marketing led to a fatal accident, causing harm to human life. The AI system's failure to detect obstacles and the overstatement of its capabilities directly contributed to the incident. This fits the definition of an AI Incident because the AI system's use and malfunction directly led to injury and harm to people. The article also discusses systemic issues in the industry and regulatory recommendations, but the core event is a realized harm caused by AI system malfunction and misuse.

Who is liable in smart-driving accidents? In one precedent the owner using assisted driving bore full responsibility; insiders say the concepts of autonomous and assisted driving are blurred

2025-04-03
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, namely the L2-level intelligent driving (assisted driving) system, which was active at the time of the accident. The system's limitations and the driver's failure to take timely control contributed to the fatal crash, causing harm to people (three deaths). The article also references legal cases where drivers were held responsible for accidents while using such AI-assisted systems, confirming the direct link between AI system use and harm. The discussion of regulatory frameworks and the distinction between assisted and autonomous driving further supports the classification. Since actual harm occurred and the AI system's involvement was a contributing factor, this is an AI Incident rather than a hazard or complementary information.

Anhui expressway warns drivers to use smart driving cautiously; customer service at several automakers responds

2025-04-05
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article mentions intelligent driving assistance systems, which are AI systems, and warnings to use them cautiously on highways. No actual injury, accident, or harm is reported, only precautionary signage and advice. This fits the definition of an AI Hazard, as the use or misuse of these AI systems could plausibly lead to harm, but no incident has occurred yet.

Use smart driving with caution! Expressways in many regions display warning slogans

2025-04-05
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of intelligent driving assistance (assisted driving) that have been linked to a fatal accident, indicating harm to persons. The warnings and signage reflect a response to the realized harm and the potential for further incidents if such systems are misused or overrelied upon. Since the accident has already occurred causing fatalities, this qualifies as an AI Incident due to the direct or indirect role of the AI system in the harm. The article focuses on the incident and the resulting safety warnings rather than just general information or future risks, so it is not merely complementary information or a hazard.

A 3-second line between life and death: the "death gray zone" of human-machine co-driving behind the Xiaomi SU7 accident

2025-04-04
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the Xiaomi SU7's intelligent driving system, including NOA) that was in use at the time of the accident. The AI system's malfunction or design limitations (short warning and takeover time) directly contributed to the fatal crash, causing harm to the driver and vehicle. The human-machine interaction failure and the AI system's inability to provide sufficient time for safe takeover are central to the incident. This meets the definition of an AI Incident because the AI system's use directly led to injury and harm to a person. The article does not merely discuss potential risks or general AI governance but reports on a concrete accident with realized harm linked to AI system use.

They should not have to bear the tragedy of the Xiaomi SU7 crash alone

2025-04-06
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (intelligent driving assistance, L2 level) in the Xiaomi SU7 vehicle. The accident resulted in three deaths, which is a direct harm to persons. The AI system's limitations and the timing of human takeover requests are described as contributing factors to the accident. Although the article does not assign legal responsibility, it clearly links the AI system's use and its limitations to the fatal incident. Therefore, this qualifies as an AI Incident due to indirect causation of harm through AI system use and malfunction.

Don't let the Xiaomi SU7 fire burn down the future of smart driving

2025-04-04
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 intelligent driving system was involved in a high-speed collision and subsequent fire causing fatalities, which is a direct harm to human life. The article explicitly connects the accident to the intelligent driving technology, discusses the public's loss of trust, and the need for improvements in AI algorithms and hardware to prevent such tragedies. This fits the definition of an AI Incident where the AI system's use or malfunction has directly led to injury or harm to people. The article also references similar incidents with other autonomous driving systems, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

After the Xiaomi SU7 accident, imperfect smart driving remains in the eye of controversy

2025-04-06
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (intelligent driving assistance at L2 level) and describes an accident caused by the driver's misuse of this system, resulting in harm (vehicle damage and risk to safety). The AI system's malfunction is not indicated; rather, the harm stems from the driver's overreliance and misuse of the AI system. This fits the definition of an AI Incident because the AI system's use directly and indirectly led to harm through the driver's behavior. The article also discusses broader societal and regulatory responses, but the primary focus is the incident itself.

Lifting smart driving's seductive veil: an immature technology and a feverish business

2025-04-04
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the intelligent driving system NOA) that was active and failed to prevent a fatal collision, directly leading to the deaths of three individuals. The malfunction of the AI system in timely hazard detection and response, combined with safety system failures (electronic door locks), caused injury and death. This meets the definition of an AI Incident as the AI system's use and malfunction directly led to harm to persons. The article also discusses systemic issues in AI safety and marketing, reinforcing the incident classification rather than a mere hazard or complementary information.

"Smart driving" is not autonomous driving; neither drivers nor automakers can be vague about it

2025-04-03
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems in the form of assisted driving technologies (L2 level) that provide partial automation but require driver attention. The misuse of these systems, such as drivers sleeping or removing hands from the wheel, has directly led to dangerous situations and potential harm, fulfilling the criteria for an AI Incident. The harm is indirect but real, as the AI system's limitations and the misunderstanding of its capabilities contribute to unsafe behavior and increased risk of accidents. The article calls for clearer communication and responsibility, underscoring the incident nature of the problem rather than a mere hazard or complementary information.

New reminder on Anhui expressway: road conditions are complex, do not use intelligent assisted driving!

2025-04-04
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions intelligent driving assistance systems (AI systems) and their cautious use due to complex road conditions, indicating AI system involvement. The serious accident causing fatalities directly relates to the use or malfunction of such an AI system, fulfilling the criteria for an AI Incident as the AI system's use has directly or indirectly led to harm (death of persons). The ongoing investigation and family interactions further support the incident's significance. Therefore, this is classified as an AI Incident.

Revisiting the SU7 accident section: electronic signs warn "use smart driving with caution"

2025-04-06
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the intelligent driving assistance (NOA) in the Xiaomi SU7 vehicle. The system's use and malfunction (or limitations) directly led to a fatal accident causing harm to persons (three deaths). The article provides detailed evidence of the AI system's warnings, driver reactions, and the short time frame before the collision, indicating the AI system's pivotal role in the incident. The harm is realized and significant, meeting the criteria for an AI Incident. The article also discusses broader implications and responses, but the primary focus is the fatal accident caused by the AI system's involvement.

1 Comment

2025-04-03
guancha.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 accident involves the use of intelligent driving (AI system) technology, and the article implies that the accident is a direct consequence of premature reliance on AI for driving tasks. This indicates harm to people (consumers) due to the AI system's malfunction or misuse. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm, and the article calls for a reassessment of the industry's approach to AI in smart driving.

Signs suddenly appear on expressways in Anhui, Jiangsu, and elsewhere warning "use assisted driving with caution"... responses have arrived!

2025-04-05
Sina Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (assisted driving systems) and their use on highways. However, the article only reports warnings and reminders to drivers to be cautious or avoid using these systems in certain conditions. There is no indication that any harm has occurred due to the AI systems, nor that any incident has taken place. The warnings are preventive, aiming to reduce potential risks. Therefore, this qualifies as an AI Hazard, since the use of AI-assisted driving systems could plausibly lead to harm in complex or construction road conditions, but no harm has been reported yet.

[Gallery] White Paper on the Commercial Development of Intelligent Driving in China (2025)

2025-04-03
Autohome (Autohome.com.cn)
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in the form of intelligent driving technologies (L2+, L4 autonomous driving, vehicle-road-cloud collaboration) and their commercial and societal impacts. However, it does not report any direct or indirect harm caused by these AI systems, nor does it describe any event where AI malfunction or misuse led to injury, rights violations, or other harms. It also does not present a credible risk of future harm from these systems beyond general challenges and uncertainties typical of emerging technologies. Instead, it provides detailed analysis, user survey data, policy recommendations, and industry trends, which fit the definition of Complementary Information as it enhances understanding of AI developments and their ecosystem without reporting new incidents or hazards.

Anhui expressway reminds drivers to use assisted driving with caution: safety first

2025-04-06
China.com Tech
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of intelligent assisted driving features. However, the event is about official warnings and advisories to drivers to use these systems cautiously or avoid them in certain conditions to ensure safety. No actual harm or incident has occurred, and the warnings are preventive. Therefore, this qualifies as an AI Hazard because the use or malfunction of the AI system could plausibly lead to harm, but no harm has yet been reported or confirmed. It is not Complementary Information because the main focus is on the potential risk and caution, not on responses to a past incident or ecosystem updates.

Smart driving split three ways! The innovation contest among Huawei, Momenta, and automakers with in-house systems

2025-04-03
China.com Tech
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and deployment of AI-based intelligent driving technologies by Huawei, Momenta, and self-developed car companies. It highlights cooperation models, technical capabilities, and market strategies but does not describe any realized harm or direct/indirect incidents caused by AI systems. There is also no indication of plausible future harm or credible risk from these AI systems as presented. Therefore, the content is best classified as Complementary Information, providing context and updates on AI systems and their ecosystem without reporting an AI Incident or AI Hazard.

Users should view "intelligent driving" rationally: the steering wheel is in their own hands

2025-04-03
China.com Tech
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems involved in intelligent driving (sensor and algorithm-based perception and decision-making) that have malfunctioned or been misused, causing accidents or near-accidents. These incidents have resulted in harm or posed credible risks to human safety, fulfilling the criteria for an AI Incident. The discussion of specific accidents and expert warnings about system immaturity and high failure rates confirms direct or indirect harm linked to AI system use. Hence, this is not merely a hazard or complementary information but an AI Incident due to realized or ongoing harm.

Woman crashes newly delivered Xiaomi car into a guardrail; smart-driving safety sparks heated debate

2025-04-04
China.com Tech
Why's our monitor labelling this an incident or hazard?
The Xiaomi vehicle was operating in an AI-assisted driving mode (NOA), which is an AI system capable of autonomous control including acceleration, deceleration, and lane changes. The accident occurred while the AI system was engaged, and the driver took control only shortly before the collision. The AI system's operation and the timing of driver intervention are directly linked to the fatal crash, fulfilling the criteria for an AI Incident as the AI system's use directly led to harm (fatal injuries). The event involves an AI system, the harm is realized (fatalities), and the AI system's malfunction or limitations in this context contributed to the incident.

Behind the Xiaomi accident: more smart-driving models waiting to hit the road

2025-04-03
Hexun
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as NOA intelligent driving, an advanced driver-assistance system that uses AI to control vehicle functions. The accident caused direct harm to human life (three deaths), fulfilling the criteria for an AI Incident. The article details how the AI system was active during the accident and how its limitations and overreliance contributed to the harm. Although driver responsibility is emphasized, the AI system's role in the incident is pivotal. Therefore, this is not merely a hazard or complementary information but a clear AI Incident.

Expert suggests autonomous cars should also pass a "driving test": build virtual simulation test grounds and real road-test sites covering night, heavy rain, and other scenarios

2025-04-03
Hexun
Why's our monitor labelling this an incident or hazard?
The article does not report any actual harm or incident caused by AI systems, nor does it describe a specific event where an AI system malfunctioned or caused injury or rights violations. Instead, it focuses on expert advice and proposals for safety testing and regulatory standards for autonomous vehicles equipped with AI driving systems. This is a societal/governance response to potential AI risks, aiming to improve safety and prevent future incidents. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

New reminder on Anhui expressway: road conditions are complex, do not use intelligent assisted driving!

2025-04-04
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The intelligent driving assistance system qualifies as an AI system due to its autonomous or semi-autonomous driving capabilities. The accident causing three deaths is a direct harm linked to the use of this AI system, fulfilling the criteria for an AI Incident. The warnings on electronic signs reflect concerns about the AI system's safe use but do not negate the occurrence of harm. Therefore, this event is classified as an AI Incident due to the realized harm caused by the AI system's involvement in the fatal accident.

Commentary: the Xiaomi SU7 accident sounds the alarm; smart driving should not be overhyped

2025-04-03
National Business Daily
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an intelligent driving (autonomous driving assistance) system, whose use has directly led to a fatal accident causing harm to people (three deaths). The article explicitly links the accident to the limitations and misuse of the AI system, making it an AI Incident. It also discusses broader systemic issues and calls for regulatory responses, but the primary focus is on the realized harm caused by the AI system's use and misuse.

Purple Cow hot topic | New expressway signs warn "do not use smart driving"! Traffic police: don't treat smart driving as an excuse to use your phone

2025-04-04
Yangtse.com (Yangtse Evening Post)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of intelligent driving assistance (L2 level) that control acceleration, deceleration, and steering under certain conditions. The misuse of these AI systems by drivers (e.g., using phones while the system is active, leading to accidents) has directly caused harm (traffic collisions and safety risks). Therefore, this qualifies as an AI Incident because the AI system's use and misuse have directly led to injury risk and traffic accidents, which are harms to persons and property. The article documents realized harm and legal consequences, not just potential risk or general information, so it is not an AI Hazard or Complementary Information.

Expressways in many regions warn "use intelligent assisted driving with caution" (New Express roundup, 2025-04-05)

2025-04-05
xkb.com.cn
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of intelligent driving assistance (AI-assisted driving). The event stems from the use of these AI systems and the authorities' advisories to limit or avoid their use in certain conditions to ensure safety. However, no actual harm or incident has been reported; the warnings indicate a plausible risk of harm if the systems are used improperly or in complex road conditions. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm, but no harm has yet occurred or been reported.

Intelligent driving ≠ autonomous driving! How can lives be entrusted to a cold machine

2025-04-03
Rednet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving) that was active during the accident. The AI system detected obstacles and initiated safety measures but required human intervention to avoid the crash. The driver's failure to timely take control led to the collision and fatalities. This constitutes an AI Incident because the AI system's use and limitations directly contributed to injury and death. The article also discusses the broader implications of misunderstanding AI driving assistance, but the core event is a realized harm caused indirectly by AI system use and human interaction with it.

Deadly smart driving? Six mysteries behind the Xiaomi SU7 accident

2025-04-03
xkb.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (the Xiaomi SU7's intelligent driving system operating in NOA mode) whose use directly led to a fatal traffic accident. The article details how the AI system's warnings and deceleration occurred shortly before the collision, and the driver had limited time to react. The harm (deaths and injuries) has occurred, fulfilling the criteria for an AI Incident. The article also discusses systemic risks and safety concerns related to AI-assisted driving, but the primary classification is AI Incident due to realized harm linked to AI system use.

Expressways across China reportedly add reminders to use smart driving with care

2025-04-05
Lianhe Zaobao
Why's our monitor labelling this an incident or hazard?
The event explicitly involves intelligent driving assistance systems, which are AI systems designed to assist vehicle operation. The fatal accident involving a Xiaomi electric vehicle, with suspicions of sudden loss of control linked to the AI system, constitutes harm to persons (fatalities). The warnings and reminders on highways are a response to this incident, indicating the AI system's role in the harm. The AI system's use and possible malfunction have directly or indirectly led to injury and death, fulfilling the criteria for an AI Incident. The event is not merely a hazard or complementary information but reports on actual harm caused by AI system involvement.

Lao Xiao's miscellany | Putting Lei Jun on trial

2025-04-03
China Digital Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system—specifically, the autonomous driving system in the Xiaomi SU7 vehicle. The accident caused direct harm (three deaths), and the article links this harm to the use and possible malfunction or limitations of the AI system. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to injury and harm to persons. The article also discusses broader societal and ideological implications but the core event is a fatal accident involving an AI system causing harm.

Intelligent driving is not autonomous driving; marketing should remain cautious

2025-04-03
Hangzhou.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the intelligent driving system uses AI-based sensor fusion and decision-making for assisted driving. The reported sudden braking events indicate a malfunction or unexpected behavior of the AI system during use. Although no injury or accident occurred, the misleading marketing and user overreliance create a credible risk of harm. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to harm if users misunderstand its capabilities and fail to supervise properly. There is no indication of realized harm or violation of rights yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it highlights potential safety risks tied to AI system use.

New expressway signs say "do not use smart driving"; traffic police: don't treat smart driving as an excuse to use your phone

2025-04-05
CQNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of intelligent driving assistance (L2 level). The incidents described show drivers misusing these AI systems by disengaging from active control and using phones, resulting in accidents and traffic violations. The AI system's role is pivotal as it provides partial automation that drivers overtrust, leading to harm (accidents, legal penalties). This meets the criteria for an AI Incident because the AI system's use directly contributed to harm to persons and public safety. The article also includes warnings and law enforcement responses, but the primary focus is on realized harm from AI system misuse.

A cautionary record of the Xiaomi accident: who is the accomplice devouring lives?

2025-04-03
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Xiaomi's intelligent driving system—which is designed to assist or partially automate driving. The fatal accident is linked to the AI system's inability to detect certain obstacles and the misleading marketing that led users to overtrust the system. This constitutes an AI Incident because the AI system's use and limitations have directly or indirectly caused harm to a person (death). The article also discusses systemic issues such as inadequate safety education and regulatory oversight, but the core harm is the fatal accident involving the AI system's malfunction or misuse.

Including the Xiaomi SU7 accident section, signs suddenly appear on expressways in many regions! The latest responses...

2025-04-06
China News
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction or misuse of an AI system—specifically, intelligent driving assistance technology—in a real-world scenario that resulted in a fatal accident causing harm to people. The article explicitly links the accident to the use of such AI systems and discusses the subsequent warnings and responses from authorities and experts. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm (fatalities) and ongoing safety risks. The warnings and signage are responses to the incident, but the core event is the accident caused by or related to the AI system's use.

Man plays with his phone in both hands after engaging assisted driving; stopped and warned by traffic police

2025-04-06
Stockstar
Why's our monitor labelling this an incident or hazard?
The assisted driving system qualifies as an AI system because it provides partial automation to assist driving. The driver's misuse (taking hands off the wheel to use a phone) while the system is active creates a plausible risk of harm (traffic accidents). However, no actual injury, accident, or violation causing harm occurred in this incident; the police issued only a warning. Therefore, this event is best classified as an AI Hazard, as it plausibly could lead to harm due to misuse of the AI system but has not yet caused harm.

Anhui expressway operator on changed smart-driving warning signs: no notice received; wording may have been adjusted by road-section authorities

2025-04-05
Stockstar
Why's our monitor labelling this an incident or hazard?
The article discusses warnings related to intelligent assisted driving systems (AI systems) and clarifies that these systems are only at L2 level automation, serving as assistance rather than full autonomy. The changes in warning messages on highway displays and official statements are informational and precautionary, with no indication of any realized harm or incident. Therefore, this is complementary information that enhances understanding of AI system use and safety messaging, rather than reporting an AI incident or hazard.

Recently, the Xiaomi car accident sparked industry discussion about smart-driving safety. In March 2023, at a BYD investor meeting, Wang Chuanfu said bluntly that autonomous driving is a concept hijacked and hyped by capital, and that in the end only advanced driver assistance will be achieved. He pointed out that once an autonomous-driving crash occurs, it can deal a devastating blow to a brand and its models, because public trust in autonomous driving will collapse. Wang also noted that responsibility for autonomous-driving accidents remains unsettled: automakers, suppliers, and the government are all unwilling to bear it, so consumers may end up as the de facto responsible party. He further criticized the limitations of autonomous-driving technology, which cannot cover all complex road conditions and scenarios and cannot achieve true driverless operation at this stage.

2025-04-03
Stockstar
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system—autonomous driving technology—that has directly led to a car accident, which is a harm to persons and property. The discussion about responsibility and safety limitations further supports that the AI system's malfunction or limitations contributed to the incident. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm and industry concern about safety.

New Anhui expressway signs: road conditions are complex, do not use intelligent assisted driving

2025-04-04
Stockstar
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of intelligent assisted driving systems, which are explicitly mentioned. The event concerns the use of these AI systems and their limitations, with warnings to drivers to avoid overreliance. There is no report of any injury, accident, or violation caused by the AI systems, only a potential risk if misused. Therefore, this is not an AI Incident (no realized harm) nor an AI Hazard (no specific plausible future harm event described). Instead, it is complementary information about societal and governance responses (road signage and warnings) to known AI system limitations and risks, aimed at improving safety and awareness.

Anhui expressway operator on reminders to use assisted driving with caution: safety first

2025-04-06
Stockstar
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of intelligent or assisted driving systems. The authorities' warnings indicate a plausible risk of harm if these systems are used improperly or over-relied upon, especially in complex or construction zones. However, no actual incident or harm has been reported. Therefore, this event fits the definition of an AI Hazard, as it concerns circumstances where the use of AI systems could plausibly lead to harm, but no harm has yet occurred or been documented.

The Xiaomi SU7 collision and fire took three young lives. Driving data released by Xiaomi shows the vehicle had been in NOA (Navigate on Autopilot) mode for some time before the crash. All parties await the final determination of liability, but intelligent driving has nonetheless become the hot topic of recent days. No one wants to see such a tragedy. While mourning the victims, we believe it is time to seriously reflect on the road intelligent driving has taken.

Strictly speaking, the feature currently used in passenger cars is assisted driving. Automakers once branded it "autonomous driving," but after the Ministry of Industry and Information Technology banned the terms "autonomous driving" and "driverless" for passenger cars, "intelligent driving" became the industry's latest marketing phrase. Assisted driving was meant to assist the human driver and reduce accidents caused by fatigue or distraction, not to replace the driver. All assisted-driving features are currently restricted to below L3; the essential rule is that hands must not leave the wheel, so that the driver, not the machine, operates the car. In practice these rules have been bent. In automakers' scripts, intelligent driving can "free your hands and feet" and will later "free your eyes." This greatly weakens the driver's control of the vehicle. Under heavy marketing, intelligent driving has become both a badge of technological advancement and a key factor in purchase decisions; slogans like "smart-driving equality" and "smart driving for all" have pushed it further into public view. Relatedly, many consumers clearly lack a correct understanding of intelligent driving. While true autonomous driving cannot yet become reality, misleading marketing and consumer misunderstanding can both be fatal. When the technology itself is still iterating, prematurely shifting risk onto consumers serves neither consumers nor companies, nor society's welfare.

The Xiaomi accident is not the first linked to intelligent driving, but the intensity of public reaction shows that perceptions are changing. This is an opportunity to re-examine intelligent driving objectively and rationally. By international definitions and levels of driving automation, autonomous driving requires integrating road traffic, infrastructure, communications, software, and computing power, realized in an idealized "vehicle-road-cloud" environment; it is a systems project involving all of society. Today, single-vehicle intelligence has become the mainstream path to autonomy, which may well harbor safety risks. Within single-vehicle intelligence there is a further split between camera-only vision approaches and multi-sensor fusion of cameras and radar. On many levels, current intelligent-driving technology cannot be called mature, and much uncertainty remains.

Intelligent driving also raises questions of ethics, regulation, and liability. For example, when an intelligent-driving vehicle crashes, is the manufacturer or the driver responsible? So far no manufacturer has been held liable for an intelligent-driving accident; multiple forces seem to push responsibility onto the driver. That is unfair to drivers and may encourage manufacturers to adopt aggressive technical strategies. We believe the core question is whether technology yields to people or people yield to technology. The application and development of intelligent driving must remain human-centered, which requires a basic safety boundary: "safety is the greatest luxury" cannot be just a slogan. Consumers deserve detailed instructions on smart-driving features, automakers and relevant social institutions should take on training obligations, and mandatory laws and regulations for intelligent vehicles need to be progressively improved.

Some say humanity must always pay a price for innovation, and that over-regulation ties innovation's hands. We believe that because automotive innovation bears directly on human life, it should be more prudent, even "conservative," than other fields. Moreover, the price we pay is often unrelated to innovation itself and stems mostly from a lack of reverence. Looking back, innovation has reshaped China's auto industry and made China the front-runner in electric vehicles. People's expectations and imagination for intelligent driving have likewise inspired countless innovators and entrepreneurs, and we are glad to see Chinese companies in the lead. Even so, we still want to ask: can we slow down a little? Re-examining the road of intelligent driving, correcting course and setting signposts, would cost Chinese companies nothing, while making that road steadier and more solid.

2025-04-04
Stockstar
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 accident involved a vehicle operating in an AI-assisted driving mode (NOA), which is an AI system designed to assist human drivers. The accident caused three fatalities, which is a direct harm to human life. The article explicitly links the accident to the AI system's operation and discusses the broader implications of AI-assisted driving safety, regulatory challenges, and ethical concerns. Given the direct causal link between the AI system's use and the fatal harm, this event meets the criteria for an AI Incident. The article also reflects on the misuse or overreliance on AI driving assistance and the need for better safety and legal frameworks, reinforcing the classification as an incident rather than a mere hazard or complementary information.
Thumbnail Image

Smart driving requires drivers to continuously monitor the driving state; multiple EV salespeople say smart driving does not equal autonomous driving

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by AI systems, nor does it report a plausible future harm event. It mainly provides information about the current capabilities and legal responsibilities related to AI-assisted driving systems, which is educational and contextual. Therefore, it fits the category of Complementary Information as it enhances understanding of AI system use and governance without reporting an incident or hazard.
Thumbnail Image

On the night of March 29, a horrific collision-and-fire accident involving a Xiaomi SU7 electric car on an expressway in Tongling, Anhui, ended the lives of three female university students. Amid her grief, a family member of one victim posted on Weibo asking why the car caught fire and why the doors could not be opened. Her implication: had the vehicle not had these problems, her daughter might not have died.

As the accident drew wider attention and Xiaomi released the vehicle's driving data on April 1, the public discovered a startling detail: the vehicle, a Xiaomi SU7 Standard Edition, had been in highway NOA (Navigate on Autopilot, known in the industry as "intelligent driving") mode for a period before the accident, averaging around 100 km/h. The system issued a warning only 2 seconds before the collision, and one second later disengaged and handed control to the driver, by which time the accident was unavoidable. Questions consequently converged on Xiaomi's intelligent driving. Why, people asked, were the quality and intelligent driving of the SU7, a star product from a star company, not as strong as imagined, or at least not as strong as officially advertised? Lei Jun, founder, chairman and CEO of Xiaomi Group, had publicly stated that every Xiaomi SU7 features end-to-end high-level intelligent driving.

In recent years the term "intelligent driving" has appeared with great frequency in automakers' official publicity and media coverage. Since the start of 2025, phrases such as "the first year of smart driving", "high-level smart driving", "smart-driving equality" and "smart driving for all" have flooded public view alongside manufacturers' campaigns. One auto executive even declared that cars without smart driving would no longer be an option for consumers. Under this packaging, intelligent driving has evolved into a "decisive" selling point, yet most consumers do not really understand what the feature means. In its technical core and under the regulations, "intelligent driving" refers to driver-assistance functions: tools to help the human driver, who always remains, and must remain, in control of the wheel, not the machine. A major accident has now prompted people to reconsider intelligent driving. Many in the auto industry have called the SU7 crash a head-on blow to the headlong rush of intelligent driving.

Lei Jun, usually an ever-present "celebrity entrepreneur", stayed silent for two or three days after the accident before posting on Weibo on the evening of April 1: "At this point, I feel I should not wait any longer. I must step forward and, on behalf of Xiaomi, promise: whatever happens, Xiaomi will not evade responsibility. We will continue to cooperate with the police investigation, follow the handling of the matter, and do our utmost to address the concerns of the family and society."

"Safety is the greatest luxury" is likewise a slogan commonly used by major automakers, but it has clearly not become reality. China's new-energy vehicle industry, after more than a decade of development, has accumulated systemic technological and industrial advantages, and today's smart electric vehicles carry the hope of making China an "automotive power". But as an everyday means of transport for millions of residents, a car's greater responsibility is to protect people's safety. It is time to examine intelligent driving rationally and restore its safety baseline.

Three "death warrants": fire, time, and doors that would not open

The public's three main questions about the SU7 accident are: why did the car crash? Why did it catch fire after the collision? And why could the three young women not open the doors and escape? These concern, respectively, the SU7's high-level smart driving, its traction battery, and the function and safety of its emergency door locks.

On why the doors would not open, Yao Chunde, professor at Tianjin University's School of Mechanical Engineering, told the Economic Observer: "The electronic door-control system may have failed at the moment of the accident; similar things have happened with quite a few EVs." Last November, Xiaomi commissioned an authoritative institution to crash-test two SU7s; after a 60 km/h head-on collision, the battery pack was not crushed, all windows could be lowered normally, and all doors opened normally. The three students were not so lucky. Vehicle data and system logs Xiaomi submitted to the police show the collision occurred at 22:44 on March 29; the in-car E-CALL connected immediately, emergency responders tried to reach the occupants, and a 120 ambulance was called. Whether the occupants responded to the emergency call, or were conscious, cannot yet be determined. The specific reasons the three women could not escape the burning vehicle await further disclosure by the police.

The SU7's doors unlock electronically. Under the national standard "Technical Specifications for Safety of Power-Driven Vehicles Operating on Roads" (GB 7258-2017), doors should unlock automatically in a collision so occupants can evacuate quickly. The SU7 also provides an emergency mechanical release, but its handle sits inside the door's storage compartment and is only thumb-sized. Yao commented: "With the emergency manual release located there, a driver who is not the owner and is unfamiliar with the car may be unable to find it in a panic." In fact, today's smart cars, in pursuit of a "high-tech feel", have removed many physical controls, such as hazard-light shortcut buttons, volume keys and one-touch windscreen demisters, drawing consumer criticism. Some automakers have recognized the problem: Volkswagen, for example, recently said that removing physical buttons was a mistake and that, based on user feedback and safety considerations, it will restore them in future models.

The post-collision fire also drew much attention. Xiaomi has said it developed an inverted-cell technology that, in extreme situations, vents energy rapidly downward to maximize cabin safety. But the Standard Edition involved in this accident is not equipped with inverted-cell technology. According to official Xiaomi information, the Standard Edition's cells are supplied separately by BYD (FinDreams Battery) and CATL. CATL's Qilin battery, launched in 2022 with third-generation CTP (cell-to-pack) technology, uses an inverted-cell design and is fitted to models including the Zeekr 009 and AITO M9. Asked whether the SU7 Standard Edition used its cells, CATL replied on April 2: "It is not our battery." Zheng Weiwei, former chief engineer of Sunwoda's systems research institute, told the Economic Observer: "This has nothing to do with whether it is BYD's battery. No battery maker can guarantee that a collision with a rigid body at 97 km/h will not cause a fire." In November 2024, an official of the National Fire and Rescue Administration said: "New-energy vehicle fires present two outstanding problems. First, thermal runaway in lithium batteries is unavoidable; second, the difficulty of firefighting and rescue has not been effectively solved. If a new-energy vehicle catches fire or emits smoke, escape immediately."

GB 38031, "Safety Requirements for Traction Batteries of Electric Vehicles", requires that after a single cell goes into thermal runaway, the pack must pose no danger to the passenger compartment for the following 5 minutes. "The 2026 revision of GB 38031 requires that the whole vehicle must not catch fire or explode even when a cell goes into thermal runaway; this is the industry being pushed toward healthy development by many bloody cases," Yin Dongxing, general manager of Aochuang Technology and former vice-president of SVOLT Energy, told the Economic Observer. In January 2025, MIIT sought public comment on the mandatory national standard "Safety Requirements for Traction Batteries of Electric Vehicles (approval draft)": cells must not catch fire or explode under overcharge, over-discharge, external short-circuit, heating and crush tests; packs or systems must show no leakage, casing rupture, fire or explosion under vibration, mechanical shock, simulated collision, crush and water-immersion tests. The new GB 38031 takes effect on July 1, 2026. "The new standard's 'no fire, no explosion' applies only under test conditions; in a real high-speed collision the battery will still burn," Zheng said, explaining that the main problem is the inability to prevent fire after an external short-circuit. "Consumers must understand this clearly. I now work on safety-protection materials and solutions, and I keep urging the industry to take the blocking of thermal propagation seriously," Yin said. "But the whole industry is still cost-driven."

Beyond the mechanical door release and battery safety, the SU7's failure to react to the hazard in time while in NOA mode has drawn even more questions; smart driving is a core element of the accident. The data Xiaomi submitted to the police on the night of the 31st show that from 22:44:24, when NOA issued the risk alert "please note the obstacle ahead", to the collision with the concrete barrier between 22:44:26 and 22:44:28, only about 2 seconds elapsed: the driver had roughly 2 seconds to react. Tests by Germany's ADAC automobile club indicate drivers need 2.3 seconds on average to complete an effective takeover, extending to 2.6 seconds in highway scenarios. China's "General Technical Requirements for Autonomous Driving Systems of Intelligent Connected Vehicles" (GB/T 44721-2024) specifies that the interval from the issuing of a takeover request to its termination by the minimum risk maneuver (MRM) should be no less than 10 seconds, giving the driver ample time to take over. The standard is not mandatory but is common practice in the industry. Introducing the SU7, Lei Jun had said the car passed extreme active-safety tests: AEB automatic emergency braking at 135 km/h, plus the "stationary broken-down vehicle at night at 120 km/h" and "disappearing lead vehicle at 100 km/h" scenarios. The circumstances of this accident matched the "stationary broken-down vehicle at night at 120 km/h" conditions, yet AEB evidently did not engage. Xiaomi responded officially that its AEB operates between 8 and 135 km/h, is similar to comparably equipped AEB systems in the industry, and currently does not respond to obstacles such as traffic cones, water-filled barriers, rocks or animals.

The mythologized "high-level smart driving"

The SU7 in this accident could not recognize obstacles such as cones, leaving it slow to react to the impending collision. Had it recognized the construction-zone speed-limit sign, the accident could also have been avoided in advance. "Even at 120 km/h, since limit signs are usually highly reflective, a camera can identify them with the headlights on, and the car will actively decelerate," said the R&D director of a domestic automaker. Whether warning signs were placed on the road as required at the time of the accident is not yet known. What is certain is that Xiaomi had emphasized a "construction-zone avoidance" function in the "full-scenario protection" publicity for its smart driving, leading many users without specialist knowledge to misunderstand its scope of use. Overselling a single function while downplaying, or selectively ignoring, the preconditions for using it is a marketing tactic most automakers have adopted; the industry wryly calls it "demo the top trim, deliver the base trim".

"All this hyperbole has turned smart cars into electronic fast-moving consumer goods, prompting blind enthusiasm and worship among young people; many young users trying new features for the first time have been hurt as a result," one automaker's marketing chief told the Economic Observer, adding that automakers' lengthy terms leave consumers unable to discern the boundaries of responsibility for smart driving, and that younger buyers with less life experience are easily swayed. In fact, as early as 2021 the authorities issued the "Taxonomy of Driving Automation for Vehicles" standard, clarifying the automation levels and the driver's responsibilities at each level. All current intelligent driving is Level 2: the driver controls the vehicle and bears primary responsibility. Yet in recent years, "smart driving" has been abused by some automakers as a new value label, from "free your hands" to "zero takeovers", from "full-scenario protection" to "safer than a human", from "once it reaches L3 you can sleep in the car" to "it can handle everything". These tech-glamour slogans have greatly blurred the driver's importance and led people to believe machines can replace drivers. While automakers equivocate about the potentially fatal risks of intelligent driving, uninformed consumers are paying for it with their lives. Meanwhile, no automaker has yet been penalized for improper marketing. Li Xiang, CEO of Li Auto, once posted: "I call on the media and industry bodies to standardize the Chinese terminology for autonomous driving, without a single superfluous character, to prevent exaggerated publicity from misleading users. Restraint in promotion and investment in technology benefit users, the industry and companies in the long run."

Some automotive commentators say the SU7 accident may prove a watershed in the history of China's smart EVs, even of China's auto industry. Others in the industry pose a soul-searching question: having enjoyed enormous traffic dividends, should the leaders of some automakers, especially the new entrants, not also shoulder more responsibility and build a closed loop of "traffic, responsibility, trust"? After all, many people do not trust smart driving, and once trust is lost, rebuilding it will cost far more than the technology investment. Affected by the serious accident, Xiaomi Group (01810.HK) fell a cumulative 9.45% over the two trading days of April 1-2, its market value shrinking from HK$1.34 trillion to HK$1.15 trillion, a loss of about HK$120 billion.

The fast and slow of smart-driving adoption

The SU7 Standard Edition involved in the accident, officially priced at 215,900 yuan, carries 1 millimeter-wave radar and 12 ultrasonic radars for perception, while the higher trims (officially priced from 245,900 yuan) add more millimeter-wave radars as well as LiDAR. This has rekindled a perennial controversy in intelligent driving: for sensing the environment, is a pure-vision approach or a vision-plus-LiDAR fusion approach better? Wang Jiansheng, deputy general manager of the LiDAR company Beijing Moer Xinguang (北京摩尔芯光), told the Economic Observer: "With LiDAR the obstacle would have been identified earlier and the car would have decelerated earlier. This accident happened because recognition came too late to allow a reaction." The SU7 Standard Edition takes the pure-vision route without LiDAR while the higher trims carry it; this is not an isolated case in the industry. This year BYD, Geely, Chery and other automakers have tiered their smart-driving offerings to match models in different price bands. An important aim is to accelerate "smart-driving equality", giving low-priced models smart driving too, even if some of it falls well short of high-level capability.

Tesla remains the champion of pure vision, while most Chinese automakers consider LiDAR an essential sensor. The debate is essentially a contest between "software-defined" and "hardware redundancy". Early LiDAR was mostly imported and expensive; domestic LiDAR then kept driving prices down and quickly dominated the market. Against this backdrop, the technical-route dispute in China has shifted from either/or to multimodal fusion: LiDAR has become a "spec threshold" on premium models, while pure vision keeps evolving through data loops and supercomputer training and is widely fitted to mid- and low-end models. Shen Shao, general manager of Zhuoyu Technology (卓驭科技), told the Economic Observer at the China EV100 Forum in late March that the vision approach is not a compromise but a sign of confidence in algorithms; it brings urban navigation functions even to passenger cars at the 100,000-yuan level, accelerating the spread of high-level intelligent driving. Yu Qian, co-founder and CEO of QCraft (轻舟智航), argued that vision is a very important sensor, but China's road environment is highly complex and adding LiDAR improves safety; for some high-end luxury models it is also well worthwhile. Wang Jiansheng said: "It is not that pure vision is bad or wrong, but LiDAR determines the ceiling of intelligent driving." The implication: even a lower-tier smart-driving system becomes more capable with LiDAR's assistance.

In 2024, China's new-energy vehicle output passed 10 million units, with L2-and-above smart-driving penetration exceeding 55%; this year high-level smart driving has begun spreading to models around the 100,000-yuan mark. Amid the leading automakers' push for "smart-driving equality", 2025 has been dubbed "the first year of smart driving for all". Many auto executives believe intelligent driving is now the core value of new-energy vehicles. Industry bodies forecast NOA penetration reaching 20% by the end of 2025, with far wider adoption in passenger cars within two to three years. "Smart-driving equality" means NOA installations will surge. "Previously, a small flaw in the system might go unnoticed with a small installed base; this year, with several million units installed, every small problem will be exposed," Yu Qian told the Economic Observer. Yu Ze, vice-president of PICC, said recently that the main problem with smart driving lies not in individual risk but in the group risk brought by OTA updates or system failures. According to CPCA data, 37% of L2 smart-driving accidents in 2024 were caused by system misjudgment. "I dare not imagine, at that rate, how many accidents there will be once smart driving is widely adopted," Wang Jiansheng said.

2025-04-04
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the intelligent driving system (NOA) in the Xiaomi SU7 vehicle. The AI system was in use and failed to prevent the collision, which directly caused fatal injuries and fire. The malfunction or limitations of the AI system, combined with safety feature failures (electronic door locks), contributed to the harm. The article also discusses systemic issues with AI driving technology, safety standards, and misleading marketing, but the core event is a fatal accident caused in part by AI system failure. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction and use directly led to injury and death.
Thumbnail Image

High-precision maps become an indispensable safety component for intelligent driving; NavInfo (四维图新) provides a reassuring smart-driving experience

2025-04-03
证券之星
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and application of AI-enabled high-precision maps for intelligent driving, emphasizing their importance for safety and system reliability. There is no mention of any realized harm, malfunction, or misuse related to these AI systems. The content is primarily informative about ongoing technological progress and safety compliance, without reporting any incident or hazard. Therefore, it fits the definition of Complementary Information, as it provides context and updates on AI systems in intelligent driving without describing a new AI Incident or AI Hazard.
Thumbnail Image

"Use smart driving with caution!" Highways in multiple regions display warning banners; the latest responses

2025-04-05
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (intelligent driving assistance) and their use on highways. The warnings indicate a recognition of potential risks associated with these AI systems, especially in complex or construction-affected road conditions. Since no actual harm or incident has been reported, but there is a plausible risk of harm if these systems are misused or fail in such conditions, this qualifies as an AI Hazard. The event is about precautionary measures and warnings to prevent possible future harm rather than describing an incident where harm has already occurred. Therefore, it is best classified as an AI Hazard.
Thumbnail Image

21Tech Morning Tech Brief | Netizen footage shows a Xiaomi owner asleep at the wheel; Li Xiang calls for unified Chinese terminology for intelligent driving; Tesla releases its latest humanoid-robot video

2025-04-03
21jingji.com
Why's our monitor labelling this an incident or hazard?
The Xiaomi assisted driving system is an AI system involved in the event. The driver sleeping while the system is engaged indicates a misuse or overreliance on the AI system, which could plausibly lead to harm (e.g., accident or injury) if the system fails to detect or respond appropriately. However, the article does not report any actual accident or injury resulting from this behavior, so it does not meet the threshold for an AI Incident. The other news items are general updates or announcements about AI research, leadership, and product development without direct or indirect harm. Thus, the primary event with potential risk is the Xiaomi assisted driving misuse, qualifying as an AI Hazard. The rest of the content is complementary information about AI developments.
Thumbnail Image

Lessons from the Xiaomi accident: who is the accomplice that devours lives?

2025-04-03
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—the intelligent driving system in Xiaomi SU7 cars. The system's malfunction or limitations (e.g., inability to detect certain static obstacles, strict reliance on high-precision maps without real-time obstacle recognition) directly contributed to a fatal accident. The harm is realized (death), and the AI system's role is pivotal, as the accident was linked to overreliance on the AI system's capabilities, which were overstated by marketing. This meets the criteria for an AI Incident because the AI system's use and malfunction directly led to injury or harm to a person. The article also discusses broader systemic issues but the core event is a realized harm caused by AI system use.
Thumbnail Image

Covering connected-vehicle cybersecurity, autonomous driving on public roads and more: a batch of national and local policies takes effect in April

2025-04-03
chinatimes.net.cn
Why's our monitor labelling this an incident or hazard?
The article discusses the rollout of policies and standards for intelligent connected vehicles and autonomous driving, which involve AI systems. However, it does not describe any actual harm or incident caused by AI system malfunction or misuse. The focus is on the establishment of safety and security standards, legal regulations, and pilot programs to support safe AI deployment. Therefore, this is complementary information that provides context and updates on governance and safety measures in the AI ecosystem related to autonomous vehicles, rather than reporting an AI Incident or AI Hazard.
Thumbnail Image

The ideal and reality of smart-driving technology: from "smart driving for all" to safety reflection

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an L2 intelligent driving assistance system, whose malfunction or misuse has directly resulted in a fatal accident causing loss of life, which is a clear harm to persons. The article explicitly links the accident to the limitations and misuse of the AI system, including misleading marketing and inadequate user education, which contributed to the harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and its shortcomings have directly or indirectly led to injury and death.
Thumbnail Image

Intelligent driving ≠ driverless driving! Pursuing progress and avoiding risk are equally important

2025-04-05
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article is primarily an analytical and informative piece about the state of intelligent driving technology and its challenges. It does not describe any realized harm or incident caused by AI systems, nor does it report a specific event that could plausibly lead to harm imminently. It discusses potential risks and the need for caution and regulation but does not present a concrete AI hazard event. Therefore, it fits best as Complementary Information, providing context, risk awareness, and guidance on managing AI-related risks in intelligent driving, rather than reporting an AI Incident or AI Hazard.
Thumbnail Image

Automakers' smart-driving sprint: aggressive marketing collides with a fatal red line

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of intelligent driving assistance technologies (NOA, AEB, L2+ systems) that use AI for perception, decision-making, and control. The fatal accident is linked to these systems, with discussion on whether safety features triggered appropriately and the user's overreliance on the AI system. The article also documents multiple complaints about AI system failures and misleading marketing, which have led to real harm including death and injuries. This meets the definition of an AI Incident as the AI system's use and malfunction have directly or indirectly caused harm to people. The article also discusses systemic issues in the industry, but the presence of actual harm takes precedence over potential hazards or complementary information.
Thumbnail Image

Covering connected-vehicle cybersecurity, autonomous driving on public roads and more: a batch of national and local policies takes effect in April

2025-04-03
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of intelligent connected vehicles and autonomous driving technologies. However, it does not report a direct or indirect harm caused by AI system malfunction or misuse. Instead, it discusses the rollout of policies and standards aimed at preventing such harms and facilitating safe deployment. The mention of a prior accident serves as background to justify the new policies but does not itself describe an AI Incident. The article's main focus is on governance and safety standard implementation, which fits the definition of Complementary Information rather than an Incident or Hazard.
Thumbnail Image

The Xiaomi SU7 accident punctures the smart-driving myth: how much longer will "safety as the foundation" be ignored?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves a vehicle equipped with an intelligent driving system, which qualifies as an AI system under the definition. The accident caused by this system's use has led to serious harm, fulfilling the criteria for an AI Incident. The article's focus on the accident and its implications for safety confirms that harm has occurred due to the AI system's use. Although the cause is under investigation, the direct link between the AI system's operation and the accident is clear. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

The cost-cutting knife of LiDAR cannot wound high-compute vision solutions!

2025-04-05
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the technical debate and advancements in AI-powered autonomous driving systems, particularly vision-based versus LiDAR-based approaches. It does not describe any incident of harm, violation, or disruption caused by AI systems, nor does it indicate a credible risk of such harm occurring imminently. The content is primarily informative and analytical, discussing ongoing technological progress and industry competition without reporting a specific AI Incident or AI Hazard. Therefore, it fits best as Complementary Information, providing context and understanding of the AI ecosystem in autonomous driving.
Thumbnail Image

The Xiaomi SU7's fatal 2 seconds: behind "smart driving for all", glorified "intelligence" and downplayed "assistance"

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (NOA intelligent driving assistance) that was active during the accident. The system issued warnings but failed to prevent the collision due to limited detection capabilities and short reaction time. The accident caused fatalities, constituting harm to persons. The AI system's malfunction and the overreliance on its capabilities by the driver are direct contributing factors. The article also discusses regulatory and safety implications, but the primary event is a realized harm caused by AI system use and malfunction, fitting the definition of an AI Incident.
Thumbnail Image

Driving in Shenzhen: in what situations would you use intelligent driving?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of intelligent driving assistance but does not describe any actual harm or accident caused by these systems. It also does not present a specific credible risk event that could plausibly lead to harm. The focus is on user experiences, opinions, and legal uncertainties, which are informative but do not constitute an AI Incident or AI Hazard. Therefore, this is best classified as Complementary Information, as it provides context and societal response regarding AI systems in driving without reporting a new incident or hazard.
Thumbnail Image

"智驾"幻觉

2025-04-06
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (assisted driving system with Level 2 autonomy) whose use directly led to a fatal traffic accident causing harm to people. The system's malfunction or limitations in handling a complex driving scenario contributed to the incident. The article explicitly links the AI system's involvement to the harm and discusses the risks of overreliance due to misleading marketing. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to injury and death.
Thumbnail Image

Three lives lost: not the price of technology, but the bitter fruit of misplaced values

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (L2 autonomous driving technology) whose misuse and overreliance by drivers, combined with misleading marketing and insufficient regulation, have directly led to fatal harm (three deaths). The AI system's role is pivotal as it was the technology being relied upon incorrectly, contributing to the accident. This fits the definition of an AI Incident because there is direct harm to persons caused indirectly by the AI system's use and misuse. The article does not merely discuss potential risks or regulatory responses but reports on an actual fatal accident linked to the AI system's deployment and misunderstanding, thus qualifying as an AI Incident.
Thumbnail Image

When intelligent driving causes a traffic accident, is the person or the car responsible?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent driving assistance) in operation at the time of a serious traffic accident. The AI system's role in detecting obstacles, issuing warnings, and initiating deceleration is described, as well as its limitations (e.g., not responding to certain obstacles). The accident caused harm (serious traffic collision and vehicle fire), and the AI system's involvement is direct and pivotal. The article also discusses legal and responsibility issues related to AI use in driving. This fits the definition of an AI Incident because the AI system's use and performance directly led to harm, even if driver responsibility is also considered. There is no indication that harm was only potential or that the article is primarily about responses or broader context, so it is not an AI Hazard or Complementary Information.
Thumbnail Image

After the Xiaomi SU7 accident, how do new-energy vehicle insurance claims pay out?

2025-04-06
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 uses intelligent driving assistance, which qualifies as an AI system. The article centers on the aftermath of an accident involving this AI system, focusing on liability, insurance claims, and the broader implications for AI-enabled vehicle insurance. While the accident involved the AI system, the article does not confirm that the AI system malfunctioned or directly caused harm; the cause is still under investigation. The discussion is about the complexities of responsibility and insurance coverage rather than a new AI Incident or a plausible future hazard. Hence, it fits the definition of Complementary Information, as it provides supporting context and analysis related to an AI system and its societal and governance implications following an incident.
Thumbnail Image

Beijing paves the way with institutional innovation as autonomous driving enters the deep waters of everyday life

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article discusses the use and regulation of AI systems in autonomous vehicles, which qualifies as AI systems involvement. However, it does not report any incident or harm caused by these systems, nor does it describe any plausible future harm or risk. Instead, it focuses on legislative and institutional innovations facilitating safe deployment and societal benefits. Therefore, it fits the category of Complementary Information, as it provides context and updates on AI ecosystem developments and governance responses without reporting an AI Incident or Hazard.
Thumbnail Image

90% of consumers are willing to pay, from 70,000-yuan cars to the luxury market: the changing landscape of auto consumption

2025-04-06
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically intelligent driving technologies and AI-powered automotive innovations. However, it does not describe any event where AI use has led to harm or malfunction, nor does it indicate a plausible risk of harm occurring imminently. Instead, it focuses on market trends, consumer willingness to pay for AI features, and industry strategies to advance AI capabilities. This aligns with the definition of Complementary Information, which includes updates and contextual information about AI systems and their ecosystem without reporting new incidents or hazards.
Thumbnail Image

Could the AEB of every smart-driving system "fail"?

2025-04-06
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the assisted driving system including AEB) whose malfunction or limitations directly contributed to a fatal accident causing injury and death. The article details how the AI system's perception and decision-making failed to prevent the crash, which constitutes harm to persons. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and malfunction directly led to harm.
Thumbnail Image

High-precision maps become an indispensable safety component for intelligent driving

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The content focuses on the role and importance of AI-powered high-precision maps in intelligent driving, describing their development, features, and safety compliance. There is no mention of any actual or potential harm, incident, or hazard caused by these AI systems. The article serves to inform about technological progress and safety practices rather than reporting an AI incident or hazard. Therefore, it fits the definition of Complementary Information, providing context and updates on AI systems in the intelligent driving ecosystem without describing a new incident or hazard.
Thumbnail Image

8款"销冠"电车营销话术盘点,他们是怎么"夸"智驾的?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (NOA intelligent assisted driving) whose use directly preceded a fatal accident causing loss of life, fulfilling the criteria for harm to persons. Additionally, the article documents how exaggerated marketing of AI driving capabilities leads to consumer misunderstanding and misuse, indirectly contributing to harm. The presence of multiple similar incidents and regulatory responses further supports the classification as an AI Incident. The AI system's malfunction or misuse is a direct or indirect cause of harm, meeting the definition of an AI Incident rather than a hazard or complementary information.
Thumbnail Image

EV Morning Report | Algorithms are not omnipotent: CCTV urges drivers to keep hands firmly on the wheel even with "smart driving"

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the form of intelligent driving algorithms (L2-L3 level driver assistance systems). It reports on actual traffic accidents caused by the use or misuse of these AI systems, thus directly leading to harm to persons. The discussion about the limitations and risks of these systems, and the call for driver responsibility and better training, confirms that the AI system's malfunction or misuse has already caused harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to injury or harm to people.
Thumbnail Image

Smart driving races ahead as young people endure "terrifying adventures"

2025-04-06
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in intelligent driving assistance (NOA and similar) and details a fatal accident caused while the AI system was active, as well as multiple other incidents where AI system malfunctions or limitations led to near-accidents or collisions. The harms include death, injury risk, and psychological trauma, which fall under injury or harm to persons. The AI system's malfunction or use is a direct contributing factor. The article also discusses misleading marketing that caused overreliance, which indirectly contributed to harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

"无需司机""0自燃",电车车企是如何花式"夸"智驾的?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (intelligent driving assistance, NOA, L2-level) involved in serious traffic accidents causing deaths and injuries. It discusses how misuse (drivers sleeping or removing hands from the wheel) and overreliance on these AI systems contributed to accidents. The harm is direct and materialized (fatalities, injuries). The article also discusses misleading marketing that causes users to misunderstand AI capabilities, which indirectly contributes to harm. Hence, the event meets the criteria for an AI Incident, as the AI system's use and misuse have directly or indirectly led to injury and harm to people.
Thumbnail Image

智能驾驶的"进阶路":安全始终是汽车的底线

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the L2 NOA assisted driving system active during the accident. The system's limited warning time and the driver's overreliance or misunderstanding contributed to the fatal crash. The harm (three deaths) is direct and severe, caused by the AI system's use and its limitations. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction or insufficient capability directly led to injury and death.
Thumbnail Image

L2 ≠ autonomous driving: sober reflections amid the intelligent-driving frenzy

2025-04-05
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of intelligent driving (L2-level partial automation) systems, which are AI-based. However, it does not describe a specific event where the AI system's development, use, or malfunction directly or indirectly caused harm (AI Incident), nor does it describe a credible imminent risk or near miss (AI Hazard). Instead, it provides an overview of the technology's current capabilities, challenges, and recommendations for safe use and governance. This fits the definition of Complementary Information, as it enhances understanding and informs about the broader AI ecosystem and safety considerations without reporting a new incident or hazard.
Thumbnail Image

Will the Xiaomi SU7 accident slow the pace of "smart driving for all"?

2025-04-05
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 accident involved the use of an intelligent driving assistance system (NOA), which is an AI system designed to assist driving tasks. The article details that the system issued a risk warning and required the driver to take over within a very short time frame (2 seconds), after which the fatal accident occurred. This indicates that the AI system's operation and its interaction with the driver were directly linked to the harm (death of passengers). The article also discusses the broader implications for trust in AI-driven intelligent driving systems and the safety challenges of new energy vehicles. Since the AI system's use directly led to injury and death, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Questions about the Xiaomi SU7 incident: why was the driver's seatback half-reclined, and why didn't the car slow down in advance for the road works?

2025-04-05
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's intelligent driving system is an AI system as it performs autonomous driving functions using vision-based AI. The accident occurred while the AI system was in use, and the system failed to provide adequate early warnings about road conditions, which is a malfunction or limitation contributing to the incident. The driver's distraction and failure to take over also played a role, but the AI system's inability to detect and warn about the road repair is a contributing factor to the harm. Therefore, this qualifies as an AI Incident due to indirect harm caused by the AI system's malfunction or limitations during use.
Thumbnail Image

谁在制造"陷阱"?年轻的生命逝去七天后,再谈车企智驾营销

2025-04-05
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in intelligent driving (Level 2 driver assistance) and describes a fatal accident caused by the AI system's failure to detect obstacles and provide timely warnings. The AI system's malfunction and the misleading marketing that led consumers to overestimate the system's capabilities directly contributed to the harm (death of three people). This meets the definition of an AI Incident as the AI system's use and malfunction directly led to injury and harm to persons. The article also discusses systemic issues but the core event is a realized harm caused by AI system failure and misuse, not just a potential hazard or complementary information.
Thumbnail Image

Intelligent driving ≠ autonomous driving! How can lives be entrusted to cold machines?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving assistance) that was active during the incident. The harm (fatal collision and vehicle fire) directly resulted from the interaction between the AI system's outputs and the driver's response. The AI system's role was pivotal in the chain of events leading to injury and death, fulfilling the criteria for an AI Incident. The article explicitly links the misuse and misunderstanding of the AI system to the fatal outcome, indicating direct or indirect causation of harm.
Thumbnail Image

EVs accelerate | AI drives the rapid intelligentization of cars; safety boundaries urgently need definition

2025-04-05
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the context of intelligent driving and autonomous vehicle technologies. However, it does not describe any realized harm or malfunction resulting from AI use; rather, it discusses the plausible future risks and the need for safety and legal frameworks. Therefore, the event qualifies as an AI Hazard because it concerns circumstances where AI use in vehicles could plausibly lead to incidents if safety boundaries are not properly established and managed. It is not Complementary Information because the focus is not on updates or responses to past incidents but on the emerging risks and challenges. It is not an AI Incident since no actual harm has occurred, and it is not Unrelated as AI involvement is central to the discussion.
Does the Xiaomi car tragedy prove "smart driving" is anything but smart? Automakers will never repudiate themselves

2025-04-05
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving/ADAS) whose use and limitations contributed to a fatal accident, causing harm to people. The article details the timeline of the AI system's warnings and the subsequent crash, indicating the AI system's involvement in the incident. The harm (fatalities) has occurred, and the AI system's role is pivotal, even if the ultimate responsibility is debated. This fits the definition of an AI Incident, as the AI system's use directly or indirectly led to injury or harm to persons. The article is not merely discussing potential risks or providing complementary information but reporting on a real incident involving AI-related harm.
Who is liable in smart-driving accidents? Past rulings held owners fully responsible in assisted-driving crashes; industry insiders say the concepts of autonomous and assisted driving are blurred

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an L2-level intelligent driving system (an AI system by definition) that was active during a fatal traffic accident causing deaths. The AI system's limitations and the driver's failure to fully control the vehicle led directly to the harm. The article also discusses legal responsibility and regulatory responses, but the core event is a realized harm caused by the AI system's use and malfunction/limitations. Hence, it meets the criteria for an AI Incident due to injury/harm to persons directly linked to the AI system's involvement in the accident.
Is the media "whitewashing" by digging up Lei Jun's old reminder that "smart driving is assisted driving"?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (intelligent driving/driver assistance system) and its use. The video shows misuse of the system (driver sleeping with hands off the wheel), which is a failure to comply with safe use guidelines. The AI system issues warnings and slows down the vehicle if the driver is unresponsive, but the misuse still poses a credible risk of harm. The event does not report an actual accident or injury, so it is not an AI Incident. However, the potential for harm due to overreliance and misleading marketing is credible and plausible, fitting the definition of an AI Hazard. The broader discussion about terminology and marketing practices is complementary but secondary to the main event of plausible future harm from AI misuse.
Lessons from the Xiaomi accident: who is the accomplice devouring lives?

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 intelligent driving system is an AI system involved in autonomous or assisted driving functions. The fatal accident is linked to the AI system's limitations and the overtrust by users influenced by exaggerated marketing claims. The article details how the AI system failed to detect static obstacles and how the marketing misled users into overreliance, which directly or indirectly caused harm (death). This fits the definition of an AI Incident, as the AI system's use and malfunction have led to injury or harm to a person. The article also discusses systemic issues around AI system deployment and user education, reinforcing the incident classification rather than a mere hazard or complementary information.
Reflections on the Xiaomi SU7 accident: competition must shift from feature one-upmanship to safety first!

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the intelligent driving system (NOA) in the Xiaomi SU7 vehicle. The AI system's use and malfunction (inadequate perception and delayed human takeover) directly led to a fatal accident causing loss of life and property damage. This fits the definition of an AI Incident because the AI system's development and use have directly led to harm to persons (fatalities) and harm to property. The article also discusses broader implications for AI safety in autonomous driving, but the core event is a realized harm caused by the AI system's failure or limitations.
Clarity on safety is a precondition for technological development

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article discusses the regulatory framework for autonomous driving vehicles, which are AI systems controlling vehicles at L3 and above levels. While it does not report any actual harm or incidents caused by these AI systems, it clearly addresses the potential risks and safety concerns associated with their deployment. The regulation aims to mitigate plausible future harms by setting safety standards, monitoring requirements, and legal responsibilities. Therefore, this event is best classified as an AI Hazard, as it concerns the plausible future risks and governance of AI systems in autonomous vehicles rather than an actual incident or harm.
A three-second line between life and death: the "lethal grey zone" of human-machine co-driving behind the Xiaomi SU7 accident

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving assistance with NOA) whose use and malfunction (insufficient handover time and inadequate warnings) directly led to a fatal accident and vehicle explosion, causing harm to the driver and property. The AI system's role is pivotal in the incident, as the short reaction window and system design flaws contributed to the crash. The article also discusses regulatory and safety standard issues related to AI driving systems. This fits the definition of an AI Incident because the AI system's malfunction and use directly caused injury and harm to a person, fulfilling criterion (a).
Chat records of the Xiaomi SU7 driver's boyfriend surface, and the story takes a major turn

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's intelligent driving system (an AI system) was actively involved in the incident by issuing warnings and attempting to decelerate and steer the vehicle. However, the system's late warning and insufficient deceleration led to a collision causing fatal harm. The driver's overreliance on the AI system and failure to intervene timely also contributed. This constitutes an AI Incident because the AI system's malfunction and use directly led to injury and death (harm to persons).
A three-second line between life and death: the "lethal grey zone" of human-machine co-driving behind the Xiaomi SU7 accident

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving assistance with NOA) that was in use and whose operational design (handover timing, insufficient warning, and sensor limitations) directly contributed to a fatal accident. The harm (death and injury) has occurred, and the AI system's role is pivotal in the chain of causation. The article details the insufficient reaction time given by the AI system to the driver and the resulting crash, which fits the definition of an AI Incident. It is not merely a potential hazard or complementary information, but a realized harm caused by AI system use and malfunction.
Mercedes-Benz files a patent for a control method and computer program for autonomous vehicles that markedly shortens transit time

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The patent application involves an AI system designed for autonomous vehicle control, which includes detecting other vehicles and road users, evaluating priority for right of way, and controlling vehicle movement accordingly. However, the event only describes the development and intended use of the AI system, with no indication of any harm or malfunction occurring or any plausible risk of harm. Therefore, it does not qualify as an AI Incident or AI Hazard. It is a general AI-related development and thus classified as Complementary Information.
Are smart-driving cars still worth buying? Intelligent driving's "uncanny valley" moment

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving system) whose malfunction directly led to a fatal accident causing loss of life, which is a clear harm to persons. The article details specific AI system failures and their role in the incident, fulfilling the criteria for an AI Incident. The harm is realized and directly linked to the AI system's use and malfunction, not merely a potential risk or complementary information. Therefore, this qualifies as an AI Incident.
The Paper (澎湃新闻): smart driving is not autonomous driving, and neither drivers nor automakers can afford to be vague about it

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of assisted driving technologies (L2 level AI driving assistance). It discusses the misuse and misunderstanding of these AI systems by drivers, which could plausibly lead to harm, but does not report any actual harm or accident caused by the AI system. Therefore, it does not meet the criteria for an AI Incident. Instead, it highlights the potential risks and the need for clear communication and responsible behavior, which fits the definition of an AI Hazard. The article also serves as complementary information by clarifying misconceptions and urging caution, but since it primarily focuses on the plausible risk of harm due to misuse of AI-assisted driving, AI Hazard is the most appropriate classification.
Anhui expressway smart-driving warning signs: behind them, a renewed emphasis on safety responsibility!

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (Level 2 intelligent driving assistance) that contributed to a fatal accident, a harm to human life and health. The article details how the AI system failed to recognize construction obstacles, contributing to the crash. The subsequent upgrade of warning signs and the emphasis on driver responsibility are responses to this incident. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction directly contributed to harm.
"Smart driving" must not be overmarketed! Will three lost lives be a wake-up call for Xiaomi?

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOA intelligent driving assistance system in the Xiaomi SU7 vehicle. The system was in use at the time of the accident, issuing warnings and attempting to slow the vehicle. The collision and resulting deaths directly followed the AI system's operation and the driver's interaction with it. The harm is clearly realized (three fatalities), and the AI system's role is pivotal in the chain of events leading to the incident. The article also discusses safety feature discrepancies and overmarketing, but the core issue is the AI system's failure to prevent the fatal crash. Hence, this is an AI Incident rather than a hazard or complementary information.
The Xiaomi SU7 expressway fire: the scene photos shared by netizens are genuinely chilling

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as intelligent assisted driving engaged during the accident. The AI system provided obstacle warnings and began deceleration, indicating active use of AI in vehicle operation. The harm (high-speed crash and fire) occurred with the AI system's involvement, even though the driver switched to manual control before impact. This meets the criteria for an AI Incident because the AI system's use directly contributed to the circumstances of the accident and harm. Therefore, the event is classified as an AI Incident.
The hype around autonomous driving should be "braked to a stop"

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI-based driver assistance systems (L2 automated driving) which have directly contributed to fatal accidents. The article explicitly links the harm (loss of life) to overreliance on AI-assisted driving features that are not fully autonomous and require human supervision. This constitutes an AI Incident because the AI system's use and misunderstanding have directly led to injury and death (harm to persons).
Ministry of Emergency Management on the Xiaomi SU7 accident: smart-driving vehicles currently on sale are at most Level 2

2025-04-04
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of intelligent driving assistance (Level 2 autonomous driving features). While no actual harm or incident is described, the discussion centers on the plausible risk of harm arising from misuse or overreliance on these AI systems, which could lead to accidents or injuries. Therefore, this qualifies as an AI Hazard because it highlights credible potential for harm due to the AI system's use or misuse, but no direct harm has yet occurred according to the article.
Smart driving does not respond to cones, water-filled barriers, rocks, animals, and other obstacles; does that meet industry standards?

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and limitations of an AI system (the AEB autonomous driving safety feature) that directly impacts vehicle safety. The system's failure to respond to certain obstacles has led to accidents or increased risk of accidents, constituting harm to persons and communities. The article provides evidence of realized harm (accidents occurring due to non-response) and discusses the AI system's role in these incidents. Therefore, this qualifies as an AI Incident because the AI system's use and limitations have directly led to harm.
Sober reflections on the Xiaomi crash: safety is the greatest luxury!

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly references a car accident involving Xiaomi's intelligent driving system, which uses AI technologies such as vision and radar for assisted driving. The accident highlights the system's limitations and risks, especially in complex environments, leading to safety harm. The discussion of AI system capabilities, safety boundaries, and regulatory needs further supports the AI system's involvement in causing harm. Since the event involves realized harm to people due to the AI system's malfunction or limitations, it meets the criteria for an AI Incident rather than a hazard or complementary information. The article also discusses broader trends and responses but centers on the accident and its implications for safety, confirming the classification as an AI Incident.
Economic Observer editorial | Intelligent driving must not abandon the "people first" principle

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving/assisted driving technology) that was active at the time of a fatal accident causing loss of life, which is a direct harm to persons. The article explicitly states the vehicle was in NOA mode, an AI-assisted driving system, before the crash. This meets the criteria for an AI Incident because the AI system's use directly led to injury and death. The article also discusses broader implications and regulatory considerations but the core event is a realized harm caused by AI system use.
EV fire tragedies keep recurring; when will electric vehicle safety stop being frightening?

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes a fatal electric vehicle fire incident caused by a collision, with the vehicle equipped with intelligent driving systems. These systems involve AI for autonomous or assisted driving functions. The death of three individuals is a direct harm to persons. The article also highlights safety and reliability issues of AI-based intelligent driving, linking the AI system's malfunction or misuse to the fatal outcome. Hence, the event meets the criteria for an AI Incident as the AI system's use and potential malfunction directly contributed to the harm.
Smart driving runs wild, and young people endure "harrowing close calls"

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as intelligent assisted driving (NOA/autopilot) in use at the time of a fatal accident, directly causing harm (deaths). The article also details other incidents and near-misses linked to the use or malfunction of such AI systems, demonstrating direct or indirect harm to human health and safety. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use and malfunction have directly led to injury and death.
Did the driver overrely on smart driving? Did Xiaomi exaggerate its smart-driving features in its marketing?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Level 2 assisted driving relying on AI vision) whose use and possible misuse by the driver contributed to a fatal accident, causing injury or harm to a person. The article explicitly discusses the driver's overreliance on the AI system and the manufacturer's potentially exaggerated claims, which may have led to misunderstanding and misuse. This meets the criteria for an AI Incident because the AI system's use has indirectly led to harm (fatality).
Official response to the "owner abusing assisted driving" incident: the vehicle still requires manual control

2025-04-05
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system (the assisted driving system) is explicitly involved, and its misuse by the driver is described. Although no direct harm has occurred, the misuse could plausibly lead to harm such as traffic accidents or injury. Therefore, this event qualifies as an AI Hazard because it highlights a credible risk of harm due to misuse of an AI system. The official response and discussion of regulatory context provide complementary information but the main focus is on the potential for harm from misuse, not on an incident that has already caused harm.
Su Jing, former head of Huawei's smart-driving unit: Tesla's autonomous driving is a generation ahead; whoever cannot master autonomous driving has no business building robots

2025-04-05
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of autonomous driving technology but does not describe any event where AI development, use, or malfunction has directly or indirectly caused harm or disruption. It also does not indicate any plausible future harm from AI systems. Instead, it provides expert insights and reflections on the state of AI technology and industry perspectives, which fits the definition of Complementary Information as it enhances understanding of AI ecosystem developments without reporting a new incident or hazard.
Deeper lessons from the Xiaomi SU7 expressway accident: the contest between technology, human nature, and "original sin"

2025-04-05
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 accident involved an AI system (L2 assisted driving with NOA and AEB features) whose malfunction and inherent limitations directly contributed to a fatal crash. The article explicitly describes how the AI system's warnings and interventions were inadequate, the driver's overtrust in the system, and the failure of safety redundancies, all culminating in harm to human life. This fits the definition of an AI Incident because the AI system's use and malfunction directly led to injury and death, fulfilling harm criteria (a).
Intelligent driving ≠ autonomous driving. Traffic police warn: do not hand the vehicle over entirely to assisted driving

2025-04-03
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly references a fatal collision involving a vehicle equipped with intelligent driving assistance, which is an AI system designed to aid driving. The harm (fatal accident) has occurred, and the article discusses the risks of overreliance on such AI systems, including system limitations and potential failures. The involvement of the AI system in the incident is indirect but pivotal, as the system's assistance and the driver's overreliance contributed to the accident. The article also includes expert warnings about the dangers of treating assisted driving as full autonomous driving, reinforcing the link between AI system use and harm. Hence, this is an AI Incident rather than a hazard or complementary information.
After the Xiaomi incident, expressways in several regions post "do not use smart driving" signs; traffic police warn against treating smart driving as an excuse to play with your phone

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle involved in the crash is equipped with intelligent driving assistance features classified as L2 automation, which qualifies as an AI system under the definition. The incident directly involves the use and malfunction or misuse of this AI system, which has led to physical harm (vehicle crash and fire) and safety risks. The police warnings and new signage are responses to this incident. Therefore, this event qualifies as an AI Incident because the AI system's use and potential misuse have directly led to harm and safety concerns.
Deeper lessons from the Xiaomi SU7 expressway accident: the contest between technology, human nature, and "original sin"

2025-04-05
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 accident involved an AI system (L2 assisted driving with NOA and AEB features) whose malfunction or inherent limitations, combined with user overreliance and insufficient emergency response, directly led to a fatal collision. The article explicitly links the AI system's performance and user interaction to the harm (death) caused. This fits the definition of an AI Incident, as the AI system's use and malfunction directly led to injury or harm to persons. The detailed analysis of system warnings, braking performance, and user behavior confirms the AI system's pivotal role in the harm.
After the Xiaomi crash, smart driving returns to rationality

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the intelligent driving assistance system with NVIDIA DRIVE Orin chip, multiple sensors, and VLM software) whose use directly led to a fatal car crash. The AI system's malfunction or limitation in recognizing and responding to a complex road hazard (a construction detour with a concrete barrier) was a contributing factor in the accident. This caused injury and death, which qualifies as harm to persons. Therefore, this is an AI Incident as the AI system's use directly led to harm. The article also provides broader context and reflections on AI driving technology but the core event is a realized harm caused by AI system use.
After the Xiaomi crash, smart driving returns to rationality

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the intelligent driving assistance system in the Xiaomi SU7 car, which uses AI components such as Nvidia DRIVE Orin chip and VLM software. The system's failure to adequately detect and respond to a road hazard (a concrete barrier at a detour) and the very short human takeover time directly caused a fatal crash. This meets the definition of an AI Incident because the AI system's malfunction and use directly led to injury and death (harm to persons). The article also highlights systemic issues with AI driving assistance technology and user misunderstanding, but the core event is a realized harm caused by AI system malfunction and use.
"Cool-headed reflection" amid the smart-driving craze: safety is the greatest luxury

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an accident caused by a Xiaomi intelligent vehicle on the highway that resulted in a collision and fire, which is a direct harm to people's health and safety. The intelligent driving system (an AI system) is central to the incident, as the article discusses the limitations and risks of current AI-driven driving assistance technologies and the consequences of misleading marketing and insufficient regulation. The harm is realized, not hypothetical, and the AI system's malfunction or misuse is a contributing factor. Hence, this qualifies as an AI Incident under the OECD framework.
Xiaomi reportedly beta-testing a "Safety Score Beta" that evaluates driving behaviour to reduce accident risk

2025-04-02
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The 'Safety Score Beta' is an AI system that analyzes driving behavior data to generate a safety score and provide feedback. Its use aims to reduce accident risk by influencing driver behavior. Since it involves the use of AI to assess and influence driving safety, and it is intended to reduce harm (accidents), but no harm or incident is reported as having occurred, this qualifies as a complementary information item about an AI system deployment aimed at safety improvement rather than an incident or hazard. There is no indication of malfunction or misuse causing harm, nor a plausible risk of harm from the system itself.
From smart driving to driver takeover, experts say it takes at least 10 seconds to re-engage a distracted driver

2025-04-03
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous driving (an AI system) and concerns about the safety implications of the system's design, specifically the time allowed for driver takeover. Although no specific incident of harm is described, the article highlights a credible risk that insufficient warning time could lead to accidents or harm, thus representing a plausible future harm scenario. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm due to inadequate human-machine interaction timing.
Smart driving runs wild, and young people endure "harrowing close calls"

2025-04-05
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of intelligent driving assistance (NOA) used in vehicles. It details a fatal accident where the AI system was active and failed to prevent the crash, resulting in deaths. Additionally, it recounts multiple near-accidents and hazardous situations caused by AI system malfunctions or limitations. These constitute direct or indirect harm to persons, fulfilling the criteria for an AI Incident. The article is not merely about potential risks or general information but documents actual harm and incidents caused by AI system use.
Anhui expressway adds new reminder: road conditions are complex, do not use intelligent assisted driving

2025-04-04
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves a vehicle equipped with intelligent assisted driving, which is an AI system. The accident caused serious harm (three deaths), and the warnings about cautious use of intelligent driving indicate the AI system's involvement in the driving process. The accident's investigation and the family's interaction with Xiaomi staff further support the connection. Therefore, this is an AI Incident as the AI system's use has directly or indirectly led to injury and death, fulfilling the criteria for an AI Incident.
Intelligent-driving accidents fall into a Rashomon impasse

2025-04-04
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an intelligent driving assistance system (NOA) that was active during the accident. The system issued warnings but failed to prevent the collision, and the driver's limited reaction time was insufficient to avoid harm. The accident resulted in fatalities, which is a direct harm to people (harm category a). The article also highlights systemic issues such as data control by manufacturers and legal gaps, but the primary classification is based on the actual fatal accident caused by the AI system's malfunction or limitations. The fabricated rumor is a separate complementary detail but does not change the classification. Therefore, this event is an AI Incident.
Anhui expressway responds to changed smart-driving warning signs: no notice received; the change may have been made by units responsible for that road section

2025-04-05
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article discusses the use and communication about intelligent driving assistance systems (AI systems) and official safety messaging, but there is no indication of any harm, malfunction, or misuse leading to injury, rights violations, or other harms. The focus is on clarifying the capabilities and limitations of AI driving assistance and the importance of driver attention. This fits the definition of Complementary Information as it provides context, official guidance, and responses related to AI systems without reporting a new incident or hazard.
Anhui expressway responds to reminders to use assisted driving cautiously: safety first

2025-04-06
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of intelligent driver assistance systems, which are AI-based. The authorities' warnings indicate a plausible risk of harm if these systems are overrelied upon, especially during busy holiday periods. However, no actual harm or incident has been reported; the focus is on caution and safety recommendations. Therefore, this qualifies as an AI Hazard, as the use or misuse of these AI systems could plausibly lead to harm, but no harm has yet occurred or been reported.
The Beijing Autonomous Vehicles Regulation (《北京市自动驾驶汽车条例》) takes effect on April 1

2025-04-03
chd.in-en.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of autonomous driving vehicles, but the article primarily discusses the enactment of regulations and policies to govern their use and development. There is no indication of any realized harm or incident caused by AI, nor any direct or plausible future harm described. The content fits the definition of Complementary Information as it provides governance context and updates on AI system regulation without reporting an incident or hazard.
A conversation with QCraft's Yu Qian: democratizing smart driving, with safety at its core

2025-04-03
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in autonomous driving and discusses their development, deployment, and safety strategies. However, it does not describe any realized harm or direct/indirect incidents caused by these AI systems, nor does it indicate a plausible imminent harm. Instead, it focuses on safety as a priority, technical approaches to improve safety, and industry perspectives. This fits the definition of Complementary Information, as it provides supporting data and context about AI systems and their ecosystem without reporting a new AI Incident or AI Hazard.
Experts propose a "driving test" licensing mechanism for autonomous vehicles

2025-04-03
ebike.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of intelligent driving systems in autonomous vehicles and discusses their development and use. However, it does not describe any realized harm or incident caused by these AI systems. The content is primarily about expert advice and proposed regulatory frameworks to prevent future risks, which fits the definition of Complementary Information. There is no direct or indirect harm reported, nor a specific plausible hazard event occurring now. Therefore, the classification is Complementary Information.
Media commentary on the Xiaomi SU7 crash and fire that killed three: automakers must not compete away their bottom line

2025-04-06
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating in NOA mode, an AI-based assisted driving system, at the time of the fatal accident causing three deaths. The article highlights that the accident is partly attributable to the overpromotion and premature deployment of AI driving technologies, which can mislead consumers and contribute to unsafe reliance on these systems. The AI system's malfunction or limitations indirectly led to the harm. This fits the definition of an AI Incident, as the AI system's use directly or indirectly caused injury or harm to persons. The article also discusses broader industry issues but the core event is a fatal accident linked to AI-assisted driving, confirming the classification as AI Incident.
Treating assistance as full self-driving! One driver took both hands off the wheel and was not even in the seat; netizens call for heavy penalties

2025-04-04
Stockstar
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI-assisted driving system (L2 level) and a driver's misuse of it by treating it as fully autonomous driving, which exceeds the system's capabilities. This misuse directly endangered safety: the AI system's involvement is clear, and the dangerous driving behavior it enabled is a realized harm rather than a hypothetical one. The event is therefore not a mere potential hazard or complementary information but an incident in which AI misuse led to harm, so classification as an AI Incident is appropriate.
A blogger's reaction to the Xiaomi SU7 fire: novice drivers should never use smart driving, because you may not even know to hit the brakes

2025-04-04
Stockstar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as intelligent driving (L2-L3 level driver assistance) that was involved in a serious accident (explosion/fire) on the highway. The article discusses how overtrust and misunderstanding of the AI system's capabilities contributed to the accident, indicating the AI system's use and limitations played a role in the harm. This fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm to persons (potential injury or fatality implied by explosion and accident). The article also discusses the need for better user education and clearer communication of AI system boundaries, reinforcing the link between AI system use and harm. Therefore, the classification is AI Incident.

2025-04-03
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA intelligent driving mode, which was active during the fatal accident causing three deaths. The AI system's malfunction or limitations (e.g., AEB not activating for certain obstacles) and the overreliance on it by the driver directly contributed to the harm. The article details the incident, the role of the AI system, and the resulting harm (loss of life), fulfilling the criteria for an AI Incident. Although driver responsibility is emphasized, the AI system's involvement in the harm is clear and direct. Therefore, the classification is AI Incident.

Re-examining "intelligent driving"

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (navigation-assisted driving, NOA) that was active during a fatal crash. The AI system's delayed warning and the driver's overreliance on the system contributed directly to the deaths, fulfilling the criteria for an AI Incident. The article explicitly states the harm (loss of life) caused by the AI system's malfunction and the misuse of the technology. The discussion about marketing and regulatory issues supports the context but does not negate the direct harm caused. Hence, the classification as AI Incident is appropriate.

A car crash wipes HK$120 billion off Xiaomi's market value in two days - cnBeta.COM Mobile Edition

2025-04-03
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving assistance) whose use directly led to a fatal accident, causing injury and death (harm to persons). The AI system's malfunction or limitations are central to the incident, fulfilling the criteria for an AI Incident. The article provides detailed information about the AI system's role and the resulting harm, not merely potential or future risks, so it is not an AI Hazard or Complementary Information. Therefore, the classification is AI Incident.

Lei Jun responds to the SU7 fire accident, pledging continued cooperation with the investigation and responses to public concerns

2025-04-01
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating with an AI system (NOA intelligent assisted driving) at the time of the accident. The AI system detected obstacles and attempted to reduce speed, but the driver had to take over and despite this, the vehicle collided with a barrier causing fatal injuries. This constitutes direct harm caused by the use and possible malfunction or limitation of an AI system. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system in a fatal accident.

After the Xiaomi SU7 highway collision and fire, an owner says a smart-driving accident occurred on the same stretch of road; local authorities respond

2025-04-02
QQ新闻中心
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's NOA intelligent driving system is an AI system providing autonomous or semi-autonomous driving assistance. The accidents occurred while this system was in use, and the system failed to recognize or warn about the construction zone and lane changes, which are critical for safe navigation. This failure directly or indirectly contributed to the accidents, including a fatal one. The presence of harm to persons (fatalities) caused or contributed to by the AI system's malfunction or insufficient detection meets the criteria for an AI Incident. The report also discusses inadequate road signage and warnings, but the AI system's inability to adapt or alert the driver is pivotal in the harm caused.

Xiaomi's response fails to quell market doubts; industry calls on automakers to reflect

2025-04-02
经济参考报
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's intelligent driving system is an AI system involved in the event. The accident caused fatalities, which is direct harm to persons. The article explicitly discusses the possible role of the AI system in the accident and the safety concerns it raises. The harm has already occurred, and the AI system's malfunction or use is a contributing factor under investigation. Hence, this is an AI Incident rather than a hazard or complementary information. The article also includes calls for industry reflection and safety improvements, but the primary focus is on the incident and its consequences.

Xiaomi SU7: fatal crash kills 3; after Lei Jun speaks out on Weibo, a flood of comments defend "Lei" and disparage the victims' families - BBC News Chinese

2025-04-02
BBC
Why's our monitor labelling this an incident or hazard?
The Xiaomi vehicle was operating in an AI-assisted driving mode (NOA intelligent assisted driving), which qualifies as an AI system. The accident caused three deaths, a clear harm to persons. The AI system detected obstacles and attempted to slow down, but the driver took over and the vehicle collided with a barrier. The AI system's involvement in the accident and the resulting fatalities meet the criteria for an AI Incident, as the AI system's use directly led to harm to persons. The social media reaction and company statements provide context but do not change the classification.

Details of the Xiaomi SU7 highway collision released; share price falls more than 5%

2025-04-01
中关村在线
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating under an AI system (NOA intelligent assisted driving) at the time of the accident. The AI system detected obstacles and attempted to reduce speed but the vehicle still collided with a concrete barrier. The accident caused harm, triggering emergency services, and raised safety concerns about AI-related vehicle features. The AI system's malfunction or limitations contributed directly to the incident. Therefore, this event meets the criteria for an AI Incident due to direct harm linked to the AI system's use and malfunction.

Xiaomi crash kills three; German media focus on intelligent driver assistance

2025-04-02
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Xiaomi's intelligent driver assistance system (NOA) during the fatal crash. The system was active, provided alerts, and was partially controlled by the driver before the collision. The accident caused three fatalities, which is a clear harm to persons. The AI system's role in the sequence of events leading to the crash is direct and pivotal, as the system's delayed or insufficient hazard recognition and handover timing are under scrutiny. This meets the criteria for an AI Incident because the AI system's use directly led to injury and death. The article also discusses the system's limitations and prior criticisms, reinforcing the AI system's involvement in the harm.

Did the smart-driving system respond in time? Why did the vehicle burn? Were the doors locked? Three questions on the Xiaomi SU7 crash that killed three

2025-04-02
guba.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOA intelligent driving assistance system) that was active during the accident. The AI system's delayed warning and insufficient intervention contributed to the collision and subsequent fatalities, constituting direct harm to persons. The article also discusses the AI system's limitations and the vehicle's safety features, linking the AI system's performance to the incident. Hence, this qualifies as an AI Incident under the OECD framework because the AI system's use and malfunction directly led to injury and death.

Share price falls over 5%! Lei Jun has just responded to the SU7 accident

2025-04-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The article describes a fatal traffic accident involving a Xiaomi SU7 vehicle operating under NOA intelligent assisted driving mode. The AI system was actively controlling the vehicle and detected obstacles, issued warnings, and began deceleration, but the driver had to take over before the collision occurred. The accident resulted in three deaths, which is a direct harm to persons. The AI system's involvement in the vehicle's operation and the accident makes this an AI Incident. The CEO's public response and ongoing investigation further confirm the seriousness of the incident.

Reflecting on the Xiaomi SU7 highway accident: users need to be "educated", and automakers need "re-education" too

2025-04-03
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving assistance including AEB and NOA) whose limitations and operational boundaries contributed to a fatal accident causing injury and death. The AI system's failure to detect or respond to the guardrail obstacle and the insufficient warning time for driver takeover are direct factors in the harm. The users' overreliance on the AI system, influenced by marketing and misunderstanding, is an indirect factor. The article clearly describes realized harm (fatalities) linked to the AI system's use and malfunction, meeting the criteria for an AI Incident under the OECD framework.

Xiaomi SU7 highway crash sparks controversy; three major questions remain unresolved | In Focus

2025-04-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle involved in the accident was operating in an AI-assisted driving mode (NOA) with forward collision warning and emergency braking systems active. The AI system detected an obstacle and initiated deceleration, but the collision still occurred, resulting in fatalities. The AI system's limitations and performance are central to the incident, indicating direct involvement of AI in causing harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and possible malfunction directly led to injury and death of persons.

Did the smart-driving system respond in time? Why did the vehicle burn? Were the doors locked? Three questions on the Xiaomi SU7 crash that killed three

2025-04-02
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (NOA intelligent driving assistance) that was active during the accident. The AI system detected obstacles and issued warnings but only allowed 2-3 seconds before collision, which was insufficient for the driver to respond effectively. The accident caused three fatalities, which is a direct harm to persons. The AI system's development, use, and possible malfunction or limitations contributed to the incident. The article also discusses the vehicle's fire and door locking issues post-collision, which relate to safety features but do not negate the AI system's role in the incident. Given the direct causal link between the AI system's operation and the fatal harm, this event is classified as an AI Incident.

Five major questions about the Xiaomi car accident

2025-04-02
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous driving and AEB) whose malfunction or limitations likely contributed to a fatal accident causing loss of life. The article details the circumstances and technical concerns related to the AI system's performance and safety features. Since the AI system's use and possible malfunction directly or indirectly led to harm (fatalities), this qualifies as an AI Incident under the OECD framework.

Five questions on the Xiaomi SU7 "fire accident"

2025-04-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent driving assistance) whose use and possible malfunction or limitations directly contributed to a fatal accident causing three deaths. The AI system was active and issuing warnings before the collision, and the driver had to take over control shortly before impact. The incident includes harm to persons (fatalities), which meets the criteria for an AI Incident. The involvement of the AI system in the development, use, and possible malfunction leading to harm is clear. The event is not merely a potential hazard or complementary information but a realized incident with direct harm caused by the AI system's role in the accident.

Xiaomi Auto releases the data log from the SU7 highway fire accident: just five seconds from the NOA risk alert to the crash

2025-04-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (NOA intelligent assisted driving) that was active during the accident. The AI system detected obstacles and issued warnings but was unable to prevent the collision, which caused significant harm. The event directly links the AI system's use and its limitations to the harm caused by the accident. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction directly led to injury or harm to persons (or at least a serious traffic accident with potential injury and property damage).

Xiaomi SU7 accident tests the limits of vision-only smart driving; the LiDAR approach may be revalued

2025-04-02
东方财富网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle uses an AI system for autonomous driving based on pure vision (camera) sensors without LiDAR. The accident occurred because the AI system failed to timely detect and respond to irregular road obstacles in a low-light, complex construction zone, leading to a collision at high speed. This is a direct malfunction of the AI system during its use, causing physical harm risk and financial harm (stock price drop). The event clearly involves an AI system, its malfunction, and realized harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Why did the Xiaomi SU7 catch fire after the accident? Could the doors be opened? Latest responses from Lei Jun and Xiaomi Auto

2025-04-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating with an AI system (NOA intelligent assisted driving) at the time of the accident. The system detected obstacles, issued warnings, and initiated deceleration, but the driver took over and the vehicle still collided with a barrier, leading to a fatal fire. The AI system's operation and its interaction with the driver are central to the incident. The harm (three deaths) is direct and severe. The official investigation and data submission confirm the AI system's involvement. Hence, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use and malfunction.

Eastmoney Financial Breakfast: Wednesday, April 2

2025-04-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The article references AI systems and technologies (e.g., autonomous driving, AI robots, AI model financing) but does not describe any event where AI use or malfunction has caused harm or disruption. The autonomous driving accident is described factually with no explicit attribution of harm caused by AI malfunction; the driver took over control and the incident is under investigation. Other AI mentions relate to product launches, investments, or policy updates without harm. Thus, the article provides supporting information and ecosystem context rather than reporting an AI Incident or Hazard. It fits the definition of Complementary Information as it enhances understanding of AI developments and responses without introducing new harm or plausible harm.

Why didn't Xiaomi contact the families after the SU7 accident? Xiaomi Auto responds

2025-04-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent assisted driving) that was active during the accident. The system detected obstacles, issued warnings, and initiated deceleration but did not prevent the collision. The accident caused physical harm and property damage, fulfilling the criteria for an AI Incident. The report also discusses the system's limitations and the investigation status, but the core event is a realized harm linked to AI system use and malfunction.

Five major questions about the Xiaomi car accident remain; Lei Jun speaks out late at night: "I must step forward and make a commitment on Xiaomi's behalf!"

2025-04-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOA intelligent assisted driving system active during the accident. The AI system's detection, decision-making, and handover to the human driver are central to the incident. The failure of the AI system to prevent the collision, the short reaction time for the driver after AI disengagement, and the non-activation of the automatic emergency braking system are all factors contributing to the fatal outcome. The harm (death of three individuals) has occurred and is directly linked to the AI system's use and malfunction. Hence, this is an AI Incident rather than a hazard or complementary information.

Obstacle alert just 2 seconds before impact! Xiaomi SU7 collides and catches fire: is intelligent assisted driving reliable?

2025-04-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent assisted driving) that was active during the accident. The system detected obstacles, issued warnings, and attempted to decelerate, but the driver took over too late, and the vehicle collided with a barrier causing fatalities. This directly links the AI system's use and its limitations to the harm caused. Therefore, this qualifies as an AI Incident due to direct harm to persons resulting from the AI system's involvement in the accident.

Five major mysteries of the Xiaomi SU7 accident! Reaction time, fire, AEB, emergency door handles, advanced smart driving... In-depth analysis from multiple experts! Would LiDAR have made a difference? Real-world test video

2025-04-02
东方财富网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating with an AI-assisted driving system (NOA intelligent assisted driving) at the time of the accident. The article details how the AI system issued risk warnings but did not trigger emergency braking, and the driver had very limited time to react before the collision and subsequent fire that caused three deaths. The AI system's failure to adequately intervene or prevent the accident is a direct contributing factor to the harm. The discussion of the AI system's limitations, reaction time, and failure to activate AEB confirms the AI system's role in the incident. Hence, this event meets the criteria for an AI Incident as the AI system's use directly led to injury and death.

Behind the Xiaomi highway tragedy: the "map-free end-to-end" smart-driving system was fully rolled out only in February

2025-04-02
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the intelligent driving system (NOA) with emergency braking and collision avoidance features. The system's failure to detect road construction barriers and adapt to lane changes contributed directly to a severe accident with physical harm. The article details the AI system's design choices (no high-definition maps, reliance on visual sensors) and their role in the incident. The harm is realized (accident, fire, possible injuries or fatalities), not just potential. Hence, this is an AI Incident due to the AI system's malfunction and use leading to injury and harm to persons.

Market value shrinks by 80 billion and doubts remain over "AEB intervention": the Xiaomi SU7 faces its "darkest hour"

2025-04-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Xiaomi SU7's autonomous driving and driver assistance features including NOA and AEB) whose use directly led to a serious traffic accident causing harm. The AI system detected obstacles and issued warnings but did not prevent the collision, possibly due to limitations in the AI's perception and decision algorithms. The accident and its consequences (including financial loss and safety concerns) constitute harm. The AI system's malfunction or limitations are a contributing factor to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

Five major mysteries of the Xiaomi SU7 accident remain! Experts' in-depth analysis: physical buttons are well worth keeping, and the AEB's failure to trigger was not necessarily due to short distance

2025-04-02
东方财富网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle involved an AI system (NOA intelligent assisted driving and AEB) that was active during the accident. The AI system detected risk and issued warnings but failed to prevent the collision, and the AEB did not trigger, contributing to the accident and fatalities. The article explicitly connects the AI system's performance and limitations to the harm (death of three people, vehicle fire). The AI system's malfunction or insufficient capability is a direct contributing factor to the incident. Hence, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use and malfunction.

Victims' families respond to Lei Jun: lives have been lost, and automakers should treat such tragedies with due reverence

2025-04-02
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Xiaomi's intelligent driving system with AEB and collision warning features) whose use in the vehicle is directly linked to a fatal accident causing loss of life, which is a clear harm to persons. The discussion about whether the AI safety features triggered or failed to trigger, and the vehicle's behavior during the crash, indicates the AI system's involvement in the incident. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use or malfunction.

Behind the Xiaomi SU7 crash: a "storm" over smart-driving safety

2025-04-03
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving assistance, NOA) that was actively used during the accident. The system's late warning and short takeover time contributed to the collision and subsequent deaths, constituting harm to persons. The article explicitly links the AI system's performance and design limitations to the accident, including regulatory and safety concerns. This meets the criteria for an AI Incident, as the AI system's use directly led to injury and death. The discussion of broader safety and regulatory issues supports the classification but does not override the primary incident classification.

Xiaomi SU7 crash kills 3; so-called "smart driving" comes under question

2025-04-03
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the intelligent driving assistance system (NOA and AEB) in the Xiaomi SU7 vehicle. The system's use and malfunction directly led to a fatal car crash causing three deaths, which is a clear harm to persons (harm category a). The article details the timeline of AI system warnings, driver takeover, and collision, showing the AI system's pivotal role in the incident. The failure of the AI system to adequately detect obstacles and to engage emergency braking contributed to the accident. The event meets the criteria for an AI Incident because the AI system's malfunction and use directly caused injury and death. The article also discusses broader implications and industry practices but the primary focus is the incident itself.

Xiaomi SU7 fire kills 3; the accident report draws doubts from victims' relatives and friends

2025-04-02
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle involved in the fatal accident is equipped with an AI-based intelligent driving system that provides autonomous driving assistance and safety features. The accident report and subsequent discussions focus on the AI system's warnings and actions (NOA and AEB), which were either insufficient or delayed, contributing to the collision and resulting deaths. The AI system's inability to detect certain obstacles and the timing of its warnings are central to understanding the cause and consequences of the accident. Since the AI system's use and malfunction played a direct or indirect role in causing harm (three deaths), this event meets the criteria for an AI Incident under the OECD framework.

[Qin Peng Observations] Three young women dead in the Xiaomi SU7 crash: who is to blame?

2025-04-02
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—the Xiaomi SU7's NOA intelligent assisted driving system—which was active during the accident. The system detected an obstacle and issued a warning only 2 seconds before collision, with the driver taking over control shortly before impact. The limited reaction time and the system's inability to prevent the crash contributed to the fatal outcome. The deaths of three individuals constitute injury or harm to persons, fulfilling the harm criteria for an AI Incident. The discussion of the AI system's performance, the vehicle's safety features, and the regulatory context further supports the classification. Hence, this is not merely a potential hazard or complementary information but a realized AI Incident.

Xiaomi releases details of the SU7 highway collision and fire; share price falls over 5%

2025-04-01
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating under the NOA intelligent assisted driving system at the time of the crash, which is an AI system managing vehicle control. The collision and resulting fire caused fatalities, constituting injury and harm to persons. The AI system's operation and its handover to human control are directly linked to the timing and circumstances of the crash. The inability to open the doors and the fire raise further safety concerns possibly related to the vehicle's design and AI system integration. Given the direct causal link between the AI system's use and the fatal harm, this event meets the criteria for an AI Incident.

Families of the Xiaomi SU7 highway accident victims speak out; Lei Jun responds

2025-04-01
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's intelligent driving assistance system was active and involved in the events leading to the crash. The system detected obstacles and attempted to slow down, but the vehicle still collided with a guardrail and caught fire. The inability to open the car doors after the crash contributed to the fatalities. This directly links the AI system's use and potential malfunction to harm (death of three individuals). The event meets the criteria for an AI Incident because the AI system's use and possible failure directly led to injury and death, which is harm to persons. The ongoing investigation and public concern about the AI system's safety further support this classification.

Fire accident kills three young women; Xiaomi's market value loses 120 billion

2025-04-02
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Navigate on Autopilot' intelligent driving assistance) during the accident. The AI system's failure to recognize obstacles timely and to trigger emergency braking is linked to the accident's cause, which directly led to the deaths of three individuals. This meets the definition of an AI Incident, as the AI system's malfunction or limitations directly contributed to injury and death. The harm is realized and significant, and the AI system's role is pivotal in the chain of events leading to the fatalities. Therefore, the event is classified as an AI Incident.

SU7 fire kills 3: Xiaomi Auto faces its most severe crisis of trust

2025-04-01
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an intelligent driving system (autonomous or semi-autonomous driving features) that was active during the accident. The system's use and possible malfunction or limitations contributed directly or indirectly to the fatal crash. The article details the vehicle's behavior controlled by the AI system, the driver's interaction with it, and the resulting harm—three deaths. This meets the definition of an AI Incident because the AI system's use has directly led to injury and death (harm to persons). Although the full investigation is pending, the article provides sufficient information to classify this as an AI Incident rather than a hazard or complementary information. The event is not unrelated, as the AI system's role is central to the incident.

Victims' families give an update on the Xiaomi SU7 accident: they have met with Xiaomi staff, and the accident is still under investigation

2025-04-02
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the vehicle was in NOA intelligent assisted driving mode, an AI system for driving assistance, at the time of the accident. The accident caused fatalities, which is a direct harm to persons. The AI system's operation and its interaction with the driver are central to the incident. Hence, this is an AI Incident as the AI system's use directly led to harm (fatalities).

After the Xiaomi SU7 accident, everyone should understand: smart driving can never replace humans!

2025-04-04
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOA intelligent driving assistance system) actively engaged during the accident. The system's delayed risk warning and failure to prevent the collision directly contributed to the fatal harm (deaths of vehicle occupants). The article provides detailed information about the AI system's capabilities and limitations, confirming AI involvement in the incident. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction or insufficient performance in a critical scenario.

Xiaomi: so far it can only confirm the crashed car did not "spontaneously combust", contrary to some online claims

2025-04-01
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving) that was active during the accident and influenced vehicle behavior (obstacle detection, deceleration). Although the driver took over before the collision, the AI system's performance and limitations are relevant to the incident. The collision caused physical harm (vehicle damage and fire), which qualifies as harm to property and potential injury risk. Therefore, this qualifies as an AI Incident due to the AI system's involvement in the chain of events leading to harm, even if indirectly.

Lei Jun responds to the SU7 highway collision and fire that killed 3: whatever happened, Xiaomi will not evade it

2025-04-01
驱动之家
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating in NOA intelligent assisted driving mode, which is an AI system for autonomous driving assistance. The accident caused three fatalities, which is a direct harm to persons. The AI system detected obstacles and issued warnings but the driver took over control before the collision, indicating the AI system's involvement in the chain of events leading to the harm. The event is a clear example of harm caused directly or indirectly by the use of an AI system, fulfilling the criteria for an AI Incident.

Xiaomi responds to six major questions about the crash that killed 3: why the families weren't contacted, why the car caught fire, and whether the doors could be opened

2025-04-01
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent assisted driving) that was active during the accident. The system detected obstacles, issued warnings, and began deceleration, but the driver took over and the vehicle collided with a barrier, leading to a fatal crash and subsequent fire. The AI system's operation and limitations are central to the incident, which caused injury and death, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in the chain of events.

Five major questions about the Xiaomi SU7 fire remain unanswered; netizens urge Lei Jun to respond in person

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating in NOA intelligent driving assistance mode at the time of the accident, which involved AI-based perception and control functions. The accident caused multiple fatalities, fulfilling the harm criterion. The AI system's warnings, braking functions, and response to obstacles are questioned, indicating potential malfunction or failure in use. The event directly involves an AI system whose operation is linked to the harm caused. Therefore, this is classified as an AI Incident.

72 hours later, Lei Jun speaks

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article describes a fatal accident involving a vehicle equipped with an AI-based intelligent driving assistance system (NOA). The system was active and controlling the vehicle at the time of the crash, which resulted in three deaths. The AI system's warnings, takeover timing, and emergency braking functions are questioned, indicating potential malfunction or inadequacy. The direct link between the AI system's use and the fatal harm meets the definition of an AI Incident, as the AI system's use has directly led to injury and death. The article also discusses the company's response and public reaction, but the core event is the fatal crash involving the AI system.

Xiaomi responds to the SU7 fire that killed 3; share price plunges nearly 6%

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (NOA intelligent driving assistance) whose use directly led to a fatal accident with three deaths and total vehicle destruction. The AI system's malfunction or limitations contributed to the incident, as the vehicle was in autonomous mode and the system detected obstacles but could not avoid the collision. This meets the criteria for an AI Incident because the AI system's use directly caused harm to persons and property. The detailed investigation and public responses further confirm the AI system's pivotal role in the incident.

Xiaomi SU7 was still in assisted-driving mode about three seconds before the collision; customer service: after a hands-off warning the car automatically slows to a stop

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's NOA system is an AI-based intelligent driving assistance system that was active during the accident. The system issued warnings and attempted to intervene by decelerating, but the timing and effectiveness of these interventions were insufficient to prevent the collision. This indicates a malfunction or limitation in the AI system's operation or its interaction with the driver, which directly led to harm (the traffic accident). Therefore, this qualifies as an AI Incident because the AI system's use and malfunction directly contributed to injury or harm to persons (harm category (a): injury or harm to persons).

Xiaomi's statement: an egregious exercise in blame-shifting

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the intelligent driving assistance system NOA) in a vehicle that was involved in a fatal accident causing three deaths. The article discusses how the AI system's limitations and the company's handling of the incident contributed to the harm. The AI system's malfunction or inadequate performance (e.g., insufficient warning time, failure to handle construction zone scenarios) is a contributing factor to the deaths. This fits the definition of an AI Incident, as the AI system's use and malfunction directly or indirectly led to injury or harm to persons. The company's statement and response are part of the incident context but do not change the classification.

Mother of Xiaomi SU7 crash victim: many questions await answers; hoping the truth will be clarified

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOA intelligent assisted driving system in the Xiaomi SU7 vehicle. The accident caused a fatality, which is a clear harm to a person. The AI system was active and its performance, including the timing of the handover to manual control and the vehicle's response to obstacles, is central to the incident. The description indicates the AI system's malfunction or limitations contributed to the crash and subsequent harm. Hence, this is an AI Incident as per the definition of harm caused directly or indirectly by the use or malfunction of an AI system.

The fatal Xiaomi SU7 accident: three major questions unresolved

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's NOA intelligent driving system is an AI system involved in the event. The accident caused direct harm to human life (three deaths), fulfilling the criteria for an AI Incident. The AI system's failure to detect obstacles early enough and the short reaction time contributed to the collision. Additionally, the malfunction or design limitations of the AI system (lack of lidar in the standard model) and the vehicle's electronic door locking system (which failed to open after the crash) are linked to the harm. The event is not merely a potential hazard or complementary information but a realized incident with direct harm caused by the AI system's use and malfunction.

300 life-or-death seconds in an EV: enough time to escape?

2025-04-02
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned (NOA intelligent assisted driving) that was active and in use at the time of a fatal crash and fire. The article details the harm caused (three deaths) and discusses the AI system's involvement in the sequence of events leading to the incident. The presence of the AI system and its use directly contributed to the circumstances of the accident. The harm is realized and significant (loss of life and fire hazard). Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Traveling at 112 km/h, impacting at 97 km/h: a faithful reconstruction of the Xiaomi crash

2025-04-02
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving/autonomous driving system) whose malfunction or inadequate performance directly led to a fatal car crash, causing injury and death. The article discusses the use and failure of the AI system in real-world conditions, resulting in harm to a person. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and malfunction directly caused harm to a person.

Why weren't the families contacted? Could the doors be opened? Xiaomi responds to six major questions about the "fire that killed three"

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving) that was in use during the accident. The system detected obstacles and initiated deceleration but the driver took over shortly before the collision. The collision and resulting explosion caused fatalities, which is a direct harm to persons. The AI system's role in the accident, including its detection and response capabilities, is pivotal to understanding the incident. Therefore, this qualifies as an AI Incident because the AI system's use and its limitations or malfunction indirectly led to injury and death.

Media: in the Xiaomi SU7 accident, truth matters more than emotion

2025-04-02
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving system) whose use and possible malfunction (late risk warning, door locking) are implicated in a fatal accident causing loss of life. The harm (death of three persons) has occurred, and the AI system's role is pivotal in the chain of events leading to this harm. Although the investigation is ongoing and some details remain uncertain, the direct link between the AI system's performance and the fatal incident justifies classification as an AI Incident rather than a hazard or complementary information. The article focuses on the incident and its consequences rather than on responses or broader ecosystem context, so it is not complementary information.

A life-and-death warning amid technology's breakneck race: in the era of mass assisted driving, who should pay for the "education vacuum"?

2025-04-02
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an L2 intelligent driving assistance system (NOA) that was active during the accident. The system's failure to adequately perceive road obstacles and to trigger automatic emergency braking, combined with the driver's insufficient reaction time and lack of proper training, directly contributed to the fatal collision and fire. This constitutes direct harm to human life caused by the AI system's malfunction and use. The article also highlights the broader context of inadequate consumer education and regulatory gaps, but the primary classification is an AI Incident because the harm has occurred and is linked to the AI system's development and use.

Mother of Xiaomi SU7 crash victim deletes related posts

2025-04-03
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating with its NOA intelligent assisted driving system active at the time of the fatal crash. The AI system's operation and its interaction with the driver are central to the incident. The accident caused direct harm (three deaths), fulfilling the criteria for an AI Incident. The article details the use and malfunction or failure of the AI system to prevent the accident, and the subsequent fire. Hence, this is not merely a hazard or complementary information but a realized harm caused directly or indirectly by the AI system's use.

Boyfriend and mother of Xiaomi crash victim have deleted their posts

2025-04-03
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 is an AI-enabled vehicle with autonomous driving features. The accident involved a collision and subsequent explosion causing three deaths. The report mentions that the NOA system was engaged just before the collision, indicating AI system involvement in the vehicle's operation. The fatalities constitute injury or harm to persons, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm directly linked to the AI system's use or malfunction.

Were the Xiaomi SU7's doors "locked shut" during the accident? Industry insider: it can happen if the unlock signal cannot be transmitted

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (NOA intelligent driving assistance) that was active and issued warnings before the crash. The malfunction or failure of the vehicle's electronic systems, which include AI components, likely contributed to the inability to unlock the doors after the crash, directly impacting the victims' ability to escape and survive. The harm (fatalities and potential injury due to inability to escape) has occurred, and the AI system's malfunction or limitations are a contributing factor. Thus, this event meets the criteria for an AI Incident as defined by the framework.

Fengsheng | After the tragedy, EV design language should shift from flashy to safe

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA intelligent driving assistance active during the accident. The AI system detected obstacles, issued warnings, and was part of the vehicle's operation leading up to the crash. The harm—fatal injuries due to fire and inability to escape—is realized. The AI system's warnings and control transitions are part of the causal chain, making its role indirect but pivotal. The article also discusses systemic design issues in electric vehicles that exacerbate harm, but the AI system's involvement in the accident and harm is clear. Hence, this is an AI Incident rather than a hazard or complementary information.

Xiaomi Auto enters its most severe trust crisis since founding

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the intelligent driving (autonomous driving) system in the Xiaomi SU7 vehicle. The accident caused direct harm to human life (three fatalities), which fits the definition of an AI Incident. The article links the harm to the use and possible malfunction or limitations of the AI system, including questions about the system's behavior during the crash and the broader context of overreliance on such technology. The involvement of the AI system in the development, use, or malfunction leading to harm is clear. Hence, the classification as an AI Incident is appropriate.

Lei Jun's Xiaomi is treading carefully in the face of questioning

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA intelligent assisted driving system active during the accident. The system's operation and its interaction with the driver are central to the incident. The accident caused fatalities, which is a direct harm to persons. The article discusses the AI system's role in the accident and the ongoing investigation into its performance and safety. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (fatalities) and is under scrutiny for safety and reliability issues.

On when AEB cannot be used: we combed through every automaker's manual

2025-04-03
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article does not report a new AI Incident or AI Hazard. It does not describe an event where AI system malfunction or misuse directly or indirectly caused harm, nor does it describe a plausible future harm scenario from AI systems. Instead, it provides an in-depth explanation and summary of how AI-based vehicle safety systems work, their limitations, and the importance of driver responsibility. This fits the definition of Complementary Information, as it enhances understanding of AI systems and their impact without reporting new harm or risk.

The thing Lei Jun feared most has happened after all

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system, the NOA intelligent assisted driving system, which was active at the time of the fatal accident. The AI system detected an obstacle and issued a warning, but the collision still occurred shortly after, indicating a malfunction or failure in the AI system's operation. The accident resulted in the deaths of three individuals, which is a direct harm to human health and life. The AI system's role is pivotal in this incident as it was responsible for the assisted driving function that failed to prevent the crash. Hence, this event meets the criteria for an AI Incident due to the direct harm caused and the AI system's involvement in the accident.

"SU7 high-speed collision and fire": Xiaomi Auto publishes "Answers to Questions of Public Concern"

2025-04-01
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: the NOA intelligent assisted driving system with features like collision warning and emergency braking (AEB). The system was active during the accident, issued warnings, and began deceleration, but the driver took over and the crash occurred. The crash caused harm (fire, emergency response), fulfilling the criteria for injury or harm to persons and property damage. The AI system's malfunction or limitations (e.g., not responding to certain obstacles) and its use are directly linked to the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Assisted driving, please answer: who is running naked between technology and the law?

2025-04-02
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an intelligent driving assistance system (highway NOA) used on a highway. The system's use directly led to a fatal accident causing loss of life, which is a clear harm to persons. The article discusses the malfunction or limitations of the AI system and the user's failure to properly supervise or take over control, which is a direct causal factor in the incident. The discussion of legal and regulatory gaps and misleading marketing further supports the classification as an AI Incident rather than a hazard or complementary information. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use and its role in the accident.

Behind the Xiaomi SU7 crash: the LiDAR units that were removed

2025-04-02
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's autonomous driving system is an AI system as it performs navigation and driving assistance using computer vision. The accident involved the use of this AI system (use phase), which failed to detect or respond adequately to the road construction and lane change requirement, leading to a fatal crash. The harm (death of driver and passengers) is directly linked to the AI system's limitations and its deployment without LiDAR, which is known to provide safety redundancy. The article provides detailed evidence of the AI system's role in the incident, meeting the criteria for an AI Incident. The discussion of other similar accidents and the system's design choices further supports this classification. There is no indication that this is merely a potential risk or a complementary information update; the harm has occurred and is linked to the AI system's use and malfunction.

Lei Jun responds to the Xiaomi SU7 high-speed collision, official post addresses key questions / Ctrip formally launches three-day childcare leave / OPPO unveils its first imaging brand

2025-04-02
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA assisted driving) actively engaged during a fatal crash, with detailed information about its operation and limitations. The accident caused injury and death, fulfilling the harm criteria. Xiaomi's official statements and investigation focus on the AI system's role and safety features, confirming its direct involvement. Therefore, this qualifies as an AI Incident due to direct harm caused in connection with the AI system's use and malfunction or limitations.

Families of the Xiaomi crash victims speak out again: who is at fault for hyping "smart driving"?

2025-04-02
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the Xiaomi SU7's intelligent driving assistance system (Xiaomi Pilot Pro) operating in NOA mode. The system's use and its interaction with the driver directly contributed to the fatal accident, fulfilling the criteria for an AI Incident. The harm is realized (three deaths), and the AI system's malfunction or limitations in handling the road conditions and driver interaction are central to the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

SU7 fire kills three: Xiaomi Auto responds to six major questions

2025-04-01
南方网
Why's our monitor labelling this an incident or hazard?
The event describes a fatal car accident involving a vehicle operating in an AI-assisted driving mode (NOA intelligent assisted driving) with active safety features (AEB). The AI system was actively engaged and its outputs (warnings, speed adjustments) influenced the vehicle's behavior before the crash. The incident caused direct harm (three deaths), fulfilling the criteria for an AI Incident. The detailed response from Xiaomi and ongoing investigations do not negate the fact that the AI system's use and performance are central to the incident. Hence, this is classified as an AI Incident.


2025-04-02
guancha.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA intelligent assisted driving mode, which was active during the accident. The AI system's use and possible malfunction or limitations in detecting construction signs and obstacles contributed indirectly to the fatal harm (deaths of three persons). This meets the criteria for an AI Incident because the AI system's use directly or indirectly led to injury and death, fulfilling harm to persons. The event is not merely a hazard or complementary information, as the harm has already occurred and is linked to the AI system's operation.


2025-04-02
guancha.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle includes an AI system component, the AEB, which is designed to prevent collisions by detecting obstacles and automatically braking. The article discusses the AEB's functionality and limitations, indicating its role in the accident context. The accident resulted in fatalities, which is a direct harm to persons. Since the AI system's use (or potential malfunction or limitation) is directly linked to the harm, this meets the criteria for an AI Incident. The article does not only discuss potential risks but reports an actual harmful event involving an AI system.

2025-04-02
guancha.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle involved in the accident is equipped with AI-based intelligent driving features, including emergency braking and collision warning systems. The accident caused the deaths of three individuals, which is a direct harm to human life. The investigation and public discussion focus on whether the AI safety systems malfunctioned or failed to prevent the accident, indicating the AI system's role in the harm. Hence, the event meets the criteria for an AI Incident due to the AI system's involvement in causing injury or harm to persons.

Follow-up on the SU7 accident that killed three: Lei Jun says he still has not been able to access the crashed vehicle

2025-04-01
caixin.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (NOA intelligent assisted driving) in use at the time of the accident. The AI system's operation and its interaction with the driver are directly linked to the fatal crash causing deaths, which is a harm to persons. Therefore, this qualifies as an AI Incident because the AI system's use directly led to injury and death. The description of the AI system's behavior and the accident details support this classification.

Xiaomi Auto enters its most severe trust crisis since founding

2025-04-01
huxiu.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving/autonomous driving) that was in use at the time of a fatal car crash. The AI system's involvement is central to the incident, as the article discusses the vehicle's intelligent driving features and the accident occurring during such operation. The harm is direct and severe (three deaths), fulfilling the criteria for an AI Incident. The article also references previous similar incidents and the broader societal and regulatory context, but the primary focus is on the realized harm caused by the AI system's use or malfunction. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

[Photos] Why did it catch fire? Which rumors are false? Xiaomi Auto officially answers questions about the SU7 accident in Anhui

2025-04-01
汽车之家(Autohome.com.cn)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving) that was active and influencing vehicle behavior at the time of the accident. The collision and subsequent fire constitute harm to property and potentially to persons. The AI system's operation and its interaction with the driver are directly related to the incident. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction have directly led to harm.

[Photos] New developments in the Xiaomi highway accident: families say they have met with Xiaomi and are awaiting results

2025-04-03
汽车之家(Autohome.com.cn)
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was equipped with an AI system that issued safety warnings such as "please hold the steering wheel" and "please pay attention to obstacles ahead." Despite these AI-generated alerts, the vehicle crashed at high speed into a concrete barrier, causing three deaths. The AI system's involvement in the accident is direct: it was designed to prevent such incidents but failed to do so, leading to injury and death. The ongoing investigation and official statements confirm the AI system's role in the incident. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's malfunction or failure to prevent harm.

Xiaomi responds to six major questions about the crash that killed three, actively assisting the investigation and aftermath

2025-04-01
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent driving assistance) actively engaged during a fatal car crash causing three deaths, which is a direct harm to human health. The system's operation, including obstacle detection and deceleration, is described, indicating AI involvement in the incident. The harm has materialized, and the AI system's role is pivotal in the chain of events leading to the accident. Hence, this is classified as an AI Incident.

Xiaomi releases SU7 crash data; victims' families: reacting within two seconds is unrealistic

2025-04-01
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's NOA intelligent assistance system is an AI system involved in the vehicle's operation. The accident and resulting fatalities constitute harm to persons. The AI system's warnings and control handover timing are critical factors in the incident. Therefore, this qualifies as an AI Incident because the AI system's use and possible malfunction or limitations directly contributed to the harm.

The Xiaomi SU7 tragedy: three mysteries yet to be unraveled, only two seconds of reaction time in NOA mode

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent assisted driving) in use at the time of a fatal accident. The AI system detected obstacles and alerted the driver but could not prevent the collision. The harm (three deaths) is direct and significant. The incident also highlights potential AI system limitations or malfunctions (short reaction time, inability to avoid collision) and related safety system failures (door lock). These factors meet the criteria for an AI Incident, as the AI system's use and malfunction directly led to injury and death. The event is not merely a hazard or complementary information but a realized harm involving AI.

Could the Xiaomi SU7 accident have been avoided? Is a two-second reaction time enough?

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (NOA intelligent assisted driving) that was in use and issued warnings and deceleration commands before the crash. Despite these, the accident occurred, leading to fatalities. This constitutes direct harm to persons caused by the development and use of an AI system. Therefore, this event qualifies as an AI Incident under the OECD framework because the AI system's malfunction or limitations contributed to the fatal accident.

Families of the Xiaomi crash victims: not a single condolence call from Xiaomi; families question the official data

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent driving assistance) that was active during the accident. The system issued warnings and began deceleration, but the driver took over control moments before the collision. The accident caused three deaths, which is a direct harm to persons. The families' doubts about the system's performance and the company's response highlight the AI system's role in the incident. Hence, the event meets the criteria for an AI Incident as the AI system's use and possible malfunction or limitations directly led to fatal harm.

Scholar: Xiaomi's investment in assisted driving falls far short of Huawei's!

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Xiaomi's intelligent driving system) whose malfunction and insufficient capability directly contributed to a fatal accident causing harm to people (three deaths). The AI system's failure to detect obstacles in time and to autonomously handle the emergency situation constitutes a direct cause of harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction has directly led to injury and death.

China News Service poses five questions about the Xiaomi SU7 fire; accident draws widespread attention

2025-04-01
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating in NOA intelligent assisted driving mode, an AI system that controls vehicle navigation and speed. The accident occurred while the AI system was active, and despite obstacle detection and warnings, the vehicle collided with a barrier and exploded, causing three fatalities. The AI system's involvement in the vehicle's control and the accident's outcome directly links it to harm to persons, fulfilling the criteria for an AI Incident. The detailed description of the AI system's operation and the fatal consequences confirm this classification.

SU7 fire: owner questions the information Xiaomi released; families seek further explanation

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The NOA intelligent driving assistance system is an AI system providing autonomous or semi-autonomous driving functions. The accident involved the AI system issuing risk warnings and then the driver taking over control shortly before the collision. The fatalities and injuries are direct harms to persons caused in the context of the AI system's use. The family's concerns about the AI system's response time and the vehicle's locking mechanism further highlight the AI system's role in the incident. Hence, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use and malfunction.

Xiaomi SU7 assisted driving leaves three dead; mother says she repeatedly urged her daughter not to blindly trust immature technology

2025-04-01
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the intelligent driving assistance (smart driving) feature of the Xiaomi SU7. The use of this AI system directly led to a fatal accident causing the deaths of three people. The driver engaged the AI system and only took control seconds before the crash, indicating the AI system's failure or limitations contributed to the harm. This meets the criteria for an AI Incident as the AI system's use directly caused injury and death (harm to persons).

Five questions about the Xiaomi SU7 highway fire: safety doubts await resolution

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating in NOA intelligent assisted driving mode, which is an AI system providing autonomous driving assistance. The accident involved the AI system's use, and the fatal collision caused injury and death to people, fulfilling the harm criteria. The AI system's detection and warnings were part of the event, and the driver took over control before the crash, indicating the AI system's role in the chain of events. Hence, this is an AI Incident due to direct harm to persons caused during AI system use.

Blogger says even the best assisted driving is no match for a novice driver; how should companies shape users' safety awareness?

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction (or limitations) of an AI system in intelligent driving (autonomous or assisted driving). The Xiaomi SU7 accident is a real incident where the AI system's limitations contributed to harm (a car crash). The article critiques the misleading marketing that may indirectly lead to harm by causing users to overtrust the AI system, which is a factor in the incident. Therefore, this qualifies as an AI Incident because the AI system's use and its limitations have directly or indirectly led to harm (car accident), and the discussion centers on the consequences and safety awareness related to this harm.

How to view the autonomous-driving risks facing EV newcomers; experts say automakers must effectively train consumers

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article implicitly involves AI systems through its discussion of autonomous driving technology, which relies on AI for navigation and decision-making. The vehicle catching fire after a collision points to a safety risk linked to the use or malfunction of AI in autonomous driving. Although the article does not detail direct harm caused by an AI malfunction, its mention of risks and of the incident implies potential or realized harm related to AI use in vehicles. Therefore, this qualifies as an AI Incident due to the direct or indirect harm associated with AI system use in autonomous driving and the accompanying discussion of risk management.

Xiaomi SU7 accident tests the limits of vision-only assisted driving; the pure-vision approach meets its challenge

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—the pure vision autonomous driving system in the Xiaomi SU7 vehicle. The accident was caused by the AI system's inability to timely detect and respond to obstacles in a low-light, complex environment, leading to a collision. This constitutes direct harm to property and potential injury risk, fulfilling the criteria for an AI Incident. The detailed description of the AI system's failure and the resulting accident confirms the AI system's role in causing harm. Therefore, this is classified as an AI Incident.

Three university students killed in Xiaomi SU7 accident; late-night tragedy sounds a safety alarm

2025-04-03
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the suspicion that the assisted driving system was active during the accident, which is an AI system. The accident resulted in the deaths of three individuals, constituting harm to persons. Although the exact cause is under investigation, the AI system's use and potential malfunction or failure to act are directly linked to the incident. Therefore, this qualifies as an AI Incident due to the direct or indirect role of the AI system in causing harm.

Victims' families respond to Lei Jun: hoping for a more detailed account and demanding more facts be released

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Xiaomi's intelligent driving system) whose use in the vehicle is directly linked to a fatal accident causing loss of life. The accident and the subsequent investigation indicate that the AI system's malfunction or failure may have contributed to the harm. Since the harm (deaths) has already occurred and the AI system's involvement is central to the incident, this qualifies as an AI Incident under the framework.

Mother of Xiaomi SU7 crash victim clears related posts; family awaits investigation results

2025-04-03
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating under an AI-based intelligent assisted driving system (NOA) at the time of the accident. The system's detection and response to obstacles, as well as the transition to manual control, are central to the event. The collision and resulting deaths constitute harm to persons, directly linked to the AI system's use and possible malfunction or limitations. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system in a fatal accident.
Did the Xiaomi crash leave the doors locked shut? Industry insiders say two scenarios could cause the locks to fail

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the NOA intelligent assistance system issuing risk warnings and the vehicle's electronic door locking system possibly failing due to damage or power loss, which prevented escape. The AI system's involvement in the accident is indirect but significant, as it relates to the vehicle's safety features and their failure or limitations in an emergency. The harm (three fatalities) has occurred, meeting the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm linked to AI system use and malfunction.
Xiaomi EV fire exposes the intelligent-driving dilemma; smart-driving marketing draws scrutiny

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The vehicle was operating under an AI-based intelligent driving assistance system at the time of the accident. The system detected obstacles and issued warnings, but the collision still occurred shortly after the driver took control, resulting in fatalities and a fire. The AI system's failure to prevent the collision or provide sufficient time for safe intervention directly led to harm (deaths and property damage). This meets the criteria for an AI Incident as the AI system's use and malfunction directly caused injury and harm to persons.
Experts say the key is to investigate the five seconds before the SU7's collision, focusing on the cause of the accident in a nighttime expressway construction zone

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's NOA intelligent assisted driving system qualifies as an AI system because it performs real-time driving assistance, including obstacle detection, speed adjustment, and issuing warnings. The accident resulted in fatalities, which is a direct harm to persons. The AI system was active and its performance or malfunction is central to understanding the cause of the accident. Therefore, this event meets the criteria for an AI Incident, as the AI system's use directly led to harm (fatal injuries).
Victims' families: if the Xiaomi SU7's technology is immature, why sell it at all?

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent driving assistance) that was active and issued warnings before the collision. The accident caused direct harm (three deaths), and the AI system's performance and limitations are central to the incident. The family's concerns about the AI technology's immaturity and the company's response further highlight the AI system's role in the harm. Hence, this is an AI Incident as the AI system's malfunction or insufficient performance directly led to injury and death.
Victims' families call Lei Jun's statement "hypocritical"; its evasiveness draws criticism

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving) that was in use during a fatal accident causing multiple deaths, which constitutes direct harm to persons. The family's concerns about the AI system's warnings, the transition from AI to human control, and the failure of safety features indicate the AI system's role in the incident. Therefore, this qualifies as an AI Incident because the AI system's use and possible malfunction directly led to harm (fatalities).
Could the doors be opened after the crash? Xiaomi officially responds to six questions and addresses public concerns

2025-04-02
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Xiaomi's NOA intelligent assisted driving system, an AI system, which was active during the accident. The system detected obstacles, issued warnings, and began deceleration, but the driver took over before the collision. The accident caused physical harm (vehicle damage and fire) and potential injury to occupants. The AI system's operation and its interaction with the driver are directly linked to the incident. Hence, this event meets the criteria for an AI Incident due to the AI system's involvement in causing harm.
Lei Jun responds for the first time to the Xiaomi SU7 expressway fire, promising not to evade responsibility and to keep cooperating with the investigation; victims' families comment, "We hope he keeps his promise"

2025-04-02
finance.china.com
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating with an AI-based intelligent driving assistance system (NOA mode) at the time of the accident. The system detected obstacles and attempted to reduce speed, but the driver took over and the vehicle collided with a barrier, causing fatalities. The AI system's involvement in the vehicle's operation and the accident is explicit, and the harm (death of passengers) is direct. Therefore, this qualifies as an AI Incident due to the AI system's use and its role in the chain of events leading to fatal harm.
Chinese EV suddenly self-ignites and starts on its own, charging toward a crowd. Another accident: three female university students killed

2025-04-03
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Xiaomi's NOA intelligent driving system) in the vehicle involved in the fatal accident. The AI system was active and failed to prevent the collision, and the vehicle's doors locked automatically, preventing escape, which directly contributed to the deaths of three people. This meets the definition of an AI Incident because the AI system's malfunction and use directly led to injury and death (harm to persons). The article also discusses other EV fire incidents and safety concerns related to AI-driven autonomous features, reinforcing the presence of AI-related harm. Therefore, the event is classified as an AI Incident.
Solidot | Xiaomi SU7 involved in fatal crash linked to intelligent driving

2025-04-02
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating in an AI-assisted driving mode (NOA intelligent assisted driving) at the time of the fatal crash. The AI system detected obstacles and issued warnings, but the accident still happened, leading to three deaths. This is a direct harm to human life linked to the use of an AI system in a real-world scenario. Therefore, this qualifies as an AI Incident due to injury and harm to persons caused directly or indirectly by the AI system's use and possible malfunction or limitations.
Xiaomi discloses details of the SU7 expressway fire but leaves out core questions such as the locked doors. Victims' families speak out: "Our family has collapsed." Lawyers on apportioning liability: Xiaomi, the driver, and the construction crew could all bear responsibility

2025-04-01
和讯网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating under an AI-assisted driving system (NOA) at the time of the accident, which is an AI system as it provides intelligent driving assistance. The accident caused direct harm (three deaths) and involved the AI system's use and possible malfunction or limitations (e.g., system warnings, transition to manual control, door locking after collision). The AI system's role is pivotal in the chain of events leading to harm, including the timing of driver takeover and system alerts. Therefore, this event meets the criteria for an AI Incident due to direct harm caused and the AI system's involvement in the incident.
Five unresolved questions in the Xiaomi SU7 accident. Experts weigh in: physical buttons are worth keeping, and the AEB's failure to trigger was not necessarily due to the short distance

2025-04-02
和讯网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle involved in the fatal accident was operating under an AI-based intelligent driving assistance system (NOA) at the time of the crash. The system issued risk warnings and requested driver intervention, but the automatic emergency braking (AEB) did not activate to prevent the collision. Experts cited limitations of the AI system's pure vision sensing, especially at high speed and night conditions, as well as insufficient reaction time for the driver after the AI warning. The AI system's failure to intervene actively and the reliance on driver takeover directly contributed to the accident and fatalities. This meets the definition of an AI Incident, as the AI system's use and malfunction directly led to injury and death. The article also discusses systemic safety issues and regulatory standards, but the core event is the fatal accident linked to the AI system's performance.
Xiaomi discloses details of the SU7 expressway collision and fire; UNISOC completes its shareholding reform

2025-04-02
和讯网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating under an AI-based intelligent driving assistance system at the time of the accident. The system detected obstacles and attempted to slow the vehicle, but the driver had to intervene. Despite these measures, a collision occurred causing fatalities and vehicle explosion. This clearly meets the definition of an AI Incident because the AI system's use and limitations directly contributed to injury and death. The other news items do not describe any AI-related harm or plausible harm and are thus unrelated.
When can AEB not be used? We combed through every automaker's manual

2025-04-03
爱范儿
Why's our monitor labelling this an incident or hazard?
The content focuses on explaining the operational scope, limitations, and conditions under which AI-based vehicle safety systems function, without describing any actual harm or accident caused by these systems. It does not report a specific event where the AI system's development, use, or malfunction led to injury, property damage, rights violations, or other harms. Nor does it describe a credible risk of future harm from these systems beyond general cautionary advice. Therefore, the article is best classified as Complementary Information, as it provides context and understanding about AI systems in vehicles and their safety implications, rather than reporting a new AI Incident or AI Hazard.
Xiaomi SU7 catches fire after expressway accident, killing three; official response

2025-04-03
爱范儿
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating with its AI-based NOA navigation assistance system engaged, which is an AI system as it performs autonomous driving assistance functions such as speed control, obstacle detection, and driver alerts. The accident and subsequent fire caused the deaths of three people, constituting injury or harm to persons. The AI system's involvement in the event is direct, as it was controlling the vehicle and issuing warnings prior to the crash. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the fatal harm caused.
Xiaomi's market value shrinks by more than HK$120 billion in two days; shares fall over 25% from their all-time high

2025-04-03
太平洋汽车网
Why's our monitor labelling this an incident or hazard?
The article explicitly links the accident to Xiaomi's SU7 vehicle, which is described in the context of intelligent driving technology, including references to AEB and sensor technologies, indicating the presence of AI systems. The crash resulted in the death of three people, constituting injury or harm to persons. The AI system's malfunction or failure to prevent the accident is a direct factor in the harm. Hence, this is an AI Incident as per the definition, since the AI system's use or malfunction directly led to harm to persons.
Travelling at 112 km/h, impact at 97: fully reconstructing the Xiaomi crash

2025-04-02
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent driving assistance) whose use and limitations directly contributed to a fatal car crash, causing injury and death (harm to persons). The AI system's malfunction or insufficient capability to handle unexpected obstacles and road conditions, combined with driver panic, led to the incident. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's involvement in vehicle operation and safety.
Three young women are dead; can we still "trust Mr. Lei"?

2025-04-03
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the Xiaomi SU7's intelligent driving system operating in NOA mode) whose malfunction or limitations directly contributed to a fatal accident causing harm to people (three deaths). The AI system's failure to timely detect and respond to road obstacles and the vehicle's safety design issues led to the incident. The article discusses the AI system's development, use, and malfunction aspects, and the harm is clearly realized. Therefore, this is an AI Incident rather than a hazard or complementary information.
After the Xiaomi crash, intelligent driving returns to rationality

2025-04-03
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the intelligent driving assistance system (NOA) with hardware and software components (NVIDIA DRIVE Orin chip, radar, cameras, VLM software). The AI system's malfunction or limitation in recognizing and responding to a complex, rare road scenario (a detour with a concrete barrier) directly contributed to the fatal accident. The harm (loss of life) has occurred and is directly linked to the AI system's failure to adequately handle the situation, despite warnings and partial intervention. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and malfunction directly led to injury and death (harm to persons).
Many questions remain unanswered in the fatal Xiaomi SU7 expressway fire; overblown smart-driving marketing may have been misleading

2025-04-01
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as Xiaomi's intelligent driving assistance (Xiaomi Pilot Pro) operating in assisted driving mode. The system detected obstacles and issued warnings but failed to prevent the collision and subsequent fatal fire. The harm (three deaths) directly resulted from the AI system's use and its failure to adequately prevent the accident. The article details the AI system's involvement in the accident and the resulting fatalities, meeting the criteria for an AI Incident due to direct harm to persons caused by the AI system's malfunction or limitations during use.
With its market value down 80 billion and doubts lingering over "AEB intervention", the Xiaomi SU7 faces its "darkest hour"

2025-04-01
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA and AEB driver assistance features) whose use and possible malfunction or limitations contributed to a serious traffic accident causing harm. The AI system was active and issuing warnings but did not prevent the collision, and there is expert analysis about the system's detection and decision-making capabilities. The harm includes physical damage and potential injury, as well as significant economic and reputational harm to the company. This meets the criteria for an AI Incident because the AI system's development, use, or malfunction directly or indirectly led to harm.
Xiaomi SU7 collides and burns on the expressway, killing three; four major questions remain

2025-04-01
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating with AI-based driving assistance (NOA and AEB) at the time of the crash. The AI system's detection and response to the obstacle, as well as the timing and effectiveness of braking, are central to the incident. The accident caused three fatalities, which is a direct harm to persons. The article discusses the AI system's role in the event, including whether it functioned adequately and the timing of warnings and braking. These factors indicate that the AI system's use and possible malfunction or limitations contributed to the harm. Therefore, this event meets the criteria for an AI Incident, as the AI system's use directly or indirectly led to injury and death.
Four questions about the Xiaomi SU7 accident reconstructed. Victims' families reveal: passers-by smashed a window to pull out a rear-seat passenger

2025-04-02
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (NOA intelligent driving assistance) during the accident, with detailed data on system warnings and driver interventions. The crash and resulting fatalities constitute direct harm to persons, fulfilling the criteria for an AI Incident. The AI system's malfunction or limitations (e.g., late risk warning, non-activation of AEB) are pivotal factors in the incident. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
After the Xiaomi SU7's expressway collision and fire, an owner says an assisted-driving accident happened on the same stretch of road; local authorities respond

2025-04-02
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the NOA intelligent driving assistance) whose use is linked to a fatal accident and a prior accident on the same road segment. The AI system's failure to detect or warn about the construction zone and altered traffic conditions contributed to the collisions. The harm includes death and injury to vehicle occupants, fulfilling the criteria for an AI Incident. The involvement is through the use and possible malfunction or inadequacy of the AI system in a critical driving context, leading directly or indirectly to harm. The report also discusses insufficient road signage and navigation alerts, which combined with the AI system's limitations, caused the incidents. Hence, this is not merely a hazard or complementary information but a realized AI Incident.
Five unresolved questions in the Xiaomi SU7 accident. Experts offer in-depth analysis

2025-04-02
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating in an AI-assisted driving mode (NOA system) at the time of the accident. The article details how the AI system issued risk warnings but did not trigger emergency braking, and the driver had very limited time to react before collision and fire occurred. The AI system's failure to intervene actively and the limitations of the pure vision-based sensing system are highlighted as contributing factors. The accident caused three deaths, which is a clear harm to persons. The AI system's development, use, and malfunction are central to the incident, fulfilling the criteria for an AI Incident under the OECD framework.
From obstacle detection to collision, only 2 to 4 seconds in total. What happened in those few seconds of the fatal Xiaomi EV fire?

2025-04-01
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The vehicle was operating under an AI system (NOA intelligent assisted driving) that detected obstacles, issued alerts, and initiated deceleration. The driver took over control shortly before the collision, but the AI system's outputs and actions were part of the chain of events leading to the fatal crash. The harm (three deaths) is direct and significant. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction (or limitations) directly contributed to the harm.
Special report | Lei Jun responds: "Whatever happens, Xiaomi will not evade it." Xiaomi EV issues an official response to the "SU7 expressway collision and fire"

2025-04-01
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as NOA intelligent assisted driving, which was active during the accident. The AI system detected obstacles and attempted to slow down but ultimately failed to prevent the collision. The accident caused physical harm or injury, fulfilling the criteria for an AI Incident. The AI system's malfunction or limitations in this real-world use case directly contributed to the harm. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Breaking: Lei Jun responds to the Anhui SU7 accident, and Xiaomi EV issues an official response as well

2025-04-01
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent assisted driving) that was active during the accident. The AI system's operation and its interaction with the driver are described, and the accident caused fatalities, which is a direct harm to persons. The AI system's role in the accident is central, as it was controlling the vehicle and responding to obstacles before the collision. Therefore, this qualifies as an AI Incident due to direct harm caused during the use of an AI system in a vehicle.
Three female students locked inside and burned beyond recognition; Xiaomi's data shows they had only two seconds to react

2025-04-02
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent driving assist) actively engaged in vehicle control and risk warning. The accident resulted in the deaths of three people due to the collision and fire, with the AI system's limited reaction time and transition to human control being a critical factor. The harm (fatal injuries) is directly linked to the AI system's use and its inability to prevent the accident or provide sufficient time for human intervention. Therefore, this is an AI Incident as per the definitions, involving injury or harm to persons caused directly or indirectly by the AI system's use and malfunction.
[Ziniu Headlines] Xiaomi SU7 collides and burns on the expressway, killing three female university students. Families ask: why did it catch fire? Why couldn't the doors be opened?

2025-04-01
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating under an AI-based intelligent driving assistance system (NOA) at the time of the accident, indicating AI system involvement. The accident led to the deaths of three people, constituting injury or harm to persons. The family's concerns about the vehicle's self-ignition and inability to open doors suggest possible AI system malfunction or design flaws contributing to the harm. Therefore, this event meets the criteria for an AI Incident, as the AI system's use and potential malfunction directly led to significant harm (fatalities).
Victims' families: Lei Jun's statement is "hypocritical"; they vow to seek justice for their daughter

2025-04-02
xkb.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle involved in the accident is equipped with AI-based intelligent driving systems such as forward collision warning and emergency braking (AEB). The accident caused the death of three occupants, which is a direct harm to human life. The discussion about potential malfunction or failure of the AI driving system to detect obstacles or respond appropriately, as well as the vehicle's door locking and fire, indicates the AI system's involvement in the incident. Therefore, this qualifies as an AI Incident because the AI system's use and possible malfunction have directly led to injury and death. The event is not merely a hazard or complementary information, but a realized harm event involving AI.
Xiaomi discloses details of the SU7 expressway collision and fire; owners, families, and experts speak out

2025-04-01
xkb.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOA intelligent assisted driving system active during the accident. The system's operation and possible limitations contributed to the collision and fatal fire, causing injury and death. The harm is direct and significant (three fatalities). The involvement of the AI system in the vehicle's control and the subsequent accident meets the criteria for an AI Incident. The article also discusses systemic issues with assisted driving technology and battery safety, but the primary classification is based on the realized harm caused by the AI system's use and malfunction in this incident.
Xiaomi SU7 expressway collision and fire kills three; devastated families ask: why didn't anyone call us?

2025-04-01
xkb.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 is a modern vehicle likely equipped with AI-based systems for collision detection and battery management. The article states that a collision alert was sent to a bound phone, indicating AI or automated system involvement. The fatal harm (three deaths) resulted from the collision and subsequent fire, with family members alleging that the vehicle's safety design defects (such as door locking and battery explosion) contributed to the inability to escape. This suggests a malfunction or failure in AI-related safety systems. Since the AI system's malfunction or design is directly linked to the harm, this event meets the criteria for an AI Incident.
Xiaomi shares rebound from the sell-off | Lianhe Zaobao

2025-04-02
早报
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a fatal accident involving a Xiaomi electric vehicle equipped with intelligent driving software, which is an AI system. The accident caused three deaths, constituting injury or harm to persons, fulfilling the harm criteria for an AI Incident. The police investigation and market reaction further confirm the seriousness of the incident. The AI system's involvement is direct as it is part of the vehicle's intelligent driving capabilities, which are under scrutiny for potential faults contributing to the accident. Hence, this event is classified as an AI Incident.
Xiaomi EV hits a barrier and burns in Anhui, killing three; shares fall more than 5% | Lianhe Zaobao

2025-04-01
早报
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent assisted driving) in use at the time of the accident. The AI system's operation and interaction with the driver are central to the incident. The collision and subsequent fire caused three fatalities, which constitutes injury or harm to persons. Therefore, this qualifies as an AI Incident because the development, use, or malfunction of the AI system directly or indirectly led to harm (fatalities).
Lei Jun responds to the fatal Xiaomi EV accident: "Whatever happens, we will not evade it" | Lianhe Zaobao

2025-04-02
早报
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA intelligent driving assistance system active during the accident. The accident caused fatal injuries to three people, fulfilling the harm criteria for an AI Incident. The AI system's malfunction or failure to prevent the collision is a direct factor in the harm. The event is not merely a potential risk but a realized harm, so it is not an AI Hazard. It is not complementary information or unrelated, as the core of the article is the fatal accident linked to the AI system's operation.
Boyfriend and mother delete related Weibo posts: the latest developments in the Xiaomi SU7 accident

2025-04-03
杭州网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 is an AI system (an AI-enabled vehicle) involved in a fatal accident causing three deaths, which is a direct harm to persons. The article discusses the use and malfunction aspects (collision and fire, door locking issues) of the AI system. The harm is realized, not just potential. Hence, this is an AI Incident rather than a hazard or complementary information. The article does not focus on responses or updates alone but reports on the incident and its consequences.
Three young women die in the Xiaomi SU7 crash and fire, leaving three questions: with the takeover alert issued only two seconds before impact, can intelligent driving be trusted? Why do EVs burn so quickly after a collision? Why wouldn't the doors open?

2025-04-01
杭州网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating in an AI-assisted driving mode (NOA intelligent assisted driving), which qualifies as an AI system. The accident occurred when the AI system detected an obstacle and issued a takeover alert only two seconds before collision, which was insufficient to prevent the crash. The vehicle then collided with a barrier and exploded, causing fatal injuries to the occupants. This is a direct harm to human life caused by the malfunction or failure of the AI system to adequately manage the driving situation. Therefore, this event meets the criteria for an AI Incident due to direct harm to persons resulting from the AI system's use and malfunction.
The Xiaomi SU7 crash tragedy leaves three mysteries to unravel. Three young women died, and the takeover alert came only two seconds before the accident. Can intelligent driving be trusted? Why do EVs burn so quickly after a collision? Why wouldn't the doors open?

2025-04-02
杭州网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating in an AI-assisted driving mode (NOA intelligent assisted driving) at the time of the accident. The AI system detected an obstacle and issued a takeover alert shortly before the crash, but the collision and fatalities occurred nonetheless. This indicates the AI system's involvement in the event's causation, either through its performance or the timing of alerts, which directly led to harm (death of three individuals). Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the fatal harm caused.
Outrageous: only two seconds passed between the intelligent-driving takeover alert and the crash, and three female final-year students burned to death. Xiaomi discloses accident details

2025-04-01
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NOA intelligent assisted driving system, which was active and issuing warnings before the crash. The driver took over control shortly before the collision, but the AI system's operation and the circumstances of the accident directly contributed to the fatal harm. The harm is realized (three deaths), and the AI system's role is pivotal in the chain of events leading to the incident. Therefore, this qualifies as an AI Incident under the framework definitions.
Mother of Xiaomi SU7 crash victim deletes blog posts; family meets with Xiaomi staff

2025-04-03
中国经济网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating in an AI-assisted driving mode (NOA intelligent assisted driving) at the time of the accident. The AI system detected obstacles and attempted to slow down the vehicle, but the vehicle still collided with a concrete barrier, leading to fatal injuries and a subsequent fire caused by the collision. The AI system's involvement in the vehicle's control and the accident's circumstances indicate that the AI system's use and possible malfunction contributed to the harm. Therefore, this qualifies as an AI Incident due to direct harm to persons caused by the AI system's operation and failure to prevent the accident.
Intelligent driving assistance in EVs urgently needs strict speed limits

2025-04-03
红网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically the NOA intelligent assisted driving system, which was active during the accident. The AI system's use and its limitations in emergency response contributed indirectly to the fatal crash. The article describes realized harm (three deaths) linked to the AI system's operation and its failure to prevent or mitigate the accident effectively. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system in causing harm to persons.
Why didn't Xiaomi contact the families? Could the doors be opened? Xiaomi EV responds to six major questions

2025-04-02
华龙网
Why's our monitor labelling this an incident or hazard?
The article discusses the use of an AI system (NOA intelligent driving assistance) involved in a traffic accident that resulted in harm (collision and fire). However, the article itself is a company statement addressing questions and clarifying facts rather than reporting a new incident or hazard. The harm has already occurred, and the AI system's role is described, but the article's main focus is on responding to public concerns and providing information about the investigation and system behavior. Therefore, this is Complementary Information, as it updates and contextualizes a known AI Incident rather than reporting a new one or a potential hazard.
Breaking: Lei Jun speaks out

2025-04-01
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving) actively used at the time of a fatal accident causing loss of life, which is a direct harm to persons. The AI system's malfunction or limitations in handling the road conditions (construction, lane closure, obstacle detection, and vehicle control) contributed to the crash. Xiaomi's response and cooperation with authorities are complementary but do not negate the fact that the AI system's use led to a serious harm. Hence, this is classified as an AI Incident.
Breaking: Lei Jun speaks out

2025-04-01
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving) actively used during the accident, which directly led to fatal injuries. The article describes the AI system's role in the vehicle's behavior before and during the crash, the company's response, and ongoing investigation. Since the AI system's use has directly resulted in harm (deaths), this qualifies as an AI Incident under the framework.
Xiaomi EV fire kills three; questions remain over the short collision warning time, the locked doors, and AEB triggering

2025-04-01
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent driving assistance) whose use directly preceded and contributed to a fatal accident causing three deaths, fulfilling the criteria for an AI Incident. The AI system's warnings and control transitions were part of the event timeline, and the system's limitations or failures (e.g., insufficient reaction time, possible failure to trigger AEB) are implicated in the harm. The harm is realized (fatalities), and the AI system's role is pivotal. Therefore, this is not merely a hazard or complementary information but an AI Incident.

Five questions about the Xiaomi SU7 "fire accident"

2025-04-01
China News
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (NOA intelligent driving assistance) actively engaged during the accident. The AI system detected obstacles, issued warnings, and requested deceleration, but the vehicle still collided with a barrier and caught fire, causing fatalities. The AI system's role in the accident is direct, as it was controlling or assisting vehicle operation at the time. The harm is realized (three deaths), and the event includes potential system malfunction or design issues (e.g., door lock failure). Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and possible malfunction directly led to harm to persons.

Lei Jun responds to the SU7 highway collision and fire that killed 3: whatever happens, Xiaomi will not evade it

2025-04-01
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving) that was in use at the time of a fatal accident causing three deaths. The AI system's operation and interaction with the driver are central to the incident. The harm (fatal injuries) has occurred and is directly linked to the AI system's use and the accident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to injury and death.

Xiaomi: at present we can only confirm the crashed car did not "spontaneously combust" as some online rumors claimed

2025-04-01
证券之星
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating with an AI-based NOA intelligent assisted driving system at the time of the accident. The system detected an obstacle and initiated deceleration, but the driver had to take over to avoid the collision. The collision and subsequent fire caused harm to the vehicle and potentially to persons involved. The AI system's role in the event is direct, as it was active and influencing vehicle behavior, but it did not prevent the accident. This fits the definition of an AI Incident because the AI system's use directly led to harm (collision and fire). The event is not merely a hazard or complementary information, as harm has occurred and the AI system was involved in the chain of events.

Families of Xiaomi SU7 crash victims on progress: they have met with Xiaomi staff; the accident remains under investigation

2025-04-02
证券之星
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was using an AI-based intelligent driving assistance system (NOA mode) at the time of the accident. The system detected obstacles and attempted to reduce speed, but the vehicle still collided with a concrete barrier, resulting in fatalities. This shows the AI system's malfunction or limitations contributed directly or indirectly to the harm (loss of life). Therefore, this qualifies as an AI Incident due to injury and harm to persons caused in connection with the AI system's use.

Lei Jun responds to the Xiaomi SU7 highway accident: we will not evade it

2025-04-01
证券之星
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 involves an AI system, as it is an electric vehicle likely equipped with AI-based driving assistance or autonomous features. The accident caused direct harm (death) to three people, fulfilling the harm criteria for an AI Incident. Although the investigation is ongoing and the exact cause is not yet public, the event involves the use of an AI system and has resulted in realized harm. Therefore, it meets the definition of an AI Incident rather than a hazard or complementary information. The event is not unrelated because it involves an AI system and a serious harm event.

2025-04-02
证券之星
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Xiaomi SU7's NOA intelligent driving assistance) whose use directly led to a fatal car crash and subsequent fire causing the deaths of three individuals. The AI system was active and issued warnings before the collision, but the short reaction time and system performance raise questions about its safety and reliability. The incident also highlights concerns about the battery safety technology, which failed to prevent a post-collision fire. These factors constitute direct harm to persons caused by the development and use of an AI system, fitting the definition of an AI Incident under harm category (a) injury or harm to the health of persons.

Fatal Xiaomi SU7 accident: three major questions unresolved

2025-04-01
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOA intelligent assisted driving system active during the accident. The system detected an obstacle and issued warnings but had only 2 seconds for the driver to react, which is below recommended standards. The vehicle collided and caught fire, causing fatalities. The AI system's failure to provide sufficient reaction time and possibly inadequate sensing capabilities (lack of lidar on the standard model) directly contributed to the harm. Additionally, the electronic door lock system, dependent on vehicle power, failed to open, further exacerbating harm. These factors show direct involvement of AI system use and malfunction leading to injury and death, meeting the criteria for an AI Incident.

Just as "intelligent driving advancing to autonomous driving" became industry consensus, the Xiaomi SU7 accident sounded a sharp real-world alarm: the gulf between technological vision and everyday use is plain to see.

2025-04-01
证券之星
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (intelligent assisted driving system) that was active during the accident. The system detected obstacles, issued warnings, and attempted to decelerate, but the driver took over too late to prevent the collision. The accident caused fatalities, which is a direct harm to human health. The AI system's role in the accident is pivotal as it was responsible for obstacle detection and warnings, and its limitations or malfunction contributed to the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Mother of Xiaomi SU7 crash victim deletes blog posts; family meets with Xiaomi staff

2025-04-03
金羊网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating in NOA intelligent driving assistance mode, which is an AI system. The accident resulted in three fatalities, which is harm to persons. The AI system was involved in the vehicle's operation and its failure or limitations contributed to the crash and subsequent fire. The event is a direct example of harm caused by the use and malfunction of an AI system. Hence, it meets the criteria for an AI Incident.

Xiaomi SU7 highway collision and fire kills 3; four major questions await answers

2025-04-01
金羊网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (NOA and AEB) in the vehicle's operation and discusses its role in the accident. The AI system's failure or insufficient performance in detecting and responding to a static obstacle on a highway at night directly contributed to the collision and subsequent fatalities. The event meets the criteria for an AI Incident because the AI system's use and malfunction have directly led to injury and death (harm to persons). The detailed discussion of AI system performance, braking response, and safety features confirms the AI system's pivotal role in the harm caused.

Could the Xiaomi SU7 accident have been avoided? The right way to handle an obstacle at highway speed

2025-04-02
金羊网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating under an AI-assisted driving system (NOA) that detected an obstacle and issued warnings but failed to avoid collision, leading to fatalities. The AI system's delayed recognition and inability to fully avoid the obstacle at highway speeds directly contributed to the harm. This fits the definition of an AI Incident because the AI system's use and malfunction directly led to injury and death (harm to persons). The article provides detailed information about the AI system's role, the accident timeline, and expert analysis of system limitations, confirming the AI system's pivotal role in the incident.

Xiaomi officially responds to questions about the Anhui accident! On AEB triggering: AEB currently does not respond to obstacles such as traffic cones and water-filled barriers

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article describes a fatal accident involving a Xiaomi vehicle operating in an AI-assisted driving mode (NOA) and using an AI-based AEB system. The AEB system's inability to respond to certain obstacles (cones, barriers) is explicitly mentioned, which is relevant to the accident. The accident caused deaths, which is a direct harm to persons. Therefore, this qualifies as an AI Incident because the AI system's use and limitations directly contributed to the harm.

Latest: mother of Xiaomi SU7 fire victim deletes related blog posts

2025-04-03
金羊网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle uses an AI-based NOA intelligent assisted driving system, which was engaged at the time of the accident. The system detected obstacles, issued warnings, and attempted to reduce speed, but the vehicle still collided with a barrier and caught fire, causing three deaths. The AI system's involvement in the vehicle's operation and the resulting fatal harm to passengers meets the criteria for an AI Incident, as the AI system's use directly contributed to injury and loss of life. The detailed investigation and data logs confirm the AI system's role in the event.

Xiaomi reports details of the SU7 highway collision and fire; owners, families, and experts speak out

2025-04-01
金羊网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA intelligent driving assistance system used in the Xiaomi SU7 vehicle. The AI system was in use at the time of the accident, and its performance and limitations are directly linked to the collision and resulting fatalities, constituting harm to persons. The article details the AI system's detection of obstacles, driver takeover, and questions about whether emergency braking was applied, indicating the AI system's involvement in the incident. Therefore, this qualifies as an AI Incident because the AI system's use and possible malfunction directly led to injury and death.

Could the Xiaomi SU7 accident have been avoided? The right way to handle an obstacle at highway speed

2025-04-02
金羊网
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating under an AI-based intelligent driving assistance system (NOA) when it encountered a construction obstacle on the highway. The AI system detected the obstacle and issued warnings but was unable to fully avoid the collision due to inherent limitations of L2-level systems at high speeds. The accident resulted in three fatalities, which is a direct harm to persons. The AI system's role in the accident is pivotal as it was responsible for obstacle detection, issuing warnings, and partial control of the vehicle. The event fits the definition of an AI Incident because the AI system's use and limitations directly led to injury and death. The detailed analysis of the system's performance and the accident timeline supports this classification.

Behind the Xiaomi highway tragedy: full rollout of "map-free end-to-end" intelligent driving was completed just in February

2025-04-02
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the Xiaomi SU7's intelligent driving system, which was active at the time of the accident. The AI system's failure to detect road obstacles and properly respond contributed directly or indirectly to the fatal crash, constituting injury and harm to persons. The article details the malfunction and limitations of the AI system's perception and decision-making capabilities, linking the AI system's use and design to the harm caused. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to harm to people.

Xiaomi releases the crashed car's system data: what exactly happened in the critical 8 minutes?

2025-04-01
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Xiaomi's NOA advanced driver-assistance system) in the context of a serious traffic accident causing injury or death. The AI system was in use, detected obstacles, and issued warnings, but the driver took over only seconds before the collision. The system's inability to timely recognize the road hazard and the short warning time plausibly contributed to the accident. This fits the definition of an AI Incident because the AI system's use and possible malfunction directly led to harm to persons. The detailed timeline and discussion of system warnings and driver interaction confirm the AI system's pivotal role in the incident.

Morning Finance Watch | "Safety equality" matters more than "intelligent-driving equality"

2025-04-03
21jingji.com
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (the NOA autonomous driving feature) whose use directly preceded a vehicle accident. The AI system's detection of driver behavior and obstacles, and its interaction with the driver, are central to the event. The accident caused harm (implied physical harm or risk thereof), and the AI system's role is pivotal in the chain of events leading to the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm or risk of harm.

Nanfang Caijing quick comment: after the Xiaomi SU7 accident, how can smart new-energy vehicles move forward steadily?

2025-04-03
21jingji.com
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 accident involved an AI system (NOA and AEB) that directly or indirectly contributed to a fatal collision causing three deaths, fulfilling the criteria for an AI Incident. The article details how the AI system's sensor detection range, warning timing, and human-machine coordination issues plausibly led to the crash. The involvement of AI in the vehicle's autonomous assistance features and the resulting harm to human life clearly meet the definition of an AI Incident. The article also discusses systemic safety issues and calls for regulatory standards, but the primary event is a realized harm caused by AI system use, not just a potential hazard or complementary information.

Xiaomi responds to the Anhui traffic accident: continuing to cooperate with the investigation and facing public concerns directly

2025-04-02
m.21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving) that was active during a fatal traffic accident causing three deaths. The AI system's operation, including obstacle detection and deceleration, is described in detail, indicating its direct involvement. The harm (fatal injuries) has occurred, meeting the criteria for an AI Incident. Xiaomi's cooperation and data provision relate to the investigation of this AI-related harm. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and the resulting fatalities.

Xiaomi answers six major questions about the SU7 highway accident

2025-04-02
大洋网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (NOA intelligent driving assistance and AEB safety features) in the context of a real accident causing harm (collision and fire). The AI system was in use and its operation influenced the event's outcome. The incident involves harm to persons or property, fulfilling the criteria for an AI Incident. The detailed explanation of AI system behavior and accident circumstances confirms the AI system's involvement in the harm. Hence, this is not merely a hazard or complementary information but an AI Incident.

Five unresolved questions in the Xiaomi SU7 accident; experts offer in-depth analysis

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle involved in the accident was operating under an AI-based intelligent driving assistance system (NOA system, L2 level). The system issued risk warnings but did not trigger emergency braking, and the driver had only a very short time to react before the collision. The article details expert analysis on the AI system's limitations, including the pure vision-based approach's inability to detect obstacles timely at high speed and night, and the lack of active intervention by the AI system. The accident caused fatalities and severe harm, directly linked to the AI system's use and malfunction. Hence, the event meets the criteria for an AI Incident as the AI system's malfunction and use directly led to injury and harm to persons.

An early look at the devastating Xiaomi SU7 crash: a heavy blow to the intelligent driving industry? What should the auto industry reflect on?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 is equipped with an AI-based intelligent driving system (highway NOA), which is explicitly mentioned as being active and prompting the driver to take over seconds before the crash. The accident caused three deaths, which is a direct harm to persons. The AI system's malfunction or limitations contributed to the incident, as the driver was alerted but could not avoid the collision due to the road conditions and system design. This fits the definition of an AI Incident because the AI system's use and malfunction directly led to injury and death. The article also discusses the broader impact on the intelligent driving industry and safety concerns, but the primary event is a realized harm caused by an AI system's involvement.

How can intelligent driving hold the "safety red line"?

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOA intelligent driving assistance system operating the vehicle at the time of the accident. The system detected obstacles and issued warnings but ultimately the vehicle collided with a barrier, resulting in fatalities. The article discusses the system's capabilities and limitations, driver responsibilities, and the consequences of overreliance or misunderstanding of AI driving assistance. The AI system's involvement in the accident is direct and pivotal to the harm caused. Hence, this is an AI Incident as per the definitions provided.

Media commentary on the Xiaomi SU7 fire: should runaway smart cars be fitted with a legal seat belt?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 is described as a smart car, implying the presence of AI systems managing vehicle functions. The fatal explosion and the difficulty in emergency egress due to the car's design and AI-related features (e.g., hidden door handles requiring complex mechanical steps after power loss) directly led to loss of life, which is harm to persons. The article also references another AI-related driving incident causing a collision. These facts establish that the AI system's malfunction or design contributed directly or indirectly to the harm, meeting the criteria for an AI Incident.

Lei Jun responds to the SU7 fire accident, but has yet to address the key point

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (intelligent driving assistance/NOA) whose use and limitations contributed to a fatal crash causing multiple deaths, which is a direct harm to persons. The article explicitly links the AI system's failure to detect obstacles and the short warning time to the accident, as well as driver overreliance on the system. This meets the definition of an AI Incident because the AI system's use and malfunction directly or indirectly led to injury and death. The article also discusses systemic issues in AI driving system marketing and safety, reinforcing the incident classification rather than a mere hazard or complementary information. The presence of realized harm (fatalities) and AI involvement in the chain of causation confirms this classification.

When the halo of "greatest wealth growth" meets "72 hours of silence"

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as 'intelligent assisted driving' in the Xiaomi SU7 vehicle. The accident caused direct harm to human life (three fatalities), which fits the definition of an AI Incident. The family's allegations about safety defects related to the AI system's operation (e.g., door lock failure, battery fire possibly linked to the system) further support the classification as an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm linked to the AI system's use or malfunction.

Nanfang Caijing quick comment: after the Xiaomi SU7 accident, how can smart new-energy vehicles move forward steadily?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 accident involved an AI system (NOA and AEB) whose malfunction or limitations in sensing and warning contributed indirectly to a fatal collision, causing injury and death (harm to persons). The AI system's role in the accident is central, as the article focuses on the AI-enabled driving assistance's performance and its impact on the crash. Therefore, this event qualifies as an AI Incident because the AI system's use and possible malfunction directly or indirectly led to significant harm (fatalities).

Lei Jun's car-making legend takes a heavy blow as Xiaomi's auto business enters its darkest hour

2025-04-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (the intelligent driving system) controlling the vehicle at the time of the accident. The system failed to adequately handle a complex driving environment, gave a short 2-second takeover warning, and the emergency braking system did not activate to prevent or mitigate the crash. The resulting collision and fire caused fatal injuries to the occupants. This direct causal link between the AI system's malfunction/use and the harm to people qualifies the event as an AI Incident under the OECD framework. The detailed description of the AI system's role and the fatal outcome confirms this classification.

Behind the Xiaomi SU7 crash: the LiDAR that was removed | 新皮层

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the Xiaomi Pilot Pro autonomous driving system operating in pure vision mode without LiDAR. The system was active in Navigate on Autopilot mode before the accident and issued a risk warning shortly before the crash. The accident caused fatalities, which is a direct harm to health. The article details how the AI system's perception limitations (pure vision without LiDAR) plausibly contributed to the failure to detect or respond adequately to the road construction and lane change, leading to the crash. Therefore, this qualifies as an AI Incident because the AI system's use and limitations directly or indirectly led to harm (fatalities).

Can you escape disaster within two seconds? What should you do when you meet an obstacle at highway speed?

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (Level 2 intelligent driving system) that directly contributed to a fatal accident by failing to avoid a static obstacle at high speed. The AI system's delayed recognition and inability to fully avoid the obstacle led to harm (fatalities), fulfilling the criteria for an AI Incident. The article also references other similar incidents and research confirming the AI system's limitations and risks in such scenarios, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

Pressing questions after "Xiaomi SU7 fire kills three": the intelligent-driving wave surges ahead, and the technology may breed fatal illusions

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent driving assistance) whose use and limitations directly led to a fatal accident causing three deaths. The AI system's failure to detect certain obstacles and the human-machine coordination issues are central to the incident. The harm (fatal injuries) has occurred, and the AI system's role is pivotal in the chain of events leading to the accident. Hence, this is an AI Incident as per the definitions provided.

Xiaomi SU7 crash and fire kills 3: intelligent driving ≠ autonomous driving

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The vehicle was operating under an AI system (NOA intelligent driving) at the time of the accident, which directly influenced the events leading to the collision and subsequent fire causing fatalities. The AI system issued warnings and attempted to slow the vehicle but the driver took over shortly before the crash. The harm (three deaths) is directly linked to the use and malfunction/limitations of the AI system. Therefore, this is an AI Incident as the AI system's use and its failure to prevent the accident directly led to significant harm (loss of life).

Xiaomi SU7 accident draws nationwide attention: how can intelligent driving hold the safety red line?

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an intelligent driving assistance system (NOA) operating during the accident. The system's use and its limitations in handling complex road conditions (construction and lane changes) are central to the incident. The accident caused fatalities, which is a direct harm to persons. The article also discusses the system's malfunction or inability to fully manage the situation, the driver's intervention, and the broader implications for AI safety and marketing. Therefore, this is an AI Incident as the AI system's use directly led to harm (fatalities) and raises concerns about safety boundaries and responsible deployment.

Behind the Xiaomi accident tragedy: full rollout of "map-free end-to-end" intelligent driving was completed just in February

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system used in autonomous or assisted driving (Xiaomi's intelligent driving system with AEB and NOA). The accident occurred while the AI system was active, and its inability to detect road barriers and adapt to construction zones contributed to the crash. The discussion of sensor configurations and comparison to other vehicles highlights the AI system's limitations and its role in the incident. The harm is physical injury or death resulting from the accident, fulfilling the criteria for an AI Incident. The AI system's malfunction and design choices (e.g., reliance on pure vision without high-definition maps) are directly linked to the harm, not merely a potential risk, so this is not a hazard or complementary information but an incident.

Hi Finance | Xiaomi's "traffic myth" under safety scrutiny

2025-04-03
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA intelligent assisted driving system in the Xiaomi SU7 vehicle. The system's use and its inability to properly handle road construction and detours directly contributed to a fatal accident causing injury and death, which qualifies as harm to persons. This meets the criteria for an AI Incident because the AI system's malfunction or limitations directly led to injury and death. The article also discusses broader implications for the industry but the primary focus is on the realized harm from the AI system's use in this accident.

Expressway management center on road conditions at the Xiaomi accident site: the construction layout was adjusted after the accident

2025-04-03
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent driving assistance) in use at the time of the accident. The AI system's detection and response to obstacles, as well as the driver's failure to take over control, are central to the incident. The accident caused direct harm (fatalities), fulfilling the criteria for an AI Incident. The description includes the AI system's role in the chain of events leading to harm, including possible limitations in detecting construction signs and the driver's delayed reaction. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Fengsheng | After the tragedy, new-energy vehicle design language should shift from flashy to safe

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA intelligent assisted driving system active during the accident. The system's use and its interaction with the driver and environment directly relate to the crash and subsequent fire. The harm is realized: three fatalities occurred due to the fire and inability to escape, which is linked to the vehicle's AI-assisted driving mode and design features influenced by AI integration (e.g., electronic door handles). The article provides detailed timeline data from the AI system and discusses how AI and design choices contributed to the harm. Thus, it meets the criteria for an AI Incident, as the AI system's use and related design factors directly led to injury and death.

After the Xiaomi SU7 accident, can intelligent driving still be trusted?

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction (or failure to adequately intervene) of an AI system (the NOA intelligent driving system) that directly led to a fatal traffic accident, causing harm to persons (three deaths). The AI system's role is pivotal as it was active during the accident, issuing warnings that were not effectively acted upon, and questions remain about its safety measures and intervention capabilities. The incident also highlights systemic issues such as lack of regulatory standards and transparency, which contribute to the harm. Therefore, this qualifies as an AI Incident under the OECD framework because the AI system's use and malfunction have directly led to injury and death, fulfilling the criteria for harm to persons.

Advice from a driver of ten years: lessons written in blood warn us to always keep our own hands on the wheel

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent driving system) whose malfunction and the user's reliance on it directly led to a fatal traffic accident, causing harm to persons. The article details how the AI system's perception failed under complex conditions and how the critical handover period was mishandled, resulting in death. This fits the definition of an AI Incident because the AI system's malfunction and use directly caused harm to people. The article also discusses the broader implications and warnings but the core event is a realized harm caused by AI system failure and use.

Stop deifying intelligent driving! The Xiaomi SU7 tragedy exposes its safety hazards

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system—intelligent driving technology—explicitly mentioned as being used in the Xiaomi SU7 vehicle. The accident caused injury and death (harm to persons) and property damage (vehicle fire). The article details how the AI system's limitations and failures in perception and decision-making contributed to the collision and subsequent harm. User overreliance on the AI system also played a role. Therefore, this qualifies as an AI Incident because the AI system's use directly led to significant harm. The discussion of systemic issues and safety challenges further supports the classification but does not change the primary classification as an incident.

A single crash wipes HK$120 billion off Xiaomi's share price in two days

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (NOA intelligent assisted driving) in the vehicle at the time of the accident. The system's failure to prevent the crash or respond adequately to obstacles is directly connected to the fatal harm caused. The event involves the use and malfunction of an AI system leading to injury and death, which fits the definition of an AI Incident. The article also discusses the broader implications for AI-assisted driving safety and regulatory gaps, but the core event is a realized harm caused by AI system use.

After the fatal crash, Xiaomi SU7 owners have questions for Lei Jun

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (NOA intelligent driving assistance) during the accident, which was active and issued warnings shortly before the collision. The short reaction time and system limitations in detecting certain obstacles contributed indirectly to the fatal crash. The harm is realized (fatal injuries and vehicle fire), and the AI system's role is pivotal in the chain of events leading to the incident. The investigation is ongoing, but the direct link between AI system use and the fatal harm is evident. Hence, this is an AI Incident rather than a hazard or complementary information.

When the "Biggest Wealth Gainer" Halo Meets "72 Hours of Silence" | Hot Finance

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle involved in the accident is described as having intelligent assisted driving, from which the presence of an AI system controlling or assisting vehicle operation can reasonably be inferred. The fatal crash and subsequent fire caused direct harm (the deaths of three persons). The controversy over the vehicle's safety design, including the locked doors and battery explosion, points to a potential malfunction or failure of the AI system or its integration. The event is a clear case in which the AI system's use and possible malfunction directly led to harm, fulfilling the criteria for an AI Incident.

Lei Jun's Remarks Spark Controversy: Is Xiaomi SU7's Intelligent Driving Technology Really Mature? | Hot Finance

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle uses an AI-based intelligent driving system (NOA) that issued warnings but only gave the driver about 1 second to take control before the collision, which was insufficient for safe intervention. The emergency braking system (AEB) did not activate to prevent the crash. The accident resulted in three deaths and involved AI system warnings and controls that failed to prevent harm. The article also discusses the system's design, safety limitations, and regulatory concerns, confirming the AI system's direct and indirect role in the harm. Therefore, this event meets the criteria for an AI Incident due to the realized harm caused by the AI system's malfunction and use.

Latest: Xiaomi Auto Responds to Six Major Questions! Lei Jun Finally Speaks Out! Victim's Mother: I Hope They Keep Their Word

2025-04-02
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOA intelligent driving assistance and AEB active safety features in the Xiaomi SU7 vehicle. The accident caused fatal injuries (harm to persons) and is directly linked to the AI system's use and performance during the incident. The detailed discussion about the AI system's behavior, the driver's interaction, and the accident's circumstances confirm the AI system's pivotal role in the harm caused. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use and possible malfunction have directly led to significant harm (fatalities).

The Xiaomi SU7 Crash Exposes the Flaws of Intelligent Driving!

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the '智驾模式' (intelligent driving mode) in the Xiaomi SU7 vehicle. The AI system's use and its limitations (sensor range, processing power, and safety features) directly contributed to the accident and fatalities. The system's failure to provide sufficient warning and the short reaction window led to the collision and subsequent fire, causing injury and death. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction directly led to harm to persons.

After the SU7 Accident in Anhui, Why Didn't Xiaomi Contact the Families? Xiaomi Auto Responds

2025-04-01
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent assisted driving) that was active during the accident. The AI system's warnings and transition to manual control are described, and the collision caused fatal injury, fulfilling the criteria for harm to a person. The AI system's role is pivotal in the incident, as the accident occurred during its operation and transition to manual control. Therefore, this qualifies as an AI Incident. The article also includes Xiaomi's response and investigation cooperation, but the primary focus is the incident and its consequences, not just complementary information or future hazards.

Stop Blindly Hyping Intelligent Driving: Behind the Serious Xiaomi SU7 Accident, Lei Jun Is Being Consumed by His Own Publicity

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as NOA intelligent assisted driving, which was active and issuing warnings before the crash. The AI system's inability to fully avoid the obstacle and the driver's overtrust in the system led to a fatal collision, fulfilling the criteria for an AI Incident due to direct harm to persons. The detailed timeline and system behavior confirm AI involvement in the incident's causation. The harm is realized (fatalities), not just potential, so this is not a hazard. The article also discusses broader societal and marketing issues but the core event is a direct AI-related fatal incident.

Xiaomi Auto: AEB Currently Does Not Respond to Obstacles Such as Traffic Cones, Water-Filled Barriers, Rocks, or Animals

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AEB and NOA) used in Xiaomi vehicles. It discusses the AI system's operational scope and limitations, specifically that AEB does not respond to certain obstacles. While this limitation could plausibly lead to an incident if the vehicle encounters such obstacles and the driver does not intervene, the article does not report any actual harm or accident caused by this limitation. The incident described involved the driver taking over after a NOA alert, preventing harm. Thus, the event is best classified as an AI Hazard, reflecting a plausible risk of harm due to the AI system's current limitations, but no realized harm or incident is described.

Xiaomi Auto Releases Data Records from the SU7 Highway Fire Accident: Only Five Seconds from NOA Risk Warning to Crash

2025-04-01
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NOA intelligent assisted driving) whose use directly led to a traffic accident causing harm. The detailed timeline shows the AI system detected the obstacle and issued warnings, but the collision still occurred shortly after. This constitutes an AI Incident because the AI system's operation and its interaction with the human driver were pivotal in the chain of events leading to harm. The report includes data disclosure and investigation but focuses on the incident itself, not just complementary information.

Xiaomi SU7: Unfairly Blamed?!

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOA intelligent assisted driving system in the Xiaomi SU7 vehicle. The system detected obstacles and issued warnings but did not prevent the collision, which led to fatalities and injuries, constituting harm to persons. The AI system's role is pivotal as it was the first line of active safety defense, and its limitations or the driver's misuse contributed indirectly to the incident. Therefore, this is an AI Incident as the AI system's use directly or indirectly led to harm to people.

Lei Jun Responds for the First Time to the Xiaomi SU7 Highway Fire Accident: Promises Not to Evade Responsibility and to Keep Cooperating with the Investigation; Victims' Families Comment, "We Hope He Keeps His Promise"

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the NOA intelligent assisted driving system, which was active during the accident. The accident caused direct harm (death of three people), fulfilling the criteria for an AI Incident. The AI system's involvement is clear as it was controlling or assisting vehicle operation before the driver took over, and the collision occurred despite system warnings and driver intervention. Therefore, this is an AI Incident due to the direct harm caused linked to the AI system's use and possible malfunction or limitations.

Market Value Shrinks by 80 Billion and Doubts Remain over "AEB Intervention": Xiaomi SU7 Faces Its "Darkest Hour"

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Xiaomi SU7's driver assistance system including AEB and sensor fusion AI) whose use during driving directly relates to a serious traffic accident causing harm (vehicle collision, potential injury, and significant financial loss). The AI system's detection and braking functions were active but apparently insufficient to prevent the accident, indicating a malfunction or limitation. The involvement of AI in the accident and the resulting harms (physical risk, economic loss) meet the criteria for an AI Incident. The article discusses the AI system's role in the accident and the uncertainty about AEB intervention, confirming AI's pivotal role in the harm.

Why No Contact with the Families? Could the Doors Be Opened? Xiaomi Auto Responds to Six Major Questions

2025-04-01
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving and AEB safety features) that was in use at the time of a fatal accident causing loss of life. The AI system's operation, including its detection and response to obstacles, is directly relevant to the incident. The harm (death of three individuals) has occurred and is linked to the AI system's use and its limitations in responding to certain obstacles. Xiaomi's inability to currently analyze the vehicle post-accident does not negate the AI system's involvement. Hence, this is an AI Incident due to direct harm caused during AI system use.

Families Speak Out Again After the Xiaomi Auto Accident: Who Is to Blame for Hyping "Intelligent Driving"?

2025-04-01
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Xiaomi Pilot Pro, an L2 intelligent driving assistance system) that was active during the accident. The system detected obstacles, issued warnings, and began deceleration, but the driver had to take over and could not prevent the collision within a very short reaction window. The fatalities and vehicle fire constitute direct harm to persons and property. The incident highlights the risks of overreliance on AI-assisted driving systems that are not fully autonomous and the potential for misuse or misunderstanding of AI capabilities by users. Given the direct causal link between the AI system's operation and the fatal accident, this event meets the criteria for an AI Incident.

The Xiaomi SU7, Fresh off Record Deliveries, Has Had an Accident! Attention Centers on Two Issues

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system (NoA) being active during the accident, issuing warnings and attempting to reduce speed, but the collision still occurred shortly after the driver took control. The harm is realized (fatal accident), and the AI system's role is pivotal in the chain of events leading to the harm. Additionally, the reported door lock failure after the crash suggests a malfunction related to the vehicle's AI or electronic systems, further supporting the classification as an AI Incident. The event is not merely a potential risk or a general update but a concrete incident with direct harm linked to AI system use and malfunction.

The Xiaomi SU7 Highway Collision: What Trapped Three Lives?

2025-04-01
新浪财经
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's NOA intelligent driving assistance system is an AI system actively involved in the vehicle's operation at the time of the crash. The event resulted in direct harm to human life (three deaths) and property (vehicle fire). The AI system's warnings, deceleration, and transition to manual control are described, indicating its operational role. The reported failure of the car doors to open after the collision, potentially linked to the AI or electric locking mechanisms, contributed to the harm. Given the direct causal link between the AI system's use and the fatal outcome, this event meets the criteria for an AI Incident.

Xiaomi SU7 Fire: Another Test of Trust

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle uses an AI-based assisted driving system (NOA), which was active during the accident. The system's inability to fully handle complex road conditions and the driver's overtrust in the system led to the crash, causing fatalities. Additionally, the electronic door lock system, dependent on the vehicle's power and electronic controls, failed to unlock after the crash, preventing escape and rescue, which is a direct harm linked to the AI-enabled vehicle systems. This constitutes an AI Incident because the AI system's use and malfunction directly led to injury and death (harm to persons). The article also discusses broader safety and regulatory issues but the core event is a realized harm caused by AI system use and failure.

Public Opinion Analysis Report on a Well-Known EV Brand's Highway Fire Accident

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle uses an AI system (NOA) for autonomous driving assistance, which was active during the accident. The AI system issued warnings and attempted to reduce speed but was overridden by human control shortly before the collision. The accident caused fatalities and vehicle fire, indicating harm to persons and property. The AI system's malfunction or limitations in handling the situation contributed to the incident. Hence, this is an AI Incident as the AI system's use and malfunction directly or indirectly led to significant harm.

Xiaomi Urgently Responds to the Investigation Findings: Once Again, Entrusting Lives to "Intelligent Driving" Is to Blame!

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOA assisted driving system) whose use directly contributed to a fatal accident, causing harm to human life (harm category a). The AI system's limitations in handling complex road conditions and the driver's insufficient attention combined to cause the crash and fire. Xiaomi's rapid response does not negate the fact that the AI system's malfunction or insufficient capability was a contributing factor to the harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to injury and death.

Were the Xiaomi SU7's Doors "Locked Shut" During the Accident? Industry Insiders: This Can Happen If the Unlock Signal Cannot Be Transmitted

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (NOA intelligent driving assistance) that was active before and during the accident. The system's warnings and control transitions are described, and the failure of electronic signals (likely AI-related control systems) to unlock the doors after the crash is a direct malfunction contributing to harm (inability to escape, fatalities). The harm (death of three people) has occurred, and the AI system's malfunction or use is a contributing factor. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Xiaomi Responds with Details of the SU7 Highway Collision and Fire as Its Hong Kong Shares Fall More than 5%, Shortly After Releasing Its "Strongest Annual Report Ever"

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Xiaomi's NOA intelligent assisted driving system, an AI system that was controlling the vehicle at the time of the accident. The system detected obstacles, issued warnings, and attempted to decelerate, but the driver took over shortly before the collision. The collision caused three deaths, which is a direct harm to persons. Therefore, this is an AI Incident because the AI system's use was directly linked to the fatal accident. The article also discusses the aftermath, including company responses and public concerns, but these do not change the classification.

Behind the Myth of 200,000 Deliveries: How Will Xiaomi Auto Clear the Life-or-Death Safety Hurdle?

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (NOA intelligent assisted driving) that was in use at the time of a fatal crash, directly linking the AI system's operation and its limitations to harm (injury and death). The recall due to software issues and the acknowledged system bugs further confirm AI system malfunctions impacting safety. The event clearly meets the criteria for an AI Incident because the AI system's use and malfunction have directly led to harm to persons. The broader context of safety concerns and after-sales service issues supports the assessment but does not change the classification from AI Incident.

Xiaomi SU7 Highway Collision and Fire Kills Three; Four Major Questions Remain Unanswered

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The vehicle was operating with an AI-based intelligent driving assistance system (NOA) and active safety features including AEB, which are AI systems by definition. The accident involved the AI system's use and its failure or limitations in preventing the collision and subsequent harm. The fatalities and vehicle fire are direct harms linked to the AI system's operation and its interaction with the driver. The article discusses the AI system's detection, braking response, and safety features, indicating the AI system's pivotal role in the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

Experts Say the Xiaomi SU7's AEB May Have Problems

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7's AEB system is an AI system designed to detect obstacles and prevent collisions. The article discusses a fatal accident where the AEB system's involvement is questioned, with experts noting possible limitations in obstacle detection and failure to trigger under certain conditions. This suggests the AI system's malfunction or non-intervention may have directly or indirectly contributed to the harm (deaths). Hence, this qualifies as an AI Incident due to injury and harm to persons caused or potentially caused by the AI system's failure or limitations.

Xiaomi Owners, Please Put Safety First

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the vehicle was in NOA intelligent assisted driving mode at the time of the crash, indicating the involvement of an AI system in the vehicle's operation. The collision with the highway barrier and subsequent fire caused the deaths of three individuals, which is a direct harm to human life. The timeline shows the AI system issued a risk warning seconds before the driver took over, but the accident still occurred, indicating a failure or limitation of the AI system's safety function. This direct link between the AI system's use and the fatal harm meets the criteria for an AI Incident under the OECD framework.

Xiaomi SU7 Highway Collision and Fire Kills Three; Four Major Questions Remain Unanswered

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system explicitly described as the NOA intelligent driving assistance and AEB safety systems in the Xiaomi SU7 vehicle. The accident caused direct harm (three deaths) and the AI system's performance and response are questioned as contributing factors. The AI system's use and possible malfunction or insufficient performance in emergency braking and obstacle detection directly relate to the harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to injury and death.

Doors That Wouldn't Open, Speeding That Wouldn't Stop

2025-04-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of intelligent driving assistance in an electric vehicle. The AI's role in controlling the vehicle and the design of the car's safety features, including door mechanisms, are directly linked to the harm—fatalities due to inability to escape after a crash. The article explicitly connects the AI system's decision-making and vehicle design to the incident's outcome, constituting direct harm to human life. Therefore, this qualifies as an AI Incident because the AI system's use and related design issues have directly led to injury and death of persons.

Lei Jun Breaks His Silence! Xiaomi Officially Addresses Six Major Questions About the Crash

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article involves an AI system component (AEB) in the Xiaomi vehicle, which is an AI-based safety feature. However, there is no indication that the AI system caused or contributed to the accident or any harm. The company's statement denies having analyzed the vehicle yet and emphasizes ongoing investigation. The article mainly provides information about the company's response and investigation status, without reporting realized harm or plausible future harm directly linked to AI malfunction or misuse. Therefore, this is Complementary Information, as it updates on the investigation and company response related to an AI system but does not report an AI Incident or AI Hazard.

Qingdao TV's "Zhengzai News" Interview with a Victim's Mother Reveals Considerable New Information

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves a fatal car accident where the driver was using an intelligent driving system (an AI system). The accident caused deaths, which is a direct harm to persons. The family's concern about the AI system's reliability and the accident's circumstances suggest the AI system's malfunction or misuse contributed to the harm. Hence, this is an AI Incident as per the definitions provided.

Take a Look at This Dashcam Video...

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The presence of an AI system (Huawei intelligent driving) is reasonably inferred as it is mentioned in the context of the accident. However, the event does not describe a malfunction or misuse of the AI system leading to the accident or injury. The harm (injury to the driver) occurred, but the AI system's role is not established as causal or contributory. The narrative mainly serves as a cautionary commentary on the limits of AI driving assistance. Therefore, this is best classified as Complementary Information providing context and warnings about AI system use rather than an AI Incident or Hazard.

Guoran Finance | The Xiaomi Auto Fire: Is the Owner or the Automaker at Fault?

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA intelligent driving feature active during the accident. The AI system's use directly contributed to the harm: the vehicle was in AI-assisted driving mode when it crashed, leading to fatalities and vehicle destruction. The article discusses the system's warnings, the driver's takeover, and the challenges of reaction time, indicating the AI system's role in the chain of events causing harm. The harm includes injury/death (a) and harm to property (d). Thus, it meets the criteria for an AI Incident as the AI system's use directly led to significant harm.

Three Girls Lost Their Lives: Can Lei Jun Still Be Trusted?

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-based intelligent driving assistance system (NOA) that was active during the accident. The system's failure to timely detect the road closure and initiate emergency braking directly led to the collision and fatalities. The discussion of the AI system's capabilities, limitations, and the vehicle's safety design issues further supports the AI system's role in causing harm. Therefore, this event meets the criteria for an AI Incident as the AI system's malfunction and use directly led to injury and death of persons.

A Life-and-Death Warning amid Runaway Technology: In the Era of Intelligent Driving for All, Who Should Pay for the "Education Vacuum"?

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an L2 intelligent driving assistance system (NOA) that was active during the accident. The system's perception limitations and failure to trigger automatic emergency braking directly contributed to the collision and fatalities. Additionally, the article discusses the human-machine interaction issues and lack of consumer education, which indirectly exacerbated the harm. The harm is realized (fatalities), and the AI system's malfunction and use are pivotal factors. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

300 Life-or-Death Seconds in an EV: Enough Time to Escape?

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as being active (NOA intelligent assisted driving) at the time of the crash. The crash and subsequent fire caused fatalities, which constitute harm to persons. The AI system's use and possible limitations in handling the situation contributed indirectly to the incident. Therefore, this qualifies as an AI Incident due to the direct link between AI system use and realized harm (fatalities and fire). The article also discusses broader safety challenges and standards but the primary focus is on the incident and its consequences.

Time Is Running Out for Xiaomi to Explain

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOA intelligent driving assistance system) that was in use during the accident. The system's late warning and handover to the driver with insufficient reaction time directly contributed to the fatal collision, causing injury and death (harm to persons). The AI system's malfunction or design limitations are central to the incident. This meets the criteria for an AI Incident because the AI system's use and malfunction directly led to harm (fatalities). The detailed discussion of the system's warnings, driver reaction times, and accident circumstances supports this classification.

The Fatal Xiaomi Auto Accident: Between Humans and Intelligent Driving, Who Tamed Whom?

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the Navigate on Autopilot (NOA) feature in the Xiaomi SU7 vehicle, which is an AI-based intelligent driving assistance system. The accident resulted in the deaths of the driver and two passengers, constituting injury and harm to persons. The AI system was active and issued warnings shortly before the crash, but the driver had insufficient time to react, indicating a malfunction or limitation in the AI system's ability to prevent the accident. The article also highlights the human-machine interaction issues and the system's design constraints. Given the direct causal link between the AI system's use and the fatal harm, this event meets the criteria for an AI Incident under the OECD framework.

Xiaomi SU7 Collision and Fire Kills Three: Who Bears Responsibility?

2025-04-02
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating under the NOA intelligent assisted driving mode, an AI system that controls vehicle speed and steering assistance. The accident occurred while the system was active, and the driver took over control shortly before the collision. The AI system detected obstacles and issued warnings and deceleration commands, but the collision still happened, resulting in three fatalities. This clearly constitutes direct harm to persons caused by the use of an AI system. The event is not merely a potential hazard or complementary information but a realized incident involving AI malfunction or limitations contributing to fatal harm. Hence, it is classified as an AI Incident.

Two Fatal Seconds: A Full Review of the Xiaomi SU7 Collision and Fire | Business Headlines No. 68

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the intelligent driving assistance (NOA) and AEB system in the Xiaomi SU7 vehicle. The AI system's failure to trigger emergency braking in response to a static obstacle and the short reaction time for human takeover directly led to a fatal collision and fire causing multiple deaths. This is a clear case where the AI system's malfunction and use have directly led to injury and harm to persons, fulfilling the criteria for an AI Incident. The detailed analysis of the AI system's capabilities, limitations, and the accident's circumstances supports this classification. The event is not merely a potential hazard or complementary information but a realized harm caused by AI system involvement.

Xiaomi Reports on the SU7 Highway Accident as Shares Briefly Fall More than 5%; Experts Say Assigning Liability Is Complex

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating in an AI-assisted driving mode (NOA), which is an AI system that controls steering and speed. The accident occurred while the AI system was active, and despite warnings and driver intervention, the vehicle collided with a barrier. This indicates the AI system's role in the incident, either through malfunction or limitations, contributing to harm. The event involves direct harm (a traffic accident) linked to the AI system's use, qualifying it as an AI Incident. The discussion about responsibility complexity and the system's immaturity further supports the classification as an incident rather than a hazard or complementary information.

300 Life-or-Death Seconds: Is the Golden Escape Window After an EV Collision Fire Long Enough?

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA intelligent assisted driving active before the crash. The accident caused fatalities and vehicle fire, fulfilling the harm criteria (a) injury or harm to persons and (d) harm to property. The AI system's use contributed indirectly to the incident, as the vehicle was under AI-assisted control at the time. The article also discusses safety standards and challenges related to electric vehicle battery fires, which are relevant to the incident's context but do not negate the AI system's involvement. Hence, the classification as an AI Incident is appropriate.

Did Intelligent Driving Respond in Time? Why Did the Vehicle Burn? Were the Doors Locked? Three Questions About the Fatal Xiaomi SU7 Crash

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the intelligent assisted driving system (NOA) active during the accident. The system detected obstacles and issued warnings but only 2-3 seconds before collision, which was insufficient for the driver to respond effectively. The accident caused three deaths and a vehicle fire, constituting injury/harm to persons and harm to property. The AI system's role in the chain of events leading to the harm is direct and significant, as the system's timing and driver monitoring were factors in the crash. Therefore, this qualifies as an AI Incident under the framework.

Industry Insiders on the Xiaomi SU7 Accident: The Doors Reportedly Could Not Be Opened

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 was operating under NOA intelligent assisted driving, an AI system, at the time of the accident. The AI system detected obstacles and attempted to slow down, but the vehicle still collided with a barrier. Post-collision, the electronic door locking system, which relies on motorized mechanisms controlled by the vehicle's electronics (likely integrated with AI systems), failed to unlock, trapping occupants and raising safety concerns. This failure is linked to the AI system's malfunction or power loss after the crash. The incident caused direct harm risks to occupants (injury, fire hazard, inability to exit), fulfilling the criteria for an AI Incident where AI system use and malfunction directly led to harm or risk thereof.

300 Life-or-Death Seconds in an EV: Enough Time to Escape?

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as active (NOA intelligent assisted driving) during the fatal crash. The crash and subsequent fire caused direct harm (three deaths). The AI system's use is linked to the incident, as it was engaged before the driver took control and the collision occurred. The article also discusses safety standards and challenges related to electric vehicle battery fires, but the core event is a fatal crash involving an AI system. Hence, it meets the criteria for an AI Incident due to direct harm caused with AI system involvement.

Five Major Questions Remain About the Xiaomi Auto Accident; Lei Jun Speaks Late at Night: "I Must Step Forward and Make a Promise on Xiaomi's Behalf!"

2025-04-02
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (NOA intelligent driving assistance) during the accident. The AI system's operation and its interaction with the driver are central to the event. The accident caused direct harm (three fatalities), which is a clear AI Incident as per the framework. The discussion of system limitations, driver reaction time, and safety features further supports the classification as an AI Incident rather than a hazard or complementary information. The company's official statements and ongoing investigation do not negate the realized harm caused by the AI system's involvement.

Families of the Xiaomi crash victims are being harassed online: the truth matters far more than emotion

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (assisted/autonomous driving) as part of the vehicle technology involved in a fatal accident. However, the article does not confirm that the AI system malfunctioned or directly caused the harm; the investigation is still underway. The harms (fatalities) have occurred, but the AI system's role is not yet established as causal. Therefore, this is not an AI Incident but an AI Hazard, as the AI system's involvement could plausibly lead to harm or has potential safety implications pending investigation.

The crux of the Xiaomi SU7 accident: false advertising of intelligent driving and a lack of regulation

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the NOA intelligent driving assistance system) whose malfunction—specifically its inability to detect static obstacles—directly led to a fatal car crash causing multiple deaths. The article details how the AI system's design flaws and misleading marketing contributed to user overreliance, and how regulatory gaps failed to prevent deployment of an unsafe system. This meets the criteria for an AI Incident because the AI system's malfunction and use directly caused harm to persons (fatalities), fulfilling the definition of an AI Incident under harm category (a).

The Xiaomi SU7 fire: the details Xiaomi disclosed are surprising, and also worrying

2025-04-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of advanced vehicle electronic systems that record detailed driving data, likely including AI components for autonomous or assisted driving. However, the accident's cause is not yet determined, and there is no evidence that AI system malfunction, misuse, or failure directly or indirectly caused the fatalities. The article expresses concerns about possible future risks of AI system failures or hacking but does not document an actual AI Incident. Therefore, this event is best classified as an AI Hazard, as the AI system's involvement could plausibly lead to harm in the future, but no confirmed AI-related harm has occurred yet.

After the Xiaomi SU7 highway fire, an owner says he previously had an accident on the same stretch of road

2025-04-02
sznews.com
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was using the NOA autonomous driving feature when the accident occurred. The AI system's failure to recognize or alert the driver about the construction zone and the insufficient road warnings contributed to the accident. The incident resulted in serious harm, including a fatal fire. The AI system's involvement in the use phase and its malfunction or inadequacy in hazard detection directly or indirectly led to harm, meeting the criteria for an AI Incident. The report also discusses systemic issues with road signage and navigation alerts, reinforcing the AI system's role in the harm.

The SU7 accident: can you see what really matters?

2025-04-02
club.autohome.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system component (assisted driving) that indirectly contributed to an accident due to its inability to handle certain road conditions. The discussion includes plausible indirect harm (accident and injury) linked to the AI system's use. Since the accident has occurred and harm is implied, this qualifies as an AI Incident. The text does not describe a new hazard or complementary information but focuses on the incident and its analysis.

Xiaomi Auto faces the most severe trust crisis since its founding - cnBeta.COM (mobile edition)

2025-04-01
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, namely the intelligent driving (autonomous driving) system in the Xiaomi vehicle. The system's use is directly linked to a fatal accident causing injury and death, which is a clear harm to persons. The discussion of the system's behavior, limitations, and the accident's circumstances shows that the AI system's malfunction or misuse contributed to the harm. The article also references similar past incidents involving AI driving systems, reinforcing the classification. Hence, this is an AI Incident due to realized harm caused directly or indirectly by the AI system's use.

Is intelligent driving a game where only recklessness gets you killed?

2025-04-03
caifuhao.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an intelligent driving system (L2 level autonomous driving), which was in use at the time of a fatal accident. The AI system's limitations and possible malfunction or misjudgment contributed indirectly to the harm (fatalities). The article also discusses systemic issues such as misleading marketing, lack of regulatory oversight, and safety risks inherent in current AI driving assistance technologies. Since the harm (death) has occurred and the AI system's involvement is a contributing factor, this is classified as an AI Incident rather than a hazard or complementary information.

Why has the family not been contacted? Xiaomi Group responds: still waiting for notice of a meeting - cnBeta.COM (mobile edition)

2025-04-01
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving and AEB safety features) that was active during the accident. The AI system's operation and limitations (e.g., not responding to certain obstacles) played a role in the collision and resulting harm (vehicle damage and fire). The article reports on a real accident with harm caused, directly linked to the AI system's use and performance. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (property damage and potential injury).

Xiaomi highway fire kills three; industry insiders say its intelligent driving is not in the top tier - cnBeta.COM (mobile edition)

2025-04-02
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Xiaomi's intelligent driving system) in the development and use phases. The system was active during the accident and failed to prevent the collision, which directly led to fatalities, constituting injury or harm to persons. The AI system's role is pivotal as the accident occurred while the vehicle was under AI-assisted driving, and the system's limitations in obstacle detection and emergency braking are highlighted as contributing factors. Therefore, this qualifies as an AI Incident under the OECD framework.

Xiaomi responds to six major questions about the crash that killed three: why the family was not contacted, why the car caught fire, and whether the doors could open - cnBeta.COM (mobile edition)

2025-04-02
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NOA intelligent assisted driving and AEB active safety features) whose use directly led to a fatal car accident causing three deaths and a vehicle fire. The AI system's operation and limitations are described, and the harm (fatalities) is realized. Therefore, this qualifies as an AI Incident due to direct harm to persons caused by the AI system's use and malfunction or limitations in its operation.

Behind the Xiaomi SU7 crash: the lidar that was dropped - cnBeta.COM (mobile edition)

2025-04-03
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the Xiaomi Pilot Pro autonomous driving system operating in NOA mode. The system's use directly contributed to the fatal crash, as the vehicle was in AI-assisted driving mode and failed to avoid the obstacle in time. The harm is clear: multiple fatalities resulted from the accident. The article also provides context on the limitations of pure vision AI systems without lidar, which is relevant to the incident. Therefore, this is an AI Incident due to the direct causal link between the AI system's operation and the harm (fatalities).

What Lei Jun feared most has happened - cnBeta.COM (mobile edition)

2025-04-02
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the NOA intelligent assisted driving system in the Xiaomi SU7 vehicle. The system's detection and response to obstacles, and its failure to prevent the collision, directly led to the deaths of three individuals, which is a clear harm to people. The article also details prior malfunctions and failures of the AI driving system, reinforcing the role of AI system malfunction in causing harm. The harm is realized and significant, meeting the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a concrete incident with direct harm caused by AI system use and malfunction.

It is time for Lei Jun to cool down the Xiaomi Auto hype - TMTPost official site

2025-04-02
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an intelligent driving/autonomous driving system) in use at the time of a fatal traffic accident causing multiple deaths. The AI system's malfunction or failure to prevent the accident is directly linked to harm to persons, fulfilling the criteria for an AI Incident. The article also discusses the company's response and public reaction, but the primary focus is the incident itself and its consequences. Therefore, the classification is AI Incident.

Xiaomi discloses details of the SU7 highway collision and fire; UNISOC completes shareholding reform | Digital Intelligence Morning Brief

2025-04-01
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 vehicle was operating under an AI-based intelligent driving assistance system (NOA) at the time of the accident. The AI system detected obstacles and attempted to reduce speed, but the vehicle still collided with a concrete barrier, resulting in fatalities. This constitutes an AI Incident because the AI system's use and its limitations directly contributed to fatal harm (injury and death of persons). The other parts of the article, about the company's shareholding reform and AI industry policy, are unrelated to any specific AI harm or hazard. Therefore, the primary classification is AI Incident, based on the Xiaomi SU7 crash.

Behind the Xiaomi Auto highway tragedy: the "map-free end-to-end" intelligent driving system was fully rolled out only in February - Stockstar

2025-04-02
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the autonomous driving system of the Xiaomi SU7 vehicle. The AI system's use and limitations directly contributed to a serious traffic accident on a highway, causing harm to people (likely injury or death). The article details how the AI system's failure to detect road obstacles and adapt to construction zones led to the crash. This fits the definition of an AI Incident because the AI system's malfunction or design limitations directly led to harm. The article does not merely discuss potential risks or future hazards but reports on an actual accident with ongoing investigation and harm. Therefore, the classification is AI Incident.

From showing off technology to taking lives: the myth and the trap of Xiaomi Auto

2025-04-03
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the NOA (Navigate on Autopilot) driving assistance system active during the accident. The system's failure to detect road conditions and obstacles, combined with other technical failures (electronic door lock failure, battery fire), directly contributed to the fatal incident. This constitutes an AI Incident because the AI system's malfunction and use have directly led to injury and death (harm to persons). The article also discusses misleading marketing and insufficient safety measures, reinforcing the link between AI system use and harm. Therefore, the classification is AI Incident.

When runaway technology meets lives entrusted to it

2025-04-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as NOA intelligent assisted driving, which is an AI system providing real-time driving assistance. The system detected obstacles and issued warnings but did not prevent the collision, and the driver took over but could not avoid the crash. The AI system's failure or limitations indirectly led to the fatal injuries and deaths, fulfilling the criteria for an AI Incident. The article also discusses systemic safety issues related to AI-assisted driving and battery safety, reinforcing the connection to AI system use and harm. Hence, the classification is AI Incident.

Xiaomi Auto enters the most severe trust crisis since its founding

2025-04-01
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the intelligent driving/autonomous driving system in the Xiaomi SU7 vehicle) whose use directly led to a fatal accident causing loss of life, which is a clear harm to persons. The article details the accident circumstances, the AI system's role, and the resulting trust crisis, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's malfunction or limitations contributed to the incident. Therefore, the classification is AI Incident.

Xiaomi SU7 Ultra's first OTA update: improved energy management and supercharging experience

2025-04-04
驱动之家
Why's our monitor labelling this an incident or hazard?
The Xiaomi SU7 Ultra includes AI systems for intelligent driving and energy management. The OTA update optimizes these AI-driven functions, which are integral to vehicle operation and safety. However, the article does not report any harm or incidents resulting from these AI systems or their updates, nor does it indicate any plausible risk of harm. Therefore, this event is best classified as Complementary Information, providing context on AI system improvements and ongoing development without describing an incident or hazard.

Xiaomi SU7 Ultra's first OTA update: improved energy management and supercharging experience

2025-04-04
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the intelligent driving and energy management systems in the vehicle) and its use (software update). However, there is no indication of any realized harm or plausible risk of harm from this update. The article focuses on improvements and user experience enhancements, not on incidents or hazards. Therefore, this is best classified as Complementary Information, providing context and updates on AI system development and deployment without reporting an incident or hazard.

Reflections on the Xiaomi SU7 accident: maintain humility and avoid excessive marketing

2025-04-07
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Xiaomi SU7's intelligent driving assistance system with NOA and AEB functions) whose malfunction and limitations directly contributed to a fatal accident causing loss of life. The AI system failed to detect and respond appropriately to static obstacles, and the driver’s overreliance on the system led to delayed intervention. These factors caused harm to persons, fulfilling the criteria for an AI Incident. The article provides detailed evidence of the AI system's role in the accident and the resulting harm, not merely potential or future risk, thus excluding classification as a hazard or complementary information.