Food Delivery Algorithms Linked to Rider Injuries and Unsafe Working Conditions in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Chinese food delivery platforms Meituan and Ele.me use AI-driven algorithms to optimize delivery times, pressuring riders to speed and violate traffic rules. This has led to frequent accidents, injuries, and even deaths among riders. Public backlash prompted minor platform adjustments, but core algorithmic pressures remain, perpetuating unsafe working conditions.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (the delivery platform's algorithm) that optimizes delivery times and order assignments. This algorithm's use has directly and indirectly led to harm: increased traffic accidents and unsafe behavior by delivery riders, which constitutes injury or harm to persons (harm category a). Additionally, the algorithm's role in labor exploitation and pressure on riders relates to violations of labor rights (harm category c). The article provides evidence of realized harm, not just potential risk, making this an AI Incident rather than a hazard or complementary information. The detailed discussion of the algorithm's impact on rider safety and working conditions confirms the AI system's pivotal role in causing these harms.[AI generated]
AI principles
Safety
Respect of human rights

Industries
Logistics, wholesale, and retail
Food and beverages

Affected stakeholders
Workers

Harm types
Physical (injury)
Physical (death)

Severity
AI incident

Business function:
Logistics

AI system task:
Goal-driven organisation


Articles about this incident or hazard

2020-09-09
雪球
Why's our monitor labelling this an incident or hazard?
The article discusses the use and optimization of an AI-based dispatch system that schedules delivery riders and allocates buffer time. However, it does not report any actual harm or incident caused by the AI system, nor does it indicate a plausible future harm. Instead, it focuses on the company's response and improvements to the system to better support riders and enhance safety. Therefore, this is complementary information about ongoing improvements and responses related to an AI system, not an incident or hazard.

Q论 | 87% of netizens are willing to wait an extra 10 minutes for food delivery; delivery riders: the platform's algorithm has a bug!

2020-09-09
腾讯科技
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the delivery platform's algorithm) that optimizes delivery times and order assignments. This algorithm's use has directly and indirectly led to harm: increased traffic accidents and unsafe behavior by delivery riders, which constitutes injury or harm to persons (harm category a). Additionally, the algorithm's role in labor exploitation and pressure on riders relates to violations of labor rights (harm category c). The article provides evidence of realized harm, not just potential risk, making this an AI Incident rather than a hazard or complementary information. The detailed discussion of the algorithm's impact on rider safety and working conditions confirms the AI system's pivotal role in causing these harms.

2020-09-10
新华网
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI-based dispatch and scheduling systems that enforce strict delivery time targets, which have led to unsafe riding practices and documented injuries and fatalities among delivery riders. The AI system's role in setting these time constraints and penalties for delays directly contributes to the harm. The platforms' responses acknowledge the AI system's impact and attempt to mitigate harm. This fits the definition of an AI Incident because the AI system's use has directly led to injury and harm to a group of people (delivery riders).

Delivery riders are trapped by more than the system

2020-09-11
新华网
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (delivery platform algorithms optimizing routes and timing) and discusses harms related to their use (pressure and risk to delivery riders, societal impacts). However, it does not describe a specific event or incident where the AI system directly or indirectly caused a harm or malfunction. Instead, it reflects on systemic issues and future technological developments. This fits the definition of Complementary Information, as it provides supporting context and analysis about AI's role and societal implications without reporting a concrete AI Incident or AI Hazard.

Ensuring construction project quality cannot rely on a deputy mayor "coming to take a look"

2020-09-11
新华网
Why's our monitor labelling this an incident or hazard?
The article references platform algorithms managing delivery riders, which implies AI system involvement. The discussion highlights concerns about the algorithmic scheduling causing pressure and safety risks for riders, but no actual harm event is reported. The platform's introduction of a feature to let customers choose to wait longer is described as insufficient and more of a symbolic gesture. There is no direct or indirect harm caused by AI systems reported, nor a clear plausible future harm scenario. Other topics in the article are unrelated to AI. Thus, the article provides complementary information about AI system impacts and societal/governance responses rather than reporting an AI Incident or Hazard.

2020-09-10
新华网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (the delivery platform's algorithmic scheduling and performance management system) that controls delivery workers' tasks and timing, causing them to take dangerous risks on the road. This system's use has directly led to increased traffic accidents and health risks for workers, fulfilling the criteria for harm to health (a) under AI Incident. The harm is realized and ongoing, not just potential. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Delivery platforms squeezing riders' delivery times? Ele.me and Meituan respond that they will improve

2020-09-09
新华网
Why's our monitor labelling this an incident or hazard?
The platforms' dispatch and scheduling systems likely use AI or algorithmic decision-making to optimize delivery times and assignments. The compression of delivery times by these AI systems has indirectly led to harm by pressuring riders to violate traffic rules and increasing their risk, which constitutes harm to health and safety (a). The platforms' responses to adjust these systems indicate recognition of the AI system's role in causing these harms. Therefore, this event qualifies as an AI Incident due to the AI system's use leading to realized harm to workers' health and safety.

2020-09-09
新华网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the role of delivery system algorithms (AI systems) in creating stressful and hazardous working conditions for delivery riders, which has led to increased traffic accidents (harm to health). This harm is directly linked to the use of AI-driven scheduling and performance systems. Therefore, this qualifies as an AI Incident because the AI system's use has indirectly led to harm to people (delivery riders).

2020-09-11
新华网
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is reasonably inferred from references to platform dispatch and scheduling algorithms affecting delivery times and rider workload. The article focuses on responses to existing social concerns and labor issues rather than reporting an incident or hazard caused by AI. There is no specific harm directly or indirectly linked to AI system malfunction or misuse described. The content aligns with Complementary Information as it updates on societal and governance responses, platform measures, and expert commentary related to AI-driven delivery systems and labor conditions.

2020-09-11
新华网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by delivery platforms to schedule and manage delivery times, which indirectly affect the health and safety of delivery riders by imposing tight time constraints. Although harm is implied and concerns about rider safety are raised, the article does not report a concrete incident of injury or harm caused by the AI system. Instead, it discusses the potential for harm and the platforms' responses to mitigate these risks. Therefore, this qualifies as Complementary Information, providing context and updates on AI system impacts and governance rather than reporting a new AI Incident or AI Hazard.

2020-09-11
新华网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (algorithmic routing and delivery time estimation systems) used by food delivery platforms. However, it does not describe any realized harm such as injury, rights violations, or disruption caused by these AI systems. Instead, it discusses the pressures and challenges faced by delivery riders due to the design and use of these systems, and the platforms' responses to these concerns. There is no indication that the AI systems malfunctioned or caused harm, nor that there is a credible risk of future harm. The article primarily provides contextual and response information about AI use in delivery platforms, making it Complementary Information rather than an Incident or Hazard.

Meituan Waimai: we will further optimize the system and leave riders 8 minutes of flexible time

2020-09-09
3w.huanqiu.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI-based dispatch system that schedules delivery riders and is being optimized to improve safety and flexibility. However, the article does not report any actual harm caused by the AI system, nor does it describe a specific incident or malfunction leading to harm. Instead, it details planned improvements and responses to public concerns, which fits the definition of Complementary Information as it provides updates and governance responses related to AI system use and safety.

Guangming commentary: consumers taking pity on riders trapped by algorithms is merely "the gentleman blaming the cook"

2020-09-10
光明网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithms used to manage delivery riders, which qualifies as AI systems under the definition since these algorithms infer from input data to optimize rider dispatch and performance. However, the article does not describe any direct or indirect harm caused by these AI systems, such as injury, rights violations, or other harms. Nor does it describe a credible risk of future harm from these systems. Instead, it focuses on public discourse, consumer perceptions, and platform commitments to improve algorithmic management. Therefore, this is best classified as Complementary Information, providing context and societal response to AI system use rather than reporting an AI Incident or AI Hazard.

Guangming commentary: delivery riders trapped in the system need more than an extra five minutes' wait

2020-09-09
光明网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an algorithmic system that optimizes delivery times, which can be reasonably inferred as an AI system due to its data-driven, performance-evaluation nature. The system's use leads to real harm: increased accidents and fatalities among delivery riders, which is harm to health and safety (a). The system's malfunction or design failure to account for safety risks and its pressure on riders to engage in dangerous behaviors directly contributes to these harms. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Who will be the riders' "knight"?

2020-09-10
人民网
Why's our monitor labelling this an incident or hazard?
The delivery platform uses AI algorithms to optimize delivery routes and times, which directly leads to riders being pressured to deliver faster, sometimes dangerously (e.g., encouraged to take risky routes like going against traffic). This use of AI in the system has caused real harm to the riders' health and well-being, as described by the article's depiction of their exhaustion, stress, and risk-taking. The harm is directly linked to the AI system's use and its operational decisions, making this an AI Incident rather than a hazard or complementary information.

Meituan will leave riders 8 minutes of flexible time: solve the problem, don't look for excuses

2020-09-09
中关村在线
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meituan's dispatch and delivery optimization system) used in managing delivery riders. However, the article primarily reports on the company's measures to optimize the system and improve rider safety and working conditions, which are responses to previously raised concerns. There is no report of actual harm caused by the AI system, nor a credible risk of future harm described. Therefore, this is not an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides updates on societal and governance responses to AI system impacts in the delivery sector.

Shanghai Consumer Protection Committee on Ele.me's "wait five more minutes": the logic is flawed

2020-09-09
中关村在线
Why's our monitor labelling this an incident or hazard?
The article discusses the Shanghai Consumer Protection Committee's evaluation of a feature introduced by the food delivery platform Ele.me, which involves an AI system managing delivery logistics. The critique centers on the logic of making consumers bear the consequences of delivery rider faults, which is a governance and ethical issue rather than a direct or indirect AI incident or hazard. There is no report of injury, rights violation, or other harms caused by the AI system, nor a credible risk of such harm. Therefore, this is best classified as Complementary Information, as it provides societal/governance response to an AI-related platform's policy.

最前线 | Meituan: the dispatch system will leave riders 8 minutes of flexible time, and riders may stop taking orders in severe weather

2020-09-09
36氪
Why's our monitor labelling this an incident or hazard?
The dispatch system is an AI system involved in scheduling and routing delivery riders. The article discusses the system's use and the company's efforts to improve it to prevent harm such as rider safety risks and unfair assessments. However, no concrete harm or incident is described as having occurred due to the AI system. The focus is on the company's response and planned improvements, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Shanghai Consumer Protection Committee on Ele.me's "wait 5 more minutes": consumers should not be made to bear the responsibility

2020-09-09
中关村在线
Why's our monitor labelling this an incident or hazard?
The event involves an AI-enabled platform (Ele.me) that uses algorithmic dispatch and delivery management, which can be reasonably inferred as AI systems. However, the article does not report any injury, rights violation, or other harms caused by the AI system's malfunction or use. Instead, it discusses a new feature and the social debate around responsibility allocation between the platform, riders, and consumers. This fits the definition of Complementary Information, as it provides context and societal response to AI system use in food delivery platforms without describing a specific AI Incident or AI Hazard.

Ele.me to launch "wait 5 more minutes" feature, criticized for shifting responsibility onto consumers

2020-09-09
中关村在线
Why's our monitor labelling this an incident or hazard?
The article involves an AI system implicitly through the delivery time algorithm that sets strict time limits for riders, which contributes indirectly to labor and safety concerns. However, no direct or indirect harm caused by the AI system's malfunction or misuse is reported as having occurred. The feature introduced is a response to public pressure and aims to mitigate existing tensions but does not itself represent a harm or a plausible future harm scenario. The focus is on societal and governance responses to AI-driven labor management issues, fitting the definition of Complementary Information rather than an Incident or Hazard.

Ele.me responds again on rider policy changes: hopes to give users the choice

2020-09-10
中关村在线
Why's our monitor labelling this an incident or hazard?
Although Ele.me likely uses AI systems for delivery logistics and rider management, the article does not describe any realized harm or incident caused by AI. The policy change and feature are still in planning or testing stages, and no direct or indirect harm has occurred. The article mainly provides an update on the company's response to criticism and future plans, which fits the definition of Complementary Information rather than an AI Incident or Hazard.

Renmin University professor: platforms do not treat delivery riders as human beings

2020-09-10
中关村在线
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (algorithms and data-driven dispatch systems) used to manage delivery workers. The use of these AI systems has indirectly led to harm to the health and safety of delivery workers, as they are pressured to follow routes and schedules that may be dangerous, making their job high-risk. This fits the definition of an AI Incident because the AI system's use has directly or indirectly caused harm to a group of people. The article also mentions platform responses, but the primary focus is on the harm caused by the AI-driven management system.

The vanishing two minutes: Meituan confirms Musk's "matrix simulation"...

2020-09-09
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly details AI systems (algorithms, machine learning models, and patented technologies) used by Meituan to manage delivery logistics and rider behavior. These AI systems influence delivery time estimates, workload, and penalties, which have resulted in increased pressure, health risks (e.g., accidents in bad weather), and labor exploitation of delivery riders. The harm is direct and materialized, fulfilling the criteria for an AI Incident under the OECD framework, as it involves injury or harm to a group of people and violations of labor rights caused by the AI system's use.

In Shanghai, one delivery rider is injured or killed every 2.5 days; Ele.me and Meituan "slow down" in turn

2020-09-11
中关村在线
Why's our monitor labelling this an incident or hazard?
The article centers on the harm to delivery riders caused by the operational systems of delivery platforms, which rely on AI algorithms for order assignment and time optimization. The pressure from these AI-driven systems to deliver quickly has led to frequent injuries and deaths, a direct harm to people. The platforms' subsequent adjustments to delivery time allowances and reward structures indicate recognition of the AI system's role in these harms. Thus, the event meets the criteria for an AI Incident due to the indirect causation of physical harm through AI system use.

Ele.me launches new "wait 5 more minutes" feature; just now, Meituan responded too!

2020-09-09
焦点房地产网(FOCUS.cn)
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems in the form of delivery platform algorithms that schedule and optimize delivery times. The article documents that these algorithms' use has directly contributed to increased traffic accidents and safety hazards for delivery riders, which is harm to health and safety. The platforms' responses aim to mitigate these harms but do not negate the fact that harm has occurred. Therefore, this is an AI Incident due to the realized harm caused by the AI systems' use in delivery time management and rider scheduling.

Meituan will leave riders 8 minutes of flexible time; 信小呆's one-yuan "China koi" transfer event cancelled; WeChat mini programs now support sharing to Moments

2020-09-10
chinaz.com
Why's our monitor labelling this an incident or hazard?
The Meituan dispatch system is an AI system managing delivery times and rider safety. The article reports on public concerns about rider safety and Meituan's response including system adjustments and new safety tech development. There is no report of actual harm caused by the AI system, only potential risks and mitigation efforts. The other topics do not involve AI harms. Thus, the article fits the definition of Complementary Information, as it updates on societal and technical responses to AI system use and safety concerns without describing a new AI incident or hazard.

Delivery riders risk their lives: who is "hungry", and who looks "good"?

2020-09-10
m.top.cnr.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an 'intelligent delivery system' algorithm that sets delivery times and enforces penalties on delivery workers, which is an AI system influencing working conditions. The harm is indirect but real, as the system's use leads to stressful, high-risk work environments and economic harm to the workers. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to a group of people (the delivery workers).

Meituan responds on rider issues: delivery time will be extended in severe weather

2020-09-10
chinaz.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meituan's delivery dispatch and routing system) whose use has raised concerns about rider safety and delivery pressures. However, the article does not report any actual harm or incident caused by the AI system; rather, it details the company's acknowledgment of issues and planned improvements to prevent harm. This fits the definition of Complementary Information, as it provides updates and responses related to AI system impacts without describing a new AI Incident or AI Hazard.

Thank you all for your feedback and concern; we are taking action right away

2020-09-09
雪球
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the platform's dispatch system and algorithms used to optimize delivery timing and safety, indicating the involvement of AI systems. However, it does not report any realized harm or incidents caused by the AI system. Instead, it outlines measures to improve and mitigate potential issues, reflecting a proactive governance and improvement approach. Therefore, this is Complementary Information providing updates and responses related to AI system use and its impact on delivery riders, rather than reporting an AI Incident or AI Hazard.

Ele.me says it hopes to leave the choice to users; the platform cannot tell whether a user is in a hurry

2020-09-10
chinaz.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of an algorithm managing delivery times and rider schedules. The controversy and concerns relate to the potential for harm (e.g., rider safety risks) due to the system's design and operational decisions. However, the feature allowing users to extend wait times is not yet implemented, and no direct or indirect harm has been reported as occurring. Therefore, this situation represents a plausible risk or concern about AI system use rather than an actual incident. The article mainly provides context, public reaction, and regulatory commentary on the AI system's impact and proposed feature, fitting the definition of Complementary Information rather than an Incident or Hazard.

Ele.me to launch "wait 5 more minutes" feature and offer an incentive mechanism for outstanding riders

2020-09-09
chinaz.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system implicitly, as the delivery platform likely uses AI-driven order management and rider assignment systems to optimize delivery times and performance. However, the article does not describe any harm or potential harm caused by the AI system, nor does it indicate any incident or hazard related to AI malfunction or misuse. Instead, it reports a new feature and incentive mechanism aimed at improving rider conditions, which is a positive development and does not constitute an incident or hazard. Therefore, this is best classified as Complementary Information, providing context on societal and governance responses to AI-driven delivery systems.

Meituan responds to the delivery rider controversy, saying the system will leave riders 8 minutes of flexible time

2020-09-09
chinaz.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI-based dispatch system and safety technology under development, indicating AI system involvement. However, it does not describe any realized harm or direct/indirect incident caused by the AI system. Nor does it present a credible imminent risk of harm. The main content is about the company's response and planned improvements, fitting the definition of Complementary Information, which provides context and updates without reporting new harm or hazards.

Meituan responds to the rider controversy: the dispatch system will leave riders 8 minutes of flexible time

2020-09-09
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The dispatch system uses algorithms to schedule deliveries and allocate time buffers, which implies the involvement of an AI system or advanced algorithmic decision-making. The article does not report any realized harm or incident caused by the AI system but rather outlines measures to improve safety and rider welfare. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information providing updates on responses and improvements related to an AI system.

Delivery riders trapped in accidents: is an accident during delivery a work injury, and who is responsible?

2020-09-09
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly references the role of algorithmic systems in managing delivery riders but does not describe any direct or indirect harm caused by AI system malfunction or misuse. The harms discussed (injuries from traffic accidents, insurance disputes, labor rights issues) are linked to the platform's labor and insurance practices rather than AI system failures or misuse. The algorithmic dispatch system is mentioned as a background factor influencing rider behavior and risk exposure but not as the direct cause of harm. The article provides legal and social context on how AI-driven platform labor models affect workers' rights and responsibilities, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Meituan Waimai announces five measures to improve its delivery system

2020-09-09
东方财富网
Why's our monitor labelling this an incident or hazard?
The article discusses the use and improvement of AI-driven dispatch and safety systems within Meituan's delivery platform. However, it does not report any actual harm or incidents caused by these AI systems. Instead, it focuses on planned or ongoing improvements to prevent harm and enhance safety and fairness. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information about governance and system optimization in response to prior concerns.

Considerate consumers refuse the blame that delivery platforms are passing on

2020-09-09
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The delivery platform uses an AI system to optimize delivery times, which has led to increasingly unrealistic and unsafe delivery schedules. This system's use has directly caused harm by pressuring delivery workers into dangerous behaviors, resulting in more traffic accidents. The article highlights the causal link between the AI system's scheduling and the harm to workers, fulfilling the criteria for an AI Incident. The platform's mitigation attempt (adding a 'willing to wait longer' button) does not negate the existing harm but rather shifts blame, reinforcing the classification as an incident rather than a hazard or complementary information.

A Meituan rider "trapped in the system": driven by necessity to earn a little more

2020-09-08
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a system that automatically dispatches orders and enforces strict delivery time limits, with penalties for late deliveries. This system is AI-driven or algorithmic in nature, as it manages order allocation and timing constraints dynamically. The pressure from this system causes riders to engage in risky behavior, such as traffic violations, leading to accidents and injuries. The harm (injury and death of riders) is directly linked to the AI system's use in managing delivery times and penalties. Hence, this qualifies as an AI Incident due to indirect harm to health caused by the AI system's operational use.

Renmin University professor: platforms do not treat delivery riders as human beings

2020-09-10
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of algorithms used to manage and schedule delivery riders. The use of these algorithms has indirectly led to harm by imposing unsafe routes and disregarding rider welfare, which can be considered harm to persons (injury or harm to health). The platforms' responses indicate recognition of these harms and attempts at mitigation. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to people (riders).

The delivery riders' plight is not just a problem for algorithms and new occupations to solve

2020-09-09
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article clearly describes how AI-driven delivery time algorithms and order allocation systems impose strict time constraints on delivery riders, leading to physical and economic harm. The riders face high-risk working conditions exacerbated by algorithmic pressure to deliver quickly, sometimes at the cost of their safety and income. The platform's algorithmic management is a direct factor in these harms. Although the article also discusses responses and potential improvements, the primary focus is on the realized harms caused by AI system use in the delivery ecosystem. Hence, this is an AI Incident due to indirect harm to health and labor rights caused by AI system use.

Ele.me: 5 minutes; Meituan: 8 minutes... who can resolve the delivery speed race?

2020-09-10
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (dispatch and routing algorithms) used by food delivery platforms that directly contribute to unsafe working conditions for delivery riders, resulting in injuries and deaths. The AI system's use in scheduling and time pressure is a direct factor causing harm to people (riders), fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential. The article also discusses the platforms' responses and societal reactions, but the core event is the harm caused by AI system use in delivery operations.

Who exactly is trapped in the system?

2020-09-09
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems as it discusses intelligent dispatch and delivery optimization systems used by food delivery platforms like Meituan and Ele.me. The systemic pressures and operational challenges described stem from the use and design of these AI systems. However, the article does not report a concrete incident of harm caused by AI, nor does it describe a specific hazard event with plausible imminent harm. Instead, it provides a thoughtful commentary on the socio-technical system and the need for improved AI system design and management. Therefore, it fits best as Complementary Information, providing context and analysis relevant to AI impacts and governance without reporting a new incident or hazard.

Experts on delivery riders caught in an "algorithmic dilemma": platforms should improve rider protections

2020-09-11
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (algorithmic management systems used by delivery platforms) that influence labor conditions. However, it does not describe a direct or indirect AI Incident (no realized harm event) nor a specific AI Hazard (no imminent or plausible near-term harm event). Instead, it provides a critical analysis and advocacy for improved governance and worker protections, which fits the definition of Complementary Information as it enhances understanding of AI's societal impacts and governance needs without reporting a new harm or hazard.

Has food delivery become a "high-risk occupation"? Two delivery platforms respond

2020-09-10
xiaofei.people.com.cn
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI-based dispatch and timing algorithms that directly influence riders' behavior and income. The strict delivery time requirements enforced by these AI systems have indirectly led to increased traffic violations and accidents among riders, causing injury and harm to their health, which fits the definition of an AI Incident. The platforms' new measures to add flexibility and incentives acknowledge the harm caused and aim to mitigate it, but the harm has already occurred due to the AI system's role in enforcing tight delivery schedules.

谁帮他们祛除"疲于奔命"的辛酸

2020-09-10
人民网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI or algorithmic system used by food delivery platforms that directly leads to harm to the health and safety of delivery riders through strict time constraints and punitive measures. The harm is realized in the form of increased traffic accidents and unsafe working conditions. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm to a group of people.

ÍâÂôÆïÊֳɸßΣְҵ£¿ Á½ÍâÂôƽ̨£ºÉ赯ÐÔʱ¼ä

2020-09-10
人民网
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI-based scheduling and dispatch systems that set strict delivery time targets. These AI systems' use has indirectly contributed to harm by pressuring riders to speed and violate traffic laws, increasing accident risk. The article cites data on rider injuries and traffic violations linked to these pressures. The platforms' announcements to add flexible timing and incentives are responses to this AI-induced harm. Therefore, this qualifies as an AI Incident because the AI system's use has indirectly led to harm to people (riders).

岩松:坚决反对外卖平台甩锅消费者 政府也不该将监管变纵容

2020-09-10
m.top.cnr.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of data algorithms that set delivery time limits and performance evaluations for delivery riders. These algorithms directly influence working conditions and have led to significant harm to workers' health and rights, as indicated by the discussion of the delivery industry as a high-risk occupation and the pressure on riders. This constitutes harm to groups of people (workers) due to the AI system's use, fitting the definition of an AI Incident. The article centers on the realized harm caused by the AI-driven system rather than potential future harm or governance responses alone.

ÍâÂôƽ̨Ö÷¶¯¡°¼õËÙ¡±ÄÜ·ñÆÆ½âÆïÊÖÉú´æÀ§¾Ö£¿

2020-09-11
人民网
Why's our monitor labelling this an incident or hazard?
The delivery platforms employ AI-based systems to manage order assignments, delivery time targets, and rider performance evaluations. These AI systems enforce strict time constraints that pressure riders to engage in unsafe behaviors, causing injuries and fatalities, which constitutes harm to persons. The article documents actual harm (injuries and deaths) linked to these AI-influenced policies. The platforms' recent introduction of features to allow longer delivery times and flexible scheduling is a response to these harms but does not negate the fact that harm has occurred. Thus, the event meets the criteria for an AI Incident due to the AI system's indirect role in causing harm through its use and incentive structures.

外卖界不久的将来或有大变革

2020-09-09
guba.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems managing delivery logistics and timing, which have been criticized for negatively impacting delivery riders. However, the article focuses on the platforms' responses and planned feature updates to address these concerns, rather than reporting new harm or potential harm caused by AI systems. Therefore, this is Complementary Information providing updates on societal and governance responses to AI-related operational issues in food delivery.

深夜回应外卖骑手话题,饿了么倡议多等5分钟

2020-09-09
guba.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of delivery platform algorithms in setting strict delivery time limits that pressure riders, leading to unsafe behaviors and accidents. These algorithms qualify as AI systems because they optimize delivery routes and times and evaluate rider performance. The harm to riders' health and safety is direct and ongoing, fulfilling the criteria for an AI Incident. The platform's response is a mitigation effort but does not negate the existence of harm caused by the AI system's use. Therefore, this event is best classified as an AI Incident.

王兴的算法包身工!1.4万亿市值美团是现代社会

2020-09-09
guba.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly references the algorithmic control and monitoring of delivery riders by Meituan, an AI system managing labor conditions. It describes real harms experienced by workers, such as penalties for late deliveries, unsafe traffic behavior induced by time pressure, and overall oppressive working conditions. These harms relate to labor rights violations and harm to workers' health and safety, fitting the definition of an AI Incident. The AI system's use is central to these harms, not merely a potential risk or background context, so it is not a hazard or complementary information.

饿了么、美团回应外卖骑手困在系统里

2020-09-09
guba.com.cn
Why's our monitor labelling this an incident or hazard?
The delivery platforms use algorithmic systems to manage orders and delivery times, which can be reasonably inferred to involve AI systems. The article highlights concerns about the pressure these systems place on riders, which is a known issue related to AI-driven gig economy platforms. However, no specific incident of harm caused by the AI system is described, nor is there a clear plausible risk of harm detailed. The main focus is on the platforms' responses and planned changes to mitigate rider stress. This fits the definition of Complementary Information, as it updates on governance and societal responses to AI-related concerns without reporting a new AI Incident or AI Hazard.

送外卖就是与死神赛跑?万亿美团刷屏!"你愿意多给我5分钟吗?"绑架客户还是体谅小哥?美团最新回应

2020-09-09
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of algorithmic systems (AI systems) in setting delivery time constraints and routing that pressure delivery riders into unsafe behaviors, causing real-world injuries and fatalities. The AI system's development and use are directly linked to harm to people, fulfilling the criteria for an AI Incident. The platforms' subsequent algorithmic changes and safety improvements are responses to this harm but do not negate the incident classification. Therefore, this event is best classified as an AI Incident.

饿了么将推2项新功能为外卖骑手减压:消费者拥有更多决定权

2020-09-09
金融界网
Why's our monitor labelling this an incident or hazard?
The announcement involves an AI system managing order timing and rider performance, as the platform likely uses AI algorithms to estimate delivery times and manage rider assignments. However, the event does not describe any harm or potential harm caused by the AI system, nor does it report any incident or risk. Instead, it is about new features aimed at improving rider conditions and customer experience, which is a governance or operational update without direct or plausible harm. Therefore, it qualifies as Complementary Information rather than an Incident or Hazard.

外卖骑手困在系统里?饿了么、美团回应了

2020-09-09
东方财富网
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the form of delivery time estimation algorithms, machine learning models for surge prediction, and AI-driven task guidance via voice prompts. These systems are used in the development and use phases, shaping the working conditions of delivery riders. The harms described include health risks from pressure to deliver quickly, penalties for delays during adverse weather, and labor exploitation, which fall under harm to health and violations of labor rights. Although no single acute incident is described, the systemic and ongoing nature of these harms caused by AI systems' operation qualifies this as an AI Incident. The companies' responses and new features are complementary information but do not negate the presence of harm.

透视外卖骑手的系统困境:平台、用户、骑手的三方合谋,此题无解

2020-09-10
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of algorithmic dispatch and potential autonomous delivery devices. The systemic pressure on riders and algorithmic scheduling are AI system uses that contribute to challenging working conditions, but no direct or indirect harm event is reported as having occurred. The mention of future autonomous delivery patents suggests plausible future AI hazards but does not describe an immediate hazard or incident. The main focus is on describing systemic issues and company responses, which aligns with Complementary Information rather than an Incident or Hazard.

系统之外,400万外卖骑手在这个社区吹水吐槽

2020-09-10
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Meituan's order dispatch platform) that affects delivery riders' work and earnings. There are mentions of system bugs and penalties linked to AI-driven order management, which indirectly impact riders financially and emotionally. However, no direct or indirect harm as defined (such as injury, rights violations, or property/community/environmental harm) is reported. The article focuses on the riders' community discussions, platform responses, and social dynamics rather than a specific harmful event or a credible future risk. Thus, it fits the definition of Complementary Information, providing valuable context and updates on AI system use and its societal effects without constituting an AI Incident or AI Hazard.

美团回应来了:给骑手留出8分钟弹性时间,并改进骑手奖励模式 | 钛快讯

2020-09-09
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The dispatch system is an AI system that algorithmically manages delivery timing and rider assignments. The article highlights that the system's current operation has contributed to unsafe conditions for riders, which is a harm to health and safety (an AI Incident). Meituan's response is a complementary update on mitigation but does not negate the fact that harm has occurred due to the AI system's use. Therefore, the event qualifies as an AI Incident because the AI system's use has indirectly led to harm to riders' health and safety.

谁能解决外卖平台的"困局"?

2020-09-09
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how the AI-driven dispatch and scheduling systems on food delivery platforms impose tight delivery times and unrealistic routing (e.g., calculating straight-line distances without accounting for real obstacles), which pressure riders to violate traffic rules and increase accident risk. This is a direct link between the AI system's use and harm to health and safety of delivery riders, fulfilling the criteria for an AI Incident. The platforms' responses to modify the algorithms and add buffer times further confirm the AI system's role in causing harm. Therefore, this is not merely a potential hazard or complementary information but a realized AI Incident.

美团饿了么没办法无视争议了

2020-09-10
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used by food delivery platforms to optimize delivery times and order assignments. These systems' design and use have led to indirect harms: pressure on riders causing unsafe behaviors and potential risks to public safety. The platforms' responses to criticism involve modifying AI system parameters to reduce these harms. Since the AI systems' use has directly or indirectly led to harm (to riders and potentially to public safety), this qualifies as an AI Incident under the OECD framework.

外卖小哥被算法奴役:饿了么和美团要想改变现状,有必要加强外部监管

2020-09-09
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of algorithmic dispatch and route-planning systems by food delivery platforms that directly lead to harm to delivery workers, including traffic accidents and unsafe working conditions. The AI system's role is pivotal in creating unreasonable delivery expectations and routes that cause workers to take dangerous risks. The harm is realized and ongoing, meeting the criteria for an AI Incident. The platforms' partial mitigation efforts do not negate the direct link between the AI system's use and the harm caused.

我为什么不愿多给外卖骑手5分钟?

2020-09-10
南方网
Why's our monitor labelling this an incident or hazard?
The food delivery platform's system uses algorithmic scheduling and performance metrics to compress delivery times, which directly pressures riders to take unsafe actions, causing harm to their health and safety. This constitutes harm to a group of people (riders) due to the use of an AI system (the platform's algorithmic management). Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm to people (riders' health and safety).

美团回应骑手问题:没做好就是没做好 将为骑手留出8分钟弹性时间

2020-09-09
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-driven delivery dispatch system that pressures riders to meet strict delivery times, leading to unsafe behaviors and risks to rider health and safety, which is a direct harm caused by the AI system's use. The company's response to optimize the system and improve safety measures confirms recognition of this harm. Therefore, this event meets the criteria for an AI Incident due to direct harm to people caused by the AI system's use.

骑手文章刷屏,美团称暂不回应,饿了么回应了...

2020-09-09
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI-driven algorithms to manage and optimize delivery times and routes, which directly influences riders' behavior and working conditions. The article reports that this system causes riders to engage in risky behavior, effectively making their job a high-risk occupation, which constitutes harm to health and safety (a). The harm is directly linked to the AI system's use in managing deliveries. Therefore, this qualifies as an AI Incident. The response by Ele.me is complementary information but does not negate the incident classification.

聚焦外卖平台发声明“多等骑手5分钟”引争议背后

2020-09-10
南方网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an algorithmic system (AI system) by the delivery platform that drives riders to speed and take risks, causing safety accidents and labor hardships. This constitutes direct harm to the health and safety of individuals (delivery riders and potentially others), fulfilling the criteria for an AI Incident. The platform's algorithmic pressure and resulting rider behavior are central to the harm described. The subsequent platform response and public debate are complementary but do not change the classification of the original event as an AI Incident.

白岩松谈外卖小哥为抢时间拼命!饿了么和美团作回应

2020-09-10
万家热线-合肥第一门户-合肥专业网络媒体
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems in the form of delivery time optimization algorithms that directly influence delivery workers' behavior and safety. The harm (increased traffic accidents and risks to delivery workers) has already occurred and is linked to the AI systems' design and use. Therefore, this qualifies as an AI Incident because the AI system's use has indirectly led to harm to people (delivery workers). The article also discusses responses and potential mitigation but the primary focus is on the harm caused by the AI-driven system.

美团回应外卖问题:五项举措改进配送系统

2020-09-11
赛迪网
Why's our monitor labelling this an incident or hazard?
The event involves an AI-based dispatch system used for delivery logistics, which qualifies as an AI system. However, the article does not report any realized harm or incident caused by the AI system. Instead, it details the company's response to concerns and planned system optimizations to improve safety and rider welfare. This fits the definition of Complementary Information, as it provides updates on societal and governance responses and improvements related to an AI system without describing a new AI Incident or AI Hazard.

影视中的“数字劳工”:我们都困在系统里,但愿我们都不屈服

2020-09-11
The Paper
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems embedded in food delivery platforms that assign orders, monitor rider behavior, and enforce strict performance metrics, which directly lead to harms such as physical injury risks, labor rights violations, and exploitation. These harms have materialized and are ongoing, fulfilling the criteria for an AI Incident. The AI system's use in labor management is central to the described harms, including health risks and violations of labor rights, making this an AI Incident rather than a hazard or complementary information.

饿了么:将发布多等5分钟功能 还将为骑手提供鼓励机制

2020-09-09
Techweb
Why's our monitor labelling this an incident or hazard?
The article discusses a new feature and incentive mechanism introduced by Ele.me to address concerns about the pressure on delivery riders caused by algorithmic systems. While AI systems are involved in the delivery management and timing, the event itself does not describe any direct or indirect harm caused by AI, nor does it indicate a plausible future harm. Instead, it is a response to prior concerns and aims to improve conditions, making it complementary information rather than an incident or hazard.

专家谈外卖骑手陷“算法困境”:平台应提升外卖员保障

2020-09-10
The Paper
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of delivery platform algorithms that manage and optimize delivery tasks. These algorithms' design and use have indirectly led to harm to the health and safety of delivery riders by imposing demanding delivery schedules and incentivizing risky behavior. The article reports on realized harms and systemic exploitation caused by these AI-driven systems, qualifying it as an AI Incident. The discussion of platform responses and calls for algorithmic governance further supports the presence of harm and the need for remediation.

央视热评:能帮外卖骑手“脱困”的关键不在系统而在人

2020-09-10
The Paper
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how the delivery platforms' algorithmic systems set strict delivery times that pressure riders to violate traffic laws, leading to increased risk of accidents and harm. This is a clear example of an AI system's use indirectly causing harm to people, fitting the definition of an AI Incident. The harm is realized (riders are injured or at risk), and the system's role is pivotal in creating the conditions for this harm. The article also discusses company responses, but the primary focus is on the harm caused by the AI system's use.

平台当牢记:系统是死的,人是活的

2020-09-10
The Paper
Why's our monitor labelling this an incident or hazard?
The delivery platform's algorithmic system is an AI system that directly influences delivery riders' behavior by imposing strict time constraints and penalties. This has led to riders engaging in dangerous behaviors such as running red lights and speeding, resulting in increased traffic accidents and health risks. The article clearly describes realized harm to people (injury risk) and labor rights violations due to the system's design and use. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to health and labor rights violations.

1920条外卖投诉里,骑手和顾客都很受伤

2020-09-09
The Paper
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of algorithmic systems (AI systems) that automatically detect and penalize riders for supposed infractions like cheating or early delivery clicks. These automated decisions cause direct financial harm to riders and contribute to stressful and unsafe working conditions (e.g., racing against time, risking traffic violations). The system's errors and lack of human intervention in appeals exacerbate these harms. Therefore, the AI system's use has directly led to harm to a group of people (riders), fitting the definition of an AI Incident under harm to health and communities. The article does not merely discuss potential harm or general AI ecosystem issues but documents realized harm caused by the AI system's operation.

如果延长五分钟,他们可能就会多捎一单,照样逆行闯红灯。

2020-09-10
The Paper
Why's our monitor labelling this an incident or hazard?
The delivery platforms' algorithmic systems set strict delivery time requirements and monitor rider performance, which directly influences riders to engage in unsafe behaviors like running red lights to avoid fines. This system-driven pressure has led to a rise in traffic accidents involving delivery workers, constituting harm to persons. The AI system's role is pivotal as it governs the delivery time expectations and penalty enforcement. Hence, this is an AI Incident due to indirect harm caused by the AI system's use in managing delivery logistics and penalties.

外卖骑手困在系统里?饿了么将发布用户多等5-10分钟功能

2020-09-09
The Paper
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI systems to estimate delivery times and enforce penalties for late deliveries, which directly impacts riders' working conditions and safety, constituting harm to health and labor rights. The article reports on realized harm (stress, safety risks) caused indirectly by the AI system's use in managing delivery times. Therefore, this qualifies as an AI Incident. The new feature is a response to this harm but does not negate the incident classification.

饿了么多等5分钟功能引热议,有人称“这是卖情怀”

2020-09-09
The Paper
Why's our monitor labelling this an incident or hazard?
The article involves an AI system implicitly, as the delivery time estimates and the new feature are managed by the platform's algorithmic system, which likely includes AI components for logistics and timing predictions. However, the event does not describe any harm caused by the AI system's malfunction, misuse, or development. Instead, it reports on a new feature intended to improve courier safety and customer choice, with no direct or indirect harm reported or plausible future harm indicated. The discussion centers on platform policy and user experience rather than an incident or hazard. Thus, it fits the definition of Complementary Information, providing an update on AI system use and societal responses without constituting an AI Incident or AI Hazard.

马上评|用户愿意“多等5分钟”,但企业不能推卸责任

2020-09-09
The Paper
Why's our monitor labelling this an incident or hazard?
The article explicitly references the platform's algorithmic system that sets delivery times and routes, which influences rider behavior and safety risks. This qualifies as an AI system influencing real-world outcomes. However, the article does not report a specific AI Incident where harm has directly or indirectly occurred due to the AI system's malfunction or misuse. Instead, it discusses the broader systemic issues, public reactions, and regulatory responses to these AI-driven pressures. This fits the definition of Complementary Information, as it enhances understanding of AI's societal impact and governance without describing a new incident or hazard.

商评|外卖平台改规则,骑手真能逃脱"算法"?

2020-09-09
caixin.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the algorithmic dispatch and reward system) whose use has indirectly led to harm to the health and safety of delivery riders, a recognized high-risk group. The system's pressure causes riders to rush, increasing accident risk, which constitutes harm under the AI Incident definition. The platform's response and planned changes are complementary information but do not negate the incident classification. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's use.

饿了么将推出多等5分钟功能 用优秀骑手个别订单超时不担责

2020-09-09
Techweb
Why's our monitor labelling this an incident or hazard?
The article involves an AI system implicitly through the mention of algorithmic and data-driven delivery systems that influence rider behavior and delivery timing. However, the event itself is about a new feature to mitigate pressure on riders and improve service, not about a harm or a plausible harm caused by AI. There is no direct or indirect harm reported from the AI system's use or malfunction, nor is there a credible risk of future harm described. Instead, it is a response to known issues, aiming to improve conditions. Therefore, this is Complementary Information as it provides context and a governance/response update related to AI-driven delivery systems.

外卖骑手离得开这个系统吗?

2020-09-10
英国金融时报中文版
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how algorithms and data-driven systems control and pressure delivery riders, leading to dangerous behaviors and traffic accidents that threaten their lives. The AI system's role in enforcing delivery time constraints and penalizing riders directly contributes to these harms. This fits the definition of an AI Incident, as the AI system's use has directly led to injury or harm to a group of people (the riders).

拿命博钱?外卖员已成高危职业!究竟是谁的错?

2020-09-08
雪球
Why's our monitor labelling this an incident or hazard?
The AI system is involved in calculating delivery distances and times, which directly influences the delivery workers' schedules and pay. The system's use indirectly leads to harm to the health of delivery workers through high-intensity work pressure and stress caused by strict time constraints and penalties for delays. This constitutes an AI Incident because the AI system's use has contributed to harm to a group of people (delivery workers).

2020-09-09
雪球
Why's our monitor labelling this an incident or hazard?
The platform's delivery management system likely uses AI algorithms to optimize order assignments and delivery timing. The concern about whether drivers will comply with the intended slower delivery to improve safety or exploit the extra time to take more orders relates to the AI system's design and use. However, no actual harm or incident has occurred yet; the article discusses a potential issue or challenge in the system's use and design that could plausibly lead to safety risks if not properly managed. Therefore, this qualifies as an AI Hazard, as it highlights a plausible future harm related to the AI system's use in delivery timing and driver behavior management.

饿了么多等5分钟功能引热议,有人称“这是卖情怀”

2020-09-09
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves an AI system implicitly, as the delivery time estimates and order management are controlled by the platform's system, which likely uses AI algorithms for routing and timing predictions. However, the event does not describe any direct or indirect harm caused by the AI system, nor does it indicate a plausible future harm. Instead, it reports on a new feature designed to mitigate existing risks and improve safety, with no incident or hazard occurring. Therefore, this is best classified as Complementary Information, providing context and updates on AI system use and societal responses in the food delivery ecosystem.

专家谈外卖骑手陷"算法困境":平台应提升外卖员保障

2020-09-10
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly references algorithmic systems used by food delivery platforms that influence delivery riders' working conditions, creating a scenario where riders face increased risks and pressures. This constitutes harm to a group of people (the delivery riders) due to the AI system's use and design. The harm is indirect but real, as the algorithmic prioritization of consumers over riders leads to unsafe working conditions and exploitation. Therefore, this qualifies as an AI Incident under the definition of harm to groups of people caused by the use of AI systems.

美团宣布改进外卖配送系统:每单给骑手留8分钟弹性时间

2020-09-09
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in Meituan's delivery dispatch and scheduling system, which is being improved to better accommodate rider safety and operational challenges. However, the article does not report any realized harm or incidents caused by the AI system, nor does it describe a plausible imminent risk of harm. Instead, it focuses on system improvements and safety enhancements, which are responses to prior concerns. Therefore, this is best classified as Complementary Information, providing updates on AI system improvements and governance responses rather than reporting an AI Incident or AI Hazard.

外卖骑手:平台转嫁顾客"买单"不合理

2020-09-10
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the impact of algorithmic systems (AI systems) used by delivery platforms to set delivery times, which directly influence rider behavior and safety. The increased number of traffic accidents among riders is a harm to their health and safety, fitting the definition of an AI Incident. The AI system's use in controlling delivery time and order management is a contributing factor to this harm. Therefore, this event qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's use.

外卖骑手揭开了共享经济用工模式的弊病

2020-09-10
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how the algorithmic systems of food delivery platforms drive riders to work under unsafe conditions, leading to health and safety risks. These systems are AI-based or algorithmic management tools that assign tasks and optimize delivery times, directly influencing rider behavior and pressure. The harms include injury risk and labor rights violations due to lack of formal employment and social protections. The AI system's use is central to these harms, fulfilling the criteria for an AI Incident as the harm is realized and linked to the AI system's use. The article also discusses the broader systemic issues of the shared economy labor model driven by these AI systems.

老文新读:如何让外卖骑手更从容地养家糊口

2020-09-10
和讯网
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the form of intelligent dispatch and delivery time optimization algorithms used by delivery platforms. While it discusses real-world harms such as traffic accidents caused indirectly by system-imposed delivery pressures, it does not attribute these harms directly to AI malfunction or misuse, nor does it report a specific AI-driven incident causing harm. Instead, it reports on platform responses and system adjustments to reduce risks, which fits the definition of Complementary Information. The article also provides background on the development and use of AI in logistics platforms, rider experiences, and ongoing efforts to improve safety and working conditions, all of which enhance understanding of AI's societal implications without describing a new incident or hazard.

美团回应骑手事件:优化系统给骑手8分钟弹性时间

2020-09-09
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the AI-driven dispatch system's role in setting delivery times that pressured riders into unsafe behaviors, causing harm (traffic accidents and rider injuries). The system's use directly contributed to these harms, making this an AI Incident. The current article focuses on the company's response and system optimization to mitigate these harms, which is complementary information to the prior incident. However, because the original harm occurred through the AI system's use, and this article references both that harm and the response, the event is still classified as an AI Incident on the basis of realized harm linked to the AI system's operation.

Considerate Consumers Won't Take the Blame the Delivery Platforms Are Shifting!

2020-09-09
和讯网
Why's our monitor labelling this an incident or hazard?
The delivery platform uses an AI system to set and enforce delivery time targets that are increasingly stringent. This system's use directly leads to delivery workers taking dangerous risks to meet these targets, resulting in a rise in traffic accidents and harm to their health and safety. The harm is realized and ongoing, meeting the criteria for an AI Incident. The platform's attempt to mitigate the issue by asking consumers to accept longer wait times does not remove the AI system's causal role in the harm. Hence, the event is best classified as an AI Incident.

Capital, Algorithms, and Humanity: The "Impossible Triangle"

2020-09-10
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the delivery platform's algorithm) that controls delivery times and routes, influencing rider behavior. The system's strict timing and routing requirements have directly led to physical harm (injuries and deaths) among delivery riders, fulfilling the criteria for an AI Incident under harm to health (a). The article discusses the direct consequences of the AI system's use and the resulting harm, not just potential risks or general commentary, making this an AI Incident rather than a hazard or complementary information.

Exposing the Rider Business: The Wildly Growing Flexible-Labor Market

2020-09-11
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the system algorithms and data used by food delivery platforms to manage riders, which qualify as AI systems under the definition. The harms include injury risks from traffic accidents linked to algorithmic pressure, as well as labor exploitation, which are direct or indirect harms to people and violations of labor rights. The AI system's use in dispatching and performance management is a contributing factor to these harms. Hence, this is an AI Incident rather than a hazard or complementary information, as the harms are ongoing and realized.

Ele.me: Wait 5 More Minutes; Meituan: 8 Minutes of Flexible Time

2020-09-09
和讯网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of delivery platform algorithms (AI systems) in setting delivery time expectations that pressure riders to take unsafe actions, leading to increased accidents and injuries. This constitutes indirect harm to the health and safety of a group of people (delivery riders), fulfilling the criteria for an AI Incident. The platforms' responses are complementary information but do not negate the incident classification. The presence of AI systems is clear from the description of algorithmic dispatch and timing controls. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard. The article is not just general AI news or product launch, so it is not Unrelated or Complementary Information alone.

2020-09-10
和讯网
Why's our monitor labelling this an incident or hazard?
The platforms' dispatch and scheduling systems use AI algorithms to optimize delivery times and order assignments, which directly impact the riders' behavior and safety. The pressure to meet AI-determined delivery times leads to risky actions by riders, causing harm to their health and safety. The article documents realized harm and public concern about these AI-driven pressures. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly and indirectly led to harm to persons.

Trapped in the System: Not Just Delivery Riders

2020-09-10
opinion.haiwainet.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (algorithm) that optimizes delivery times and routes, which results in delivery workers being pressured to deliver quickly, causing fatigue and traffic accidents. This is a direct link between the AI system's use and harm to people (harm to health and safety). The harm is realized, not just potential, as accidents have occurred. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to injury or harm to a group of people.

Ele.me Launches a "Wait 5 More Minutes" Button

2020-09-09
hi.online.sh.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (algorithmic order management) that directly influences the working conditions and income of delivery riders, leading to harm (stress, overwork, income loss). This harm is realized and ongoing. The new feature is a mitigation measure but does not change the fact that harm has occurred due to the AI system's use. Hence, this is an AI Incident.

Ele.me Rolls Out the "Wait 5 More Minutes" Feature

2020-09-10
hi.online.sh.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of delivery logistics algorithms that optimize delivery times and incentives. The new feature is a response to previously observed harms (traffic accidents) indirectly linked to AI-driven delivery time pressures. However, the article does not describe any new harm caused or any imminent risk of harm from the AI system itself. Instead, it reports on a governance or operational response to mitigate existing issues. Therefore, this is best classified as Complementary Information, as it provides context and updates on societal and technical responses to AI-related challenges in delivery services, without describing a new AI Incident or AI Hazard.

One Delivery Rider Injured or Killed Every 2.5 Days in Shanghai; "Wait 5 More Minutes" Treats the Symptom, Not the Cause

2020-09-11
上海热线
Why's our monitor labelling this an incident or hazard?
The delivery platform uses an AI system to dynamically calculate delivery times and assign orders, which directly influences riders' behavior and safety. The system's pressure to meet tight deadlines has led to frequent injuries and deaths among delivery riders, a clear harm to health (a). The platform's attempt to mitigate this by adding a consumer 'wait extra 5 minutes' button shifts responsibility but does not remove the AI system's role in causing harm. Therefore, the event involves an AI system whose use has directly led to harm, fitting the definition of an AI Incident.

Customers Waiting Longer Cannot Save Delivery Riders

2020-09-11
上海热线
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (algorithms) in food delivery platforms that drive delivery riders to take unsafe actions, leading to direct harm to their health and safety. The harm is realized and directly linked to the AI system's use in managing delivery logistics and performance expectations. Therefore, this qualifies as an AI Incident due to injury or harm to persons caused indirectly by the AI system's operational use.

Delivery Riders Are Trapped, and Ele.me Has Given the Wrong Answer

2020-09-09
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of the platform's order dispatch algorithm that controls and pressures delivery riders to complete orders rapidly, leading to unsafe working conditions and potential injury or harm to the riders. This fits the definition of an AI Incident because the AI system's use has indirectly led to harm to a group of people (delivery riders). The article details realized harms and systemic issues caused by the AI system's operation, not just potential or future risks. Therefore, this is an AI Incident.

"Wait Five More Minutes" Cannot Save Delivery Riders

2020-09-09
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The food delivery platform's system uses algorithmic decision-making to assign and time orders, which is reasonably inferred as an AI system due to its real-time optimization and data-driven order dispatching. The system's operation directly leads to riders facing unsafe conditions and risks to their personal safety, constituting harm to health (a). The article details these harms as occurring, not hypothetical, and links them to the AI system's use. Therefore, this is an AI Incident involving harm caused by the AI system's use.

Delivery Riders Respond to "Trapped in the System": Parts of the Article Are Inaccurate; We Are Free

2020-09-10
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
Although the article involves algorithmic dispatch systems (which can be considered AI systems), it primarily serves as a response and clarification to previous reports. It does not describe any incident where the AI system caused harm or a hazard that could plausibly lead to harm. The focus is on explaining the operational details, rider autonomy, and platform responses to concerns, without reporting realized or potential harms. Therefore, it fits the category of Complementary Information, providing context and updates rather than reporting a new AI Incident or AI Hazard.

Who Isn't Trapped in a System? China's Internet Lacks Innovation

2020-09-09
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of algorithmic systems and patented AI technologies by Meituan and Ele.me to optimize delivery operations. These AI systems directly influence the working conditions of delivery workers, leading to overwork, stress, and unsafe practices, which are harms to the health and well-being of these workers. The harm is indirect but clearly linked to the AI systems' use in dispatching and routing. Hence, the event meets the criteria for an AI Incident due to indirect harm caused by AI system use.

To Relieve Delivery Riders, Ele.me Will Develop a "Wait 5/10 More Minutes" Feature

2020-09-09
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article discusses a new feature in the delivery platform's system, which likely involves AI for managing delivery times and logistics. However, the event is about a feature to help riders by allowing customers to extend delivery time voluntarily, with incentives. There is no harm or risk described, nor any incident or hazard. This is a product feature update and a positive response to user feedback, thus it is Complementary Information rather than an Incident or Hazard.

Rider Flees the Scene of an Accident; Ele.me Ordered to Pay Over 480,000 Yuan! Court Invokes Delivery Firms' Social Responsibility

2020-09-10
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system through the platform's algorithmic management of delivery riders, which influences their behavior and indirectly causes harm (traffic accidents and injuries). The court ruling confirms the platform's responsibility, linking the AI system's use to actual harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to injury and harm to a person, fulfilling the criteria for an AI Incident under the OECD framework.

Trapped in the System: Far More Than Just Delivery Riders

2020-09-09
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI algorithms and systems are used to calculate and enforce increasingly strict delivery time limits for food delivery workers, which forces them to speed and break traffic rules, resulting in frequent injuries and dangerous conditions. The AI system's development and use have directly contributed to these harms, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and involves injury to people and harm to communities, which fits the definition of an AI Incident rather than a hazard or complementary information.

Riders Risk Their Lives: Who Is "Hungry"? Who Benefits? Who Should Step In? Bai Yansong Raises 13 Questions

2020-09-09
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of advanced algorithms used by food delivery platforms to optimize delivery times and manage workers. These algorithms have directly led to harm by pressuring delivery workers into unsafe and exploitative working conditions, which constitutes harm to health and violations of labor rights. The article reports on these harms and the responses from platforms and experts, making it an AI Incident under the OECD framework because the AI system's use has directly led to realized harm.

They Stole Not Only the Riders' Delivery Time, but Also Their Money

2020-09-11
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly references the use of an intelligent dispatch system (an AI system) that optimizes delivery operations, resulting in reduced labor costs and lower pay for delivery workers. This constitutes indirect harm to labor rights and economic well-being of a large group of people (delivery workers). The harm is realized and ongoing, not merely potential. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to violations of labor rights and harm to a group of people.

Meituan Waimai Finally Responds: Riders Will Get 8 Minutes of Flexible Time

2020-09-09
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the dispatch and scheduling system) used in delivery logistics. The article discusses system optimizations and safety measures to address concerns but does not describe any realized harm or incident caused by the AI system. There is no indication of direct or indirect harm to people, infrastructure, rights, property, or communities. The content is primarily about the company's response and planned improvements, which fits the definition of Complementary Information rather than an Incident or Hazard.

"Wait Five More Minutes" Cannot Save Delivery Riders

2020-09-09
成都全搜索
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the delivery platform's algorithmic order dispatch and timing system) that directly influences the working conditions and safety risks of delivery riders. The system's pressure to maximize order throughput leads to harm in the form of physical safety risks and stress to the riders, which is a form of harm to persons. The article describes realized harm caused indirectly by the AI system's use and management. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to a group of people (delivery riders).

Meituan Responds to Rider Concerns: System Will Be Further Optimized to Give Riders 8 Minutes of Flexible Time

2020-09-09
成都全搜索
Why's our monitor labelling this an incident or hazard?
The article discusses system optimization and safety improvements related to Meituan's delivery platform, which likely involves AI-based dispatch and routing systems. However, it does not describe any realized harm or incident caused by AI system malfunction or misuse. Instead, it focuses on the company's commitments and planned improvements, making it complementary information that updates and contextualizes prior concerns rather than reporting a new AI Incident or Hazard.

How to Solve "Delivery Riders, Trapped in the System"? Experts Suggest…

2020-09-10
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies the platforms' dispatch and routing algorithms as central to the problems faced by delivery riders, including unsafe routes and unreasonable delivery time pressures leading to penalties. These harms affect the riders' health and labor rights, fulfilling the criteria for harm caused directly or indirectly by AI system use. The discussion of platform responses and expert calls for algorithmic transparency and inclusive governance further supports the AI system's pivotal role in the incident. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is ongoing and linked to AI system use.

Ele.me Launches "Wait 5 More Minutes"! Would You Wait for Your Delivery Rider?

2020-09-09
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of algorithmic systems (AI systems) in managing delivery times and rider behavior, which has directly and indirectly caused harm to delivery riders through increased traffic accidents and unsafe working conditions. The harm to health and safety of a group of people (delivery riders) is clearly articulated and linked to the AI system's use and design. The platforms' responses and new features are complementary information but do not negate the fact that harm has occurred. Hence, the event meets the criteria for an AI Incident.

2020-09-09
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI algorithms in food delivery platforms dictate delivery times, routes, and penalties, which have led to unsafe working conditions and increased risks for delivery riders. This constitutes harm to the health and safety of a group of people (delivery workers). The AI systems' use is a direct contributing factor to these harms, fulfilling the criteria for an AI Incident. The article also discusses responses and improvements, but the primary focus is on the realized harm caused by the AI systems' use.

Meituan Waimai: Riders Will Get 8 Minutes of Flexible Time

2020-09-09
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The dispatch system uses AI algorithms to schedule deliveries and evaluate rider performance, which has led to riders taking unsafe risks to meet delivery deadlines, causing harm to their safety and well-being. The article explicitly links the AI system's operation to these harms and the company's acknowledgment and planned remediation measures confirm the AI system's role. Hence, this is an AI Incident due to indirect harm to riders caused by the AI system's use and design.

Delivery Riders Trapped in the System? Meituan Responds

2020-09-09
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Meituan's delivery dispatch and navigation system) and discusses its development and use. However, it does not describe any realized harm or incident caused by the AI system, nor does it indicate a credible risk of future harm. The main content is about the company's response and improvements to the system and rider welfare, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

NBD Commentary | Freeing Riders Depends on Better Algorithms, Not on Shifting Blame to Consumers

2020-09-10
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of algorithms (AI systems) in managing delivery riders and the resulting challenges. However, it does not describe any realized harm (injury, rights violation, disruption, or property/community/environmental harm) caused by these algorithms, nor does it report a near-miss or credible risk of such harm. The focus is on systemic issues and calls for improvement, which fits the definition of Complementary Information as it provides context, critique, and governance-related discussion about AI systems in the delivery platform ecosystem without reporting a specific AI Incident or AI Hazard.

Meituan Waimai: System Will Be Optimized to Give Riders 8 Minutes of Flexible Time

2020-09-09
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an algorithmic dispatch system that drives delivery riders to work under unsafe conditions, causing them to violate traffic laws and face health and safety risks. This is a direct harm to the health and safety of a group of people caused by the AI system's use. The company's announcement of system improvements is a response to this harm but does not itself remove the fact that harm occurred. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.

Ele.me Pushes "Wait Five More Minutes"; Netizens: Shifting the Conflict onto Consumers

2020-09-09
早报
Why's our monitor labelling this an incident or hazard?
The platform uses algorithmic systems to estimate delivery times and assign orders, which qualifies as AI system involvement. The new features relate to managing delivery time expectations and rider penalties, reflecting the AI system's role in operational decisions. However, the article does not report any actual harm (such as injury, rights violations, or significant disruption) caused by the AI system or its malfunction. Nor does it describe a credible risk of future harm directly linked to the AI system. The main focus is on the introduction of new features and public debate about their fairness and effectiveness, which fits the definition of Complementary Information. Hence, the event is not an AI Incident or AI Hazard but Complementary Information.

Weizhou | The Plight of the Weak

2020-09-11
China Digital Times
Why's our monitor labelling this an incident or hazard?
While the article clearly involves AI systems in the form of algorithmic management of delivery riders, it does not report a concrete incident of harm caused by AI, nor does it describe a specific hazard event where harm could plausibly occur. The focus is on social critique and systemic analysis rather than a particular AI incident or hazard. Therefore, it is best classified as Complementary Information, as it provides context and understanding about AI's societal impact without reporting a new incident or hazard.

Animated Explainer | The "System" Logic in 3 Minutes: Why Do Delivery Riders Ride Ever Faster?

2020-09-10
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (O2O instant delivery intelligent dispatch system) that influences delivery times and rider workload. The system's use leads to pressure on riders, which can be interpreted as harm to a group of people (workers) through increased stress or unsafe working conditions. This harm arises from the use of the AI system in managing delivery times and order assignments. Therefore, this qualifies as an AI Incident due to indirect harm to workers caused by the AI system's operational use.

Saoke Wenyi | Giving Delivery Riders Five More Minutes May Squeeze Them Even Harder

2020-09-09
China Digital Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-based management and assessment systems ('外卖管理系统冷酷的考核') that monitor delivery times and customer complaints, which directly influence riders' behavior and working conditions. This system's operation leads to physical risks (dangerous riding practices) and labor rights issues (lack of protections, penalties). The AI system's role is pivotal in causing these harms, fulfilling the criteria for an AI Incident under the OECD framework.

Shenlan Finance | The Delivery Platforms Hold the Knife, Yet Ask Users Whether to Kill

2020-09-09
China Digital Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based intelligent dispatch systems used by the platforms to allocate orders and optimize delivery times. These AI systems' outputs have led to harmful working conditions for delivery riders, including pressure to deliver faster than is safe or reasonable, penalties for delays, and lack of adequate support. The harms include injury or harm to health (stress, accidents from rushing), and violations of labor rights (unfair penalties, lack of labor protections). The AI systems' role is pivotal as they control order distribution and timing expectations. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly and indirectly led to harm.

Meituan Responds to the Rider Controversy

2020-09-10
杭州网
Why's our monitor labelling this an incident or hazard?
The dispatch system used by Meituan is an AI system that schedules delivery times and routes. Its strict timing requirements have indirectly caused harm by pressuring riders to engage in risky behaviors, resulting in traffic accidents and injuries. The company's acknowledgment and planned system changes confirm the AI system's role in causing harm. Therefore, this event qualifies as an AI Incident due to indirect harm to health and safety caused by the AI system's use and design.

Where Do You Think the Problem Trapping Riders Lies?

2020-09-10
sike.news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI algorithms (intelligent dispatching, route planning, and delivery time estimation) used by platforms like Meituan and Ele.me. These AI systems directly influence rider behavior and delivery expectations, leading to time pressure and unsafe conditions. The harm is indirect but real, as riders face safety risks and penalties due to AI-driven scheduling and navigation. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to harm to persons (riders).

Meituan Announces Delivery-System Improvements: Problems with the System Must Be Solved by the People Behind It

2020-09-09
华商网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the delivery dispatch and routing system) and its use, but there is no indication that the AI system has directly or indirectly caused harm or incidents. The article is primarily about the company's response and planned improvements to address potential issues and enhance safety and fairness. Therefore, this is Complementary Information as it provides updates and responses related to the AI system and its ecosystem without describing a new AI Incident or AI Hazard.

To Relieve Delivery Riders, Ele.me Will Develop a "Wait 5/10 More Minutes" Feature

2020-09-09
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The article discusses a new feature in the food delivery platform's system, which likely involves AI for managing delivery times and rider assignments. However, the event does not describe any harm or risk of harm caused by the AI system. Instead, it is a response to concerns about delivery riders being pressured by strict system timings. This is an operational improvement and does not constitute an AI Incident or AI Hazard. It is not a general product launch without context, as it addresses a specific issue, but since no harm or risk is involved, it fits best as Complementary Information.

Delivery Riders Ruled by Algorithms: Ele.me's "Wait 5 More Minutes" Feature Sparks Heated Debate

2020-09-09
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The delivery platform's algorithmic system, which schedules and manages delivery times, is an AI system as it involves automated decision-making affecting rider behavior and delivery performance. The system's pressure on riders has indirectly led to harm by encouraging risky behavior, such as traffic violations and high-stress conditions, which constitute harm to the health and safety of a group of people (delivery riders). Therefore, the event qualifies as an AI Incident due to the AI system's indirect role in causing harm. The new feature and public discussion are responses to this incident but do not negate the existence of harm caused by the AI system's use.

How Have the Internet Giants Come to Control Our Lives?

2020-09-10
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (algorithms controlling delivery riders' workloads, pricing algorithms by Uber, data-driven manipulation by Google and Facebook) that have directly led to harms including labor exploitation, unfair market competition, and manipulation of user behavior and political views. These constitute violations of labor rights, economic harm, and harm to communities, all fitting the AI Incident definition. The harms are ongoing and documented, not merely potential. Hence, the classification as AI Incident is appropriate.

Meituan Waimai Announces Five Measures: Dispatch System Will Give Riders 8 Minutes of Flexible Time

2020-09-09
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the dispatch system with scheduling algorithms) whose use is being optimized to improve rider safety and working conditions. There is no report of any harm or incident caused by the AI system; rather, the company is proactively addressing concerns and improving the system. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information, as it provides updates on societal and technical responses to AI-related concerns in the delivery platform ecosystem.

Who Is Making Delivery Riders "Race Against Their Lives"?

2020-09-09
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI-driven algorithms and data systems used by delivery platforms determine delivery times and order assignments, creating a high-pressure environment for delivery riders. This AI system's use has directly led to harm risks for riders, including traffic violations and life-threatening situations. The harm is realized or ongoing, as riders are described as racing against time and risking their lives due to the AI system's operational logic. Therefore, this qualifies as an AI Incident under the definition of harm to health and safety caused directly or indirectly by the use of AI systems.

Exclusive Response | Ele.me: We Want to Leave the Choice to Users; Riders Cannot See the Added Time

2020-09-10
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
While the platform likely uses AI or algorithmic systems to manage delivery logistics and timing, the article does not report any direct or indirect harm caused by these AI systems, nor does it describe a plausible future harm from AI malfunction or misuse. The discussion centers on operational improvements and user choice to benefit delivery workers, with no evidence of an AI incident or hazard. Therefore, this is best classified as Complementary Information, providing context on ongoing AI-related system adjustments and responses to public concerns.

Delivery Rider Safety: The Market's Pain Point, Firms' Innovation Flashpoint

2020-09-11
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
While the article references algorithmic systems and incentive mechanisms that influence delivery riders' behavior, it does not report any realized harm or incident caused by AI systems. The discussion centers on market pain points and potential solutions, including algorithmic improvements and insurance collaborations, which are prospective and strategic rather than describing an actual AI-related harm event. Therefore, this is best classified as Complementary Information, providing context and insight into AI system use and challenges in the delivery platform ecosystem without reporting a specific AI Incident or Hazard.

"Delivery Riders, Trapped in the System" Goes Viral; Meituan Responds: We Fell Short, No Excuses

2020-09-09
金融界网
Why's our monitor labelling this an incident or hazard?
The dispatch system algorithm is an AI system that directly influences delivery riders' routes, timing, and penalties. The article details how this system's use has led to unsafe conditions and high-risk behavior among riders, constituting harm to their health and safety. The platform's acknowledgment and planned improvements confirm the system's role in causing harm. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's use.

Meituan and Ele.me, Trapped in the PR System

2020-09-10
金融界网
Why's our monitor labelling this an incident or hazard?
Although the delivery platforms likely employ AI systems for logistics and task assignment, the article does not explicitly or implicitly attribute the harms (traffic accidents, rider stress) directly or indirectly to AI system malfunction, misuse, or development. The focus is on the platforms' public relations strategies and social responses to the investigative report about delivery riders' working conditions. No new AI incident or hazard is described; rather, the article provides complementary information about corporate and societal reactions to the issue. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Meituan Waimai: System Will Be Continuously Optimized to Guarantee Riders Time to Ride Safely

2020-09-09
金融界网
Why's our monitor labelling this an incident or hazard?
The dispatch system uses algorithms and data-driven scheduling, which qualifies as an AI system under the definition. The system's operation has indirectly led to harm by pressuring riders into unsafe behavior, constituting harm to health and safety (a). The article reports realized harm (riders racing against death, high-risk occupation) caused indirectly by the AI system's use. Meituan's response focuses on mitigating these harms through system optimization and safety measures. Therefore, this event qualifies as an AI Incident due to the AI system's indirect contribution to harm and the ongoing response to address it.

饿了么将推出"多等5分钟/10分钟"新功能 为优秀外卖骑手提供鼓励机制

2020-09-08
金融界网
Why's our monitor labelling this an incident or hazard?
The article discusses a new feature involving an AI-supported system for delivery time management and incentives, but it does not describe any harm or potential harm caused by the AI system. There is no indication of injury, rights violations, or other harms directly or indirectly caused by AI. The focus is on a positive feature and company response to delivery rider risks, not on an incident or hazard. Therefore, this is Complementary Information providing context and updates on AI system use in delivery services.

饿了么将推出多等5分钟功能 你会使用吗?

2020-09-09
金融界网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the delivery platform's algorithm) that influences rider behavior and delivery timing. However, the article does not describe any realized harm or a plausible risk of harm directly caused by the AI system's development, use, or malfunction in this new feature. Instead, it reports on a new feature intended to mitigate pressure on riders and public reactions to it. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides context and updates on societal and governance responses to AI-driven issues in the delivery platform ecosystem.
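The "wait 5 more minutes" feature described above amounts to extending an order's promised deadline only when the customer consents. A minimal sketch of that idea (purely illustrative; all names are hypothetical and this is not Ele.me's actual implementation):

```python
from datetime import datetime, timedelta

def promised_deadline(order_placed: datetime,
                      base_eta_minutes: int,
                      customer_opts_to_wait: bool,
                      extra_minutes: int = 5) -> datetime:
    """Deadline shown to the rider: the base ETA, plus an optional
    customer-granted grace period (the opt-in button)."""
    deadline = order_placed + timedelta(minutes=base_eta_minutes)
    if customer_opts_to_wait:
        deadline += timedelta(minutes=extra_minutes)
    return deadline

placed = datetime(2020, 9, 9, 12, 0)
print(promised_deadline(placed, 30, False))  # 2020-09-09 12:30:00
print(promised_deadline(placed, 30, True))   # 2020-09-09 12:35:00
```

Because the extension only applies when a customer opts in, it leaves the underlying scheduling untouched, which is why critics quoted in these articles argue the feature transfers responsibility to users rather than changing the algorithm itself.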

美团回应"外卖员被困系统":调度系统会给骑手留出8分钟弹性时间

2020-09-09
金融界网
Why's our monitor labelling this an incident or hazard?
The dispatch system involves AI or algorithmic decision-making for task allocation and timing, which qualifies as an AI system. The article discusses the system's use and the company's efforts to enhance safety and incentives, but it does not report any realized harm or direct malfunction leading to harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides complementary information about ongoing improvements and responses to concerns, fitting the definition of Complementary Information.
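Meituan's reported "8 minutes of elastic time" per order can be read as a grace window applied in the lateness check rather than a change to the ETA itself. A hedged sketch of that reading (hypothetical names and values; not Meituan's actual logic):

```python
def is_late(actual_minutes: float, promised_minutes: float,
            elastic_buffer: float = 8.0) -> bool:
    """Count a delivery as late only once it exceeds the promised
    time plus the per-order elastic buffer."""
    return actual_minutes > promised_minutes + elastic_buffer

print(is_late(36.0, 30.0))  # False: within the 8-minute buffer
print(is_late(39.0, 30.0))  # True: buffer exhausted
```

Under such a rule the rider's effective deadline moves, but the pressure structure (a hard cutoff with penalties beyond it) remains in place.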

"外卖骑手"讲述生死时速 美团回应:没有做好,我们责无旁贷

2020-09-10
金融界网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the dispatch system's algorithmic challenges affecting delivery riders' safety and workload, indicating the presence of an AI system managing delivery assignments and timing. The harm involves risks to riders' health and safety due to system pressures and scheduling, which is a form of indirect harm caused by the AI system's use. Meituan's response acknowledges these issues and commits to improvements, confirming the system's role in the harm. Hence, this qualifies as an AI Incident due to indirect harm to people caused by the AI system's use.

谁把外卖骑手"困在系统里"?

2020-09-09
金融界网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of platform algorithms in pressuring delivery riders to meet strict delivery times, which leads to unsafe behaviors and risks of injury. The AI system's use in scheduling and performance evaluation is central to the harm experienced by the riders. The harm is realized (riders violating traffic rules and risking accidents), and the platforms acknowledge the problem and propose algorithmic changes. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

"多等5分钟"能否化解外卖配送竞速之困

2020-09-09
金融界网
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of delivery scheduling and dispatch algorithms that influence courier behavior and delivery times. The discussion centers on platform policy changes to mitigate safety risks caused by the pressure of strict delivery time requirements. There is no report of an actual harm event caused by AI malfunction or misuse, nor a near-miss or credible imminent risk of harm. Instead, the article provides updates on platform responses and industry discussions about balancing delivery speed and safety, which fits the definition of Complementary Information. It enhances understanding of AI's societal impact and governance responses without describing a new AI Incident or AI Hazard.

参与创造数十亿利润 却难分一杯羹?外卖小哥拼命 谁"饿"了?"美"了谁?

2020-09-10
金融界网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of algorithmic systems (AI systems) in setting delivery time limits that pressure delivery workers to engage in unsafe behaviors such as speeding, running red lights, and other traffic violations, which have led to increased accidents and injuries. This constitutes indirect harm to the health of a group of people (delivery workers), fulfilling the criteria for an AI Incident. The AI system's use in operational decision-making is a contributing factor to these harms. The platform responses are noted but do not negate the occurrence of harm. Hence, the event is best classified as an AI Incident.

新民快评|让每一单外卖暖胃又暖心

2020-09-10
新民网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses how the delivery platforms' algorithmic systems impose strict time constraints on riders, causing them to take risks that have resulted in traffic accidents, injuries, and fatalities. This constitutes indirect harm to the health and safety of a group of people due to the use of AI systems in managing delivery logistics. Therefore, this event qualifies as an AI Incident under the definition of harm to health caused directly or indirectly by the use of AI systems.

算法留给骑手多少温度?

2020-09-10
China Finance Online
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI algorithms for order assignment and route planning in food delivery platforms, which fits the definition of an AI system. The system's operation indirectly pressures riders into unsafe behaviors, posing plausible risks to their health and safety (harm category a). Since no actual injury or accident is reported, but the system's design and use create credible risks, this qualifies as an AI Hazard rather than an AI Incident. The article also discusses platform responses and societal concerns, but the main focus is on the potential harm arising from the AI system's use.

美团被指对配送员压榨

2020-09-09
China Finance Online
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses algorithmic management systems that set delivery time limits and control worker behavior, which are AI systems under the definition. The use of these AI systems has directly led to harm in the form of worker exploitation, excessive pressure, and indirectly to a fatal incident. Therefore, this qualifies as an AI Incident due to harm to groups of people (workers) caused by the use of AI systems in platform management.

央视:能帮外卖骑手“脱困”的关键不在系统而

2020-09-10
星岛环球网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of an algorithmic dispatch and delivery time management system that directly contributes to harm to the health and safety of delivery riders by pressuring them to violate traffic laws and work under dangerous conditions. This constitutes an AI Incident because the AI system's use has directly led to harm (injury risk and stress) to a group of people (delivery riders). The article's focus is on the harm caused by the AI system's use, not just on general commentary or complementary information.

多等5分钟,外卖小哥就安全了?美团被指转嫁责任

2020-09-11
中国经济网
Why's our monitor labelling this an incident or hazard?
The article involves AI systems indirectly through the mention of platform dispatch and scheduling algorithms that influence delivery times and rider safety. However, it does not report any realized harm (injury, rights violation, or other harms) directly caused by these AI systems, nor does it describe a specific event where AI malfunction or misuse led to harm. Instead, it focuses on platform responses, legal opinions, and suggestions for better regulation and algorithmic transparency. Therefore, it fits the category of Complementary Information, providing context and updates on societal and governance responses related to AI-driven platform management affecting delivery riders.

美团宣布改进外卖配送系统:每单留8分钟弹性时间

2020-09-09
中国经济网
Why's our monitor labelling this an incident or hazard?
The article discusses ongoing improvements and responses by Meituan to issues related to its AI-powered delivery dispatch system and rider safety. There is no indication of an AI system malfunction or harm caused by AI use. The content is primarily about the company's mitigation and enhancement efforts, which fits the definition of Complementary Information as it provides updates and responses to previously known challenges without reporting new harm or hazards.

外卖平台被指压缩骑手配送时间 饿了么、美团回应

2020-09-10
中国经济网
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI-based dispatch and scheduling systems to optimize delivery times. The compression of delivery times by these AI systems has indirectly caused harm to riders by pressuring them to rush, leading to fatigue and traffic violations, which are harms to health and safety. The platforms' responses indicate recognition of the AI system's role and plans to mitigate the harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use.

外卖平台压缩骑手配送时间?饿了么、美团回应将改进

2020-09-09
中国经济网
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI-driven scheduling and dispatch systems to set delivery times and routes. The compression of delivery times by these AI systems has indirectly led to harm by pressuring riders to speed or violate traffic rules, increasing safety risks. The article reports actual harm (traffic violations and safety concerns) linked to the AI system's use. The platforms' responses to adjust the systems confirm the AI system's role in causing these harms. Therefore, this event meets the criteria for an AI Incident due to indirect harm caused by AI system use.

外卖平台压缩骑手配送时间?饿了么、美团回应将改进

2020-09-10
中国经济网
Why's our monitor labelling this an incident or hazard?
The delivery platforms employ AI systems for dispatch and scheduling that directly influence rider behavior and delivery timing. The compression of delivery times by these AI systems has indirectly led to harm by pressuring riders to speed and violate traffic laws, posing risks to their health and safety. This fits the definition of an AI Incident, as the AI system's use has indirectly led to harm to a group of people (riders). The article also discusses the platforms' planned improvements to mitigate these harms, but the harm has already occurred, so it is not merely complementary information or a hazard.

解决外卖业困局需正视问题

2020-09-10
中国经济网
Why's our monitor labelling this an incident or hazard?
The dispatch system described is an AI system that uses big data and algorithms to assign delivery tasks with precise timing. The system's use directly contributes to riders speeding to avoid penalties, which has led to traffic accidents and safety harms. This fits the definition of an AI Incident because the AI system's use has indirectly led to harm to persons (riders and possibly others on the road). The article also notes platform efforts to address the issue, but the primary focus is on the harm caused by the AI system's operational pressures.

万亿美团刷屏!外卖骑手成了高危职业?饿了么深夜回应,网友炸了

2020-09-09
证券时报网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as algorithmic dispatch and routing systems controlling delivery riders' tasks and timing. These AI systems' use has directly led to harm: delivery riders engaging in risky behaviors to meet algorithmic demands, resulting in injuries and fatalities. The article provides data on rider injuries and traffic violations linked to these AI-driven systems. Therefore, this qualifies as an AI Incident due to direct harm to persons caused by the AI system's use. The introduction of a new feature by Ele.me is a response to the incident and does not change the classification.

“多等5分钟”有用吗

2020-09-11
opinion.dahe.cn
Why's our monitor labelling this an incident or hazard?
The delivery platform's algorithmic system qualifies as an AI system because it schedules and pressures riders to complete orders within tight timeframes. This system's use has directly led to risky behaviors by riders, such as ignoring traffic safety, which constitutes harm to health (a). The article also discusses mitigation measures, but the core issue is the AI system's role in causing harm through its operational pressure. Therefore, this event is an AI Incident.

新民快评|外卖要“速度”,也要有“温情”

2020-09-09
新民网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI algorithms driving delivery assignments that cause pressure on delivery workers, leading to harm such as traffic accidents and health risks, which fits the definition of harm caused indirectly by AI system use. However, it does not describe a specific AI Incident event but rather discusses systemic issues and a platform's response feature. This aligns with Complementary Information, as it provides context and updates on the broader societal and labor implications of AI use in delivery platforms without reporting a new discrete incident or hazard.

“外卖骑手,困在系统里”刷屏!饿了么宣布将上线“多等5分钟”功能,网友吐槽:转移矛盾

2020-09-09
新民网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an algorithmic system driving delivery riders to increase speed and order volume, which leads to dangerous behaviors and occupational hazards. The harm is indirect but clear: the AI system's incentives cause riders to take risks that endanger their health and safety. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to harm to a group of people. The announcement of a new feature to allow customers to wait longer and the platform's response are complementary but do not negate the incident classification.

上海消保委回应饿了么"多等5分钟":逻辑错误

2020-09-09
新民网
Why's our monitor labelling this an incident or hazard?
While the platform's delivery system likely involves AI or algorithmic decision-making to optimize order assignments and delivery times, the article does not describe any direct or indirect harm caused by these AI systems. It mainly reports on new features aimed at improving service and the ethical considerations around platform responsibility. There is no indication of injury, rights violations, infrastructure disruption, or other harms linked to AI use. Therefore, this is best classified as Complementary Information, providing context and updates on AI-related platform practices and societal responses without reporting an AI Incident or Hazard.

外卖小哥为抢时间拼命的话题近日上热搜 美团与饿了么分别作出回应

2020-09-11
华声在线
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of intelligent algorithms by food delivery platforms to estimate and compress delivery times, which pressures couriers to engage in risky behaviors like running red lights and speeding. This AI system's use in dispatching and timing directly contributes to harm to the health and safety of delivery workers, fulfilling the criteria for an AI Incident under harm category (a). The platforms' responses acknowledge the problem but do not negate the fact that harm has occurred due to the AI system's operational use. Thus, the event is best classified as an AI Incident.

饿了么将发布新功能 增加"多等5分钟"按钮供用户自主选择

2020-09-10
华声在线
Why's our monitor labelling this an incident or hazard?
The article involves an AI system in the form of the delivery platform's algorithmic order management, which influences delivery riders' work conditions. However, the event is about the platform introducing a new feature to allow users to opt to wait longer, which is a governance or policy response to previously reported issues. There is no direct or indirect harm currently caused by the AI system described here, nor is there a plausible future harm from the new feature itself. The main focus is on the platform's response and user choice, which fits the definition of Complementary Information as it provides context and updates related to AI system impacts and responses rather than reporting a new incident or hazard.

三湘时评丨算法无情,人可多一点温暖

2020-09-11
华声在线
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an algorithm that plans delivery routes for riders, which is an AI system involved in real-time decision-making. The algorithm's outputs have led to unsafe conditions, such as routing that includes illegal or dangerous actions, which directly threatens the health and safety of delivery riders and the public. The platform's failure to correct or mitigate these harms further implicates the AI system's use in causing harm. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to people and communities. The article also discusses the need for regulatory and social responses, but the primary focus is on the harm caused by the AI system's use, not just complementary information or future risks.

近日,《外卖骑手,困在系统里》这篇文章引发热议,文章呈现了外卖平台算法系统和骑手实际工作的诸多冲突,展现了骑手为准时送餐要经历的诸多不易。

2020-09-10
证券之星
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used for route planning and delivery time management in food delivery platforms. The AI system's design and use have directly contributed to physical harm risks and actual injuries and deaths among delivery riders, fulfilling the criteria for an AI Incident. The article details realized harms (injuries, fatalities) linked to the AI system's operational pressures and route guidance. The platforms' subsequent algorithmic adjustments are responses to this incident, not the primary event. Therefore, this is classified as an AI Incident due to direct and indirect harm caused by the AI system's use.

美团通过官方微信回应骑手问题,美团称,“没做好就是没做好,没有借口。系统的问题,终究需要系统背后的人来解决,我们责无旁贷。”

2020-09-09
证券之星
Why's our monitor labelling this an incident or hazard?
The dispatch system is an AI system involved in scheduling and routing delivery riders. The article discusses system problems and the company's commitment to fix them, including measures to mitigate adverse conditions. There is no indication that these system issues have directly or indirectly caused harm such as injury, rights violations, or operational disruption. The focus is on addressing past complaints and improving the system, which qualifies as complementary information rather than an incident or hazard.

独家 我们和几位外卖小哥聊了聊:每天都处在危险中,这两天顾客...

2020-09-10
thehour.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the delivery platform's system algorithm drives riders to accelerate and sometimes violate traffic rules to meet delivery deadlines, causing accidents and injuries. This is a clear example of an AI system's use indirectly leading to harm to people. The harms include physical injury and unsafe working conditions. Hence, this qualifies as an AI Incident under the definition of harm to persons caused directly or indirectly by the use of an AI system.

外卖小哥成“高危职业”?继饿了么之后,美团刚刚回应了……

2020-09-09
thehour.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithmic and data-driven systems that control delivery times and impact delivery riders' income and job security. These systems qualify as AI systems because they involve data-driven scheduling and performance evaluation. The harm is indirect but real: the system's strict timing pressures cause stress and risk to delivery workers, which can be considered harm to a group of people (health and well-being). The platforms' responses indicate recognition of this harm and efforts to mitigate it. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in managing delivery workers.

困在系统的外卖骑手刷屏背后:平台商家顾客小哥的“四角”难题怎么破

2020-09-11
thehour.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-driven algorithms in food delivery platforms that dictate delivery times and order assignments. These algorithms cause delivery riders to rush, sometimes risking traffic accidents and personal safety, which constitutes harm to health and safety (a). The platforms' responses confirm the AI system's role in causing these harms. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use in dispatching and timing orders.

独家 热评|外卖骑手,不止受困于系统

2020-09-09
thehour.cn
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the form of algorithmic route optimization and data-driven delivery platforms affecting food delivery riders. The harms described (high risk to riders, traffic safety concerns) are ongoing and linked to the use of AI systems in delivery logistics. However, the article does not describe a specific event or incident where AI use directly or indirectly caused harm, but rather a systemic situation and potential future improvements. Therefore, it does not qualify as an AI Incident. It also does not focus on a specific plausible future harm event but discusses general systemic risks and responses, so it is not an AI Hazard. The article mainly provides commentary and contextual information about AI's societal impact and responses, fitting the definition of Complementary Information.

独家 小时记者一路陪同,外卖小哥说出心里话:送达时间不是关键...

2020-09-10
thehour.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the delivery riders are driven by system algorithms to accelerate order completion, which causes them to engage in unsafe behaviors resulting in accidents and injuries. The AI system's role in pushing riders to speed and violate traffic rules is a contributing factor to these harms. The harms are realized (accidents, injuries), not just potential. Hence, this qualifies as an AI Incident due to indirect harm to health and safety caused by the AI system's use and pressure.

[详细]

2020-09-10
tianjinwe.enorth.com.cn
Why's our monitor labelling this an incident or hazard?
The delivery platform's algorithm is an AI system that schedules delivery times. Its use has indirectly caused harm by pressuring riders to violate traffic rules and work under unsafe conditions, constituting harm to health and safety (a). The platform's response does not remove the harm but shifts responsibility to consumers, which is a governance and ethical issue but does not negate the harm caused. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's use.

起底“骑手”生意:野蛮生

2020-09-10
cb.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions system algorithms and data used by delivery platforms to manage riders, which qualifies as AI system involvement. The harms discussed include traffic accidents and labor risks, which are linked to the AI system's use. However, no specific incident of harm caused by AI is reported, nor is there a clear event of plausible future harm described. The focus is on systemic issues, platform labor dynamics, and ongoing discussions about algorithmic management and its consequences. This fits the definition of Complementary Information, as it provides supporting context and understanding of AI impacts on labor without describing a discrete AI Incident or Hazard.

饿了么将增加“多等5分钟”按钮供用户自主选择

2020-09-09
xxsb.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system managing food delivery logistics and rider performance through algorithms. The new features respond to concerns about the system's pressure on riders and provide consumers with options to mitigate delivery time stress. However, there is no direct or indirect harm reported as occurring from the AI system's use; rather, the article discusses a response to existing issues and a new feature rollout. Therefore, this is Complementary Information as it provides context and updates on societal and governance responses to AI system impacts in the delivery sector, without describing a new AI Incident or AI Hazard.

谁把外卖骑手“困在系统里”?美团、饿了么齐回应

2020-09-10
iceo.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the delivery platforms' system algorithms and data drive the 'on-time delivery' rule, which pressures riders to violate traffic laws and results in injuries and deaths. This is a direct harm to the health of a group of people caused by the AI system's use in dispatching and scheduling. The harm is materialized and documented with accident data. The platforms' subsequent announcements are responses to this incident, not the primary event. Hence, the event meets the criteria for an AI Incident involving harm to people caused by the use of AI systems.

超八成网友支持不着急时点外卖多等5分钟

2020-09-09
环球网
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system or algorithmic system that dynamically calculates delivery times considering various factors such as weather, traffic, and rider workload. This system influences delivery time estimates and rider scheduling, which directly relates to rider safety and delivery efficiency. However, the event describes a proactive measure to reduce harm (rider accidents or unsafe riding) by adjusting delivery time expectations and providing incentives, rather than an incident or harm occurring. There is no indication of realized harm or malfunction causing injury or rights violations. Instead, it is a governance and safety improvement measure responding to known risks. Therefore, this is best classified as Complementary Information, as it provides context and updates on societal and technical responses to AI-related delivery time management and rider safety issues.
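The dynamic delivery-time calculation mentioned here, factoring in weather, traffic, and rider workload, can be pictured as multiplicative adjustments on a base travel estimate. A toy sketch of that framing (the factor names and values are assumptions, not the platform's model):

```python
def dynamic_eta(base_minutes: float,
                weather_factor: float = 1.0,
                traffic_factor: float = 1.0,
                rider_load_factor: float = 1.0) -> float:
    """Scale a base travel estimate by situational multipliers
    (values above 1.0 lengthen the promised time, e.g. rain or congestion)."""
    return base_minutes * weather_factor * traffic_factor * rider_load_factor

print(round(dynamic_eta(20.0), 1))                # 20.0 (clear conditions)
print(round(dynamic_eta(20.0, weather_factor=1.2,
                        traffic_factor=1.1), 1))  # 26.4 (rain plus traffic)
```

The safety question raised by these articles is which direction such factors are tuned: loosening the estimate in bad conditions protects riders, while tightening it for competitiveness does the opposite.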

算法,到底是不是"困"住外卖骑手的真凶?

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (e.g., intelligent dispatching, ETA prediction, dynamic pricing) used by food delivery platforms that directly impact riders' working conditions and safety. The AI systems' use leads to pressure on riders to meet tight deadlines, which has resulted in frequent accidents and unsafe working conditions, fulfilling the criteria for an AI Incident due to harm to health and safety. The harm is realized and directly linked to the AI system's operational use, not merely a potential risk or general commentary, so it is classified as an AI Incident.
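"Intelligent dispatching" in this context reduces, at its simplest, to matching each incoming order with a nearby rider who has spare capacity. A greatly simplified greedy sketch (the data shapes are assumptions; production dispatchers optimize over many orders and constraints at once):

```python
from math import hypot

def assign_order(order_xy, riders):
    """Greedy dispatch: give the order to the closest rider with spare
    capacity. `riders` is a list of (rider_id, (x, y), open_orders, capacity)."""
    candidates = [(hypot(order_xy[0] - x, order_xy[1] - y), rider_id)
                  for rider_id, (x, y), open_orders, capacity in riders
                  if open_orders < capacity]
    return min(candidates)[1] if candidates else None

riders = [("a", (0.0, 0.0), 2, 2),   # already at capacity, skipped
          ("b", (3.0, 4.0), 0, 2),   # 5 units away, free
          ("c", (6.0, 8.0), 1, 2)]   # 10 units away, one slot left
print(assign_order((0.0, 0.0), riders))  # b
```

Even this toy version shows the pressure point the rationale describes: capacity and distance are the only variables, so rider safety enters the optimization only if it is explicitly added as a constraint.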

外卖平台推"多等X分钟"引争议 效率与安全难题该如何破解?

2020-09-11
Sina
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of delivery platform algorithms that optimize delivery times and performance metrics. These algorithms indirectly contribute to harm by pressuring delivery workers to engage in unsafe behaviors, leading to increased traffic accidents (harm to health). The article reports on realized harm (accidents) linked to the use of these AI systems and discusses responses and proposed solutions. Therefore, this qualifies as an AI Incident because the AI system's use has indirectly led to harm to people (delivery workers).

一个"被困在系统里的美团骑手":生活所迫想多挣点钱

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system—the delivery platform's algorithmic order dispatch and timing system—that directly influences riders' behavior and working conditions. The system's strict time constraints and penalty mechanisms lead riders to take dangerous risks, resulting in injuries and deaths, which are harms to health and safety. The harm is indirect but clearly linked to the AI system's use and operational design. Hence, this qualifies as an AI Incident under the definition of harm to health caused indirectly by the AI system's use.
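The penalty mechanism described here, where lateness reduces per-order pay, can be sketched as a simple deduction rule. This is a hypothetical illustration of the incentive structure, not Meituan's actual pay formula:

```python
def order_payout(base_pay: float, minutes_late: float,
                 penalty_per_minute: float) -> float:
    """Per-order pay under a late-deduction rule: on-time orders earn
    full pay; each late minute deducts a fixed amount, floored at zero."""
    penalty = max(0.0, minutes_late) * penalty_per_minute
    return max(0.0, base_pay - penalty)

print(order_payout(8.0, 0.0, 0.5))    # 8.0 (on time)
print(order_payout(8.0, 4.0, 0.5))    # 6.0 (4 minutes late)
print(order_payout(8.0, 100.0, 0.5))  # 0.0 (penalty capped at base pay)
```

Under any rule of this shape, every minute saved is directly monetizable, which is the mechanism the rationale links to riders taking dangerous risks to avoid deductions.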

骑手文章刷屏,美团称暂不回应,饿了么回应了......

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the food delivery platform's order and rider management. However, it does not report any incident or hazard involving harm or plausible harm caused by AI. Instead, it reports a new feature designed to improve rider conditions and customer experience, which is a societal and governance response to prior concerns. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

央视评论:外卖小哥拼命、谁"饿"了?"美"了谁?(视频)

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
While the article mentions system algorithms that compress delivery times, which implies the use of AI or algorithmic decision-making, it does not report any direct or indirect harm caused by the AI system itself, nor does it describe a specific event where AI led to injury, rights violations, or other harms. The discussion centers on societal responses and opinions about labor conditions and platform policies, without detailing an AI incident or hazard. Therefore, this is best classified as Complementary Information, providing context and societal response to AI-driven labor management systems.

困在系统里的 何止外卖小哥?

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (algorithmic delivery time management) that controls delivery workers' schedules and enforces penalties and rewards, causing them to speed and violate traffic rules, resulting in frequent injuries and dangerous conditions. This is a direct link between AI system use and harm to health and safety, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in creating these conditions.

外卖骑手"困"在系统里 亚马逊快递员的手机"捆"在树上

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system—the Amazon Flex app uses AI-driven location and dispatch algorithms to assign delivery tasks. The drivers' practice of hanging phones to spoof their location is a misuse of this AI system. While this manipulation violates company policies and creates unfair competition, the article does not report any realized harm such as injury, legal rights violations, or property damage. The potential for harm exists, including unfair labor conditions and possible systemic exploitation, but these are not confirmed incidents. Therefore, the event fits the definition of an AI Hazard, as the misuse of the AI system could plausibly lead to significant harms if unchecked. It is not Complementary Information because the main focus is on the misuse and its implications, not on responses or updates to prior incidents. It is not Unrelated because the AI system is central to the event.

Who Trapped Delivery Riders in the System? Fudan Professor: The Algorithm Isn't Wrong, the Efficiency Evaluation Mechanism Is

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The article clearly describes how AI-driven algorithms used by food delivery platforms set delivery time expectations and penalties that pressure riders to engage in risky behaviors, leading to increased accidents and injuries. The AI system's role in shaping these harmful outcomes is direct and significant, as the algorithmic efficiency metrics constrain rider behavior and contribute to unsafe conditions. Although the professor argues the algorithm itself is not 'wrong,' the system's design and use of AI outputs have caused real harm. This fits the definition of an AI Incident, as the AI system's use has directly led to injury and harm to people and communities.

Ele.me's 5 Minutes Lacks Sincerity, Meituan's 8 Minutes Is Too Sly

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (delivery dispatch and evaluation algorithms) that control delivery times and penalize couriers for delays, leading to unsafe working conditions and labor exploitation. These systems' use has directly led to harm to the health and safety of couriers (harm category a) and labor rights violations (harm category c). The article critiques platform responses but confirms the presence of these harms. Therefore, the event is an AI Incident due to realized harm caused by AI system use.

Ele.me's Late-Night Announcement: New "Wait 5 More Minutes" Feature Sparks Heated Debate

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The platform's delivery time management system uses AI to estimate and enforce delivery times, which shapes courier behavior and safety. The new feature responds to concerns about harm (couriers risking their safety to avoid penalties) but does not itself report any realized harm or malfunction of the AI system. There is no indication that the AI system caused injury, rights violations, or other harms, nor that the feature introduces a credible risk of future harm. It is instead a governance and operational update aimed at improving conditions, making it Complementary Information rather than an Incident or Hazard.

Ele.me Wrote a First Draft; Meituan Edited It and Posted

2020-09-10
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly references a 'system' controlling delivery times and rider performance, which can be reasonably inferred as an AI or algorithmic system managing logistics and worker evaluation. The system's design and use directly cause harm by enforcing unrealistic delivery times, leading to unsafe behaviors (e.g., speeding) and economic penalties for riders, which are labor rights violations and health harms. The platforms' responses do not mitigate these harms but rather shift responsibility, confirming the ongoing impact. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in labor management.

Food Delivery Has Become a High-Risk Job! Ele.me to Launch "Wait 5 More Minutes" Feature

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the delivery platform's algorithm and data-driven system) that indirectly leads to harm (increased traffic accidents and risk to delivery riders' health and safety). The system's use causes the harm through its operational pressure on riders. Therefore, this qualifies as an AI Incident. The announcement of the new feature is a complementary response but the main event is the harm caused by the AI system's use.

Ele.me: We Want to Give Users the Choice; Riders Won't See the Added Time

2020-09-10
Sina News Center
Why's our monitor labelling this an incident or hazard?
The article discusses the development and planned use of a new feature in the Ele.me delivery platform that involves system changes affecting delivery time management. While AI systems are likely involved in the platform's order and delivery time management, the article does not report any realized harm or direct incidents caused by AI malfunction or misuse. Instead, it focuses on the platform's response and planned improvements to address rider welfare concerns. Therefore, this is complementary information about ongoing AI system use and governance responses rather than an incident or hazard.

What Do Delivery Workers Think? On the System Dilemma and "Wait 5 More Minutes"

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through the platform's delivery time algorithms and dispatch system, which affect delivery workers' conditions. The new feature allowing users to add extra wait time is a platform response to systemic issues, reflecting governance and operational adjustments. There is no indication of actual harm caused by AI malfunction or misuse, nor a credible risk of future harm described. The discussion centers on improving the system and user experience, with delivery workers' opinions and suggestions included. This fits the definition of Complementary Information as it updates on societal and governance responses to AI-related operational challenges without reporting an AI Incident or AI Hazard.

Who Stands Behind the System That Traps Riders?

2020-09-10
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-based real-time delivery optimization systems that shorten delivery times, directly pressuring riders into unsafe practices that put them and others at risk. The AI system's role is pivotal in creating these harms through its design and use. The harms include injury risk to riders (harm to health) and broader public safety concerns. The platforms' subsequent mitigation efforts confirm recognition of these harms. This is therefore an AI Incident, as the AI system's use has directly led to significant harm.

Ele.me: 5 Minutes; Meituan: 8 Minutes! Who Can Resolve the Delivery Speed Race?

2020-09-10
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-based dispatch algorithms that assign delivery tasks with tight time constraints, causing delivery riders to engage in risky behaviors that have resulted in injuries and fatalities. The AI system's use and its operational design directly contribute to harm to persons (riders). The article provides data on rider injuries and fatalities linked to these pressures, confirming realized harm. Platform responses aim to mitigate these harms but do not negate the fact that the AI system's use has already led to significant harm. Therefore, this event meets the criteria for an AI Incident due to indirect harm to health caused by AI system use.

Delivery Riders Trapped in Accidents: Do On-the-Job Accidents Count as Work Injuries, and Who Is Liable?

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly references the algorithmic systems used by food delivery platforms to assign orders and manage rider performance, which qualifies as AI system involvement. The harms discussed include injury risks to riders and third parties due to the pressure created by these AI systems to deliver faster, as well as labor rights violations due to unclear employment relationships and lack of insurance. However, the article does not describe a particular AI malfunction or misuse event that directly caused harm; rather, it highlights systemic issues and potential for harm inherent in the AI-driven delivery system. This fits the definition of an AI Hazard, where the AI system's use plausibly leads to harm, but no specific incident is reported. The article also includes legal and social responses, but these serve as context rather than the main focus, so it is not Complementary Information. It is not unrelated because AI systems are central to the described issues.
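The rationale above spells out the same three-way test the monitor applies throughout this page: Incident for realized harm, Hazard for plausible future harm, Complementary Information otherwise. As a rough illustration only (the function, flags, and label strings below are assumptions for the sketch, not the monitor's actual implementation), the rule can be expressed as:

```python
def classify(ai_involved: bool, realized_harm: bool, plausible_future_harm: bool) -> str:
    """Hypothetical sketch of the monitor's labelling rule."""
    if not ai_involved:
        return "Unrelated"            # AI system not central to the event
    if realized_harm:
        return "AI Incident"          # harm has already occurred
    if plausible_future_harm:
        return "AI Hazard"            # credible risk, not yet realized
    return "Complementary Information"  # context, responses, governance updates

# The case above: AI is central, no realized harm is reported,
# but misuse could plausibly lead to harm -> "AI Hazard"
print(classify(ai_involved=True, realized_harm=False, plausible_future_harm=True))
```

Under this sketch, the many entries on this page that document rider injuries land in "AI Incident", while platform responses such as the "wait 5 more minutes" announcements land in "Complementary Information".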

Delivery Riders Are Trapped, and Ele.me Gave the Wrong Answer

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of the platform's algorithmic system in pushing delivery riders to work under unsafe conditions, causing harm to their health and safety. The system's use of data and algorithms to maximize order throughput is a clear example of an AI system influencing real-world outcomes. The harm is indirect but real, as the system's design and operation create unsafe working pressures. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to a group of people. The article also discusses the platform's response, but the primary focus is on the harm caused by the AI system's operation, not just the response or complementary information.

Outside the System, 4 Million Delivery Riders Chat and Vent in This Community

2020-09-10
Sina
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through the delivery dispatch and location systems used by riders, which are part of the platform's AI infrastructure. The riders discuss system bugs and penalties related to delivery timing, indicating AI system involvement in their work. However, no direct or indirect harm caused by AI is reported, nor is there a credible risk of future harm described. The main focus is on the riders' community, their shared experiences, and Meituan's responses to rider stress, which enriches understanding of the AI ecosystem's social context. Thus, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Who "Trapped" Delivery Riders in the System?

2020-09-09
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of platform algorithms and data-driven systems that dictate delivery timing and rider performance, which are AI systems by definition. These systems have directly led to unsafe working conditions and pressure on riders, causing harm risks (injury from traffic violations) and labor rights issues. The platforms' responses to modify the algorithms and add flexibility confirm the AI system's central role. The harm is realized or ongoing, not just potential, so this is an AI Incident rather than a hazard or complementary information. The article does not merely report on AI features or governance responses but details the harm caused by AI system use in the delivery platforms.

Can "Wait 5 More Minutes" Resolve the Delivery Speed Race?

2020-09-09
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of delivery scheduling and dispatch algorithms that influence delivery time limits and rider assignments. The discussion centers on the platforms' introduction of features to alleviate time pressure and improve safety, which addresses known issues but does not describe a new harm event or a credible risk of harm that has not yet materialized. There is no report of injury, rights violation, or other harms directly or indirectly caused by AI system malfunction or misuse. Instead, the article reports on platform policy changes and public discourse about delivery time pressures and safety, which fits the definition of Complementary Information as it updates on societal and governance responses to AI-related operational challenges.

Has Food Delivery Become a High-Risk Job? Meituan Yet to Respond; Ele.me's Late-Night Reply Draws Controversy

2020-09-09
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of platform algorithms that control order dispatch and delivery timing, which are AI systems managing complex scheduling and routing tasks. These algorithms' outputs (e.g., short delivery times, penalties for delays) directly influence rider behavior, leading to risky actions and accidents. The harm to riders' health and safety is documented with statistics on injuries and fatalities, showing realized harm. Therefore, this event meets the criteria for an AI Incident because the AI system's use has directly and indirectly led to injury and harm to a group of people (delivery riders).

Meituan Responds to the Rider Controversy: Dispatch System Will Leave Riders an 8-Minute Buffer

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The dispatch system is an AI system that influences rider behavior and timing, contributing indirectly to harm (traffic accidents and occupational hazards). The article does not report a new incident but rather Meituan's official response and planned mitigation measures. This fits the definition of Complementary Information, as it updates on remediation and governance responses to an existing AI Incident rather than describing a new incident or hazard.

No One Can Stand Apart from the "System" That Plagues Delivery Riders

2020-09-10
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (algorithmic dispatch and delivery optimization) and discusses their use and social impact on delivery workers. However, it does not describe any direct or indirect harm (injury, rights violation, disruption, or other significant harm) caused by the AI system. Nor does it describe a credible risk of future harm that would qualify as an AI Hazard. Instead, it focuses on the systemic effects, societal context, and a platform's response to concerns, which fits the definition of Complementary Information. Therefore, the event is best classified as Complementary Information.

Latest! Meituan Responds Too

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as the platform's dispatch and routing algorithm that influences delivery assignments and routes. The system's outputs have directly led to harm by pressuring riders into unsafe behaviors and stressful conditions, which is a clear harm to health and safety (a). Meituan's response confirms the system's role and the harm caused. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm to people (the delivery riders).

Meituan, Ele.me: Why Should I Wait a Few More Minutes for You?

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of algorithmic dispatch and routing systems by food delivery platforms (Meituan and Ele.me) that optimize delivery times aggressively. These AI systems' outputs directly influence riders' behavior and working conditions, leading to a high incidence of injuries and fatalities, which is harm to people. The harm is indirect but clearly linked to the AI system's use and design. The article also discusses the platforms' responses, which are attempts to mitigate the harm but do not negate the fact that harm has occurred. Thus, the event meets the criteria for an AI Incident due to indirect harm caused by the AI system's use in real-world operations affecting rider safety and social order.

Delivery Riders' Plight Is Not a Problem for Algorithms and New Occupations Alone to Solve

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses algorithmic systems used by food delivery platforms to manage and optimize delivery times and order assignments. These AI systems directly influence the working conditions of delivery riders, creating a high-risk environment described as a 'race with death.' The harm includes physical danger and psychological stress, which fits the definition of injury or harm to a group of people caused indirectly by AI system use. The article also mentions platform measures responding to these issues, but the core problem remains the algorithmic pressure causing harm. Thus, this qualifies as an AI Incident due to the realized harm linked to AI system use in the delivery platforms.

What 5 Minutes Can't Solve, 8 Minutes Can't Either

2020-09-10
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI algorithms for dispatching and routing delivery riders, which directly leads to increased work pressure and safety risks for the riders, constituting harm to their health and well-being. The harm is realized and ongoing, not merely potential. The platforms' subsequent mitigation measures confirm recognition of the harm caused. Thus, the event meets the criteria for an AI Incident as the AI system's use has directly led to harm to a group of people (delivery riders).

Algorithms Have Pushed Delivery Riders into a High-Risk Group: Is Waiting 5 More Minutes the Answer?

2020-09-09
Sina
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI algorithms in food delivery platforms that directly influence delivery time expectations and rider behavior. This use has indirectly led to harm by pressuring riders to take unsafe risks, such as rushing and potentially violating traffic laws, which threatens their health and safety. The article describes realized harm to a group of people (delivery riders) due to the AI system's operational use, fitting the definition of an AI Incident. The discussion of platform responses and calls for regulation are complementary but do not negate the incident classification.

Xiake Dao: It's Not Just Delivery Riders Who Are Trapped in the System

2020-09-10
Sina News Center
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithms and systems that optimize delivery routes and times, which can be reasonably inferred as AI systems influencing work conditions. The harms discussed include physical risks (accidents, labor strain) and psychological stress due to system-driven pressures. However, these harms are described in a general, systemic, and ongoing manner rather than as a discrete event or incident. There is no new specific AI Incident or AI Hazard described; rather, the article offers a broader societal reflection on AI's role in labor and life, fitting the definition of Complementary Information as it enhances understanding of AI's societal impacts without reporting a new incident or hazard.

Delivery Platforms' "Wait X More Minutes" Sparks Controversy: How to Crack the Efficiency vs. Safety Dilemma?

2020-09-11
Sina Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of delivery platform algorithms that set strict delivery time targets. These algorithms indirectly contribute to harm by pressuring delivery workers to engage in unsafe behaviors, resulting in increased accidents and injuries. The harm is realized and ongoing, making this an AI Incident. The article also discusses responses and potential improvements, but the primary focus is on the harm caused by the AI system's use in the delivery platforms.

Quick Comment | Where the Logic of Ele.me's "Wait 5 More Minutes" Goes Wrong

2020-09-10
xdkb.net
Why's our monitor labelling this an incident or hazard?
The platform's algorithmic management of delivery riders is an AI system involvement, as it uses algorithms to evaluate rider performance and delivery times. The article critiques the social consequences of these AI-driven policies but does not report any realized harm or incident caused by the AI system. Nor does it describe a plausible future harm event. Instead, it provides a critical perspective on the platform's use of AI and its impact on stakeholders, which fits the definition of Complementary Information. There is no direct or indirect harm caused by the AI system reported, so it is not an AI Incident or AI Hazard.

Meituan Responds on Rider Safety: System Problems Must Ultimately Be Solved by the People Behind the System

2020-09-09
NetEase
Why's our monitor labelling this an incident or hazard?
The dispatch and routing system is an AI system that directly influences delivery riders' schedules and routes. The article details how the system's algorithmic decisions have led to unsafe conditions, such as riders violating traffic laws to avoid penalties, which constitutes harm to health and safety. The company's acknowledgment and planned system optimizations confirm the AI system's role in causing harm. Therefore, this event meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

Xiake Dao: It's Not Just Delivery Riders Who Are Trapped in the System

2020-09-10
NetEase
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithms and systems that optimize delivery routes and times, which can be reasonably inferred to involve AI systems. The harms discussed (e.g., accidents, labor exploitation) are linked indirectly to the use of these AI systems. However, the article does not describe a particular event or incident where AI directly or indirectly caused harm, nor does it present a plausible future harm scenario beyond the general systemic critique. It is primarily a societal and philosophical reflection on the role of AI and technology in labor conditions, without reporting a new incident or hazard. Therefore, it fits best as Complementary Information, providing context and critical insight into AI's societal impacts rather than documenting a specific AI Incident or Hazard.

Ele.me to Launch New "Wait 5/10 More Minutes" Feature

2020-09-09
NetEase Tech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system insofar as the delivery platform uses algorithmic dispatch and timing systems that impact delivery riders' work conditions. The article references the systemic pressures created by these algorithms, which have contributed to the high-risk nature of delivery work. However, no specific AI-related harm incident is described, nor is there a clear imminent risk of harm detailed. The company's announcement of a new feature and incentive mechanism is a response to these systemic issues, aiming to mitigate risks. Therefore, this is best classified as Complementary Information, providing context and updates on governance and societal responses to AI-driven platform work challenges.

Ma Liang: How to Resolve Delivery Riders' "Life-and-Death Ordeal"

2020-09-09
NetEase
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the presence and use of a strong algorithmic system (an AI system) by food delivery platforms to monitor and control delivery riders' behavior. This AI system's outputs and management practices have directly pressured riders into dangerous behaviors that risk their health and safety, fulfilling the criteria for harm to persons. The harm is not speculative but ongoing and systemic, as riders repeatedly face life-threatening situations due to the AI-driven operational demands. Hence, this is an AI Incident involving harm to health and safety caused by the AI system's use.

Meituan Rolls Out 5 Measures to Improve Its Delivery System, Leaving Riders an 8-Minute Buffer

2020-09-09
NetEase
Why's our monitor labelling this an incident or hazard?
The dispatch system uses AI algorithms to assign delivery tasks and optimize routes, and its use has indirectly harmed riders by pressuring them into unsafe behaviors that contribute to accidents and traffic violations. However, the article mainly reports Meituan's measures to address these harms rather than describing a new or ongoing incident. Since the harms were already documented elsewhere and the article focuses on mitigation, it is best classified as Complementary Information, providing updates on responses to a previously known AI Incident.

Meituan Responds Too: 8 Minutes

2020-09-09
NetEase News Center
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (algorithmic dispatch and delivery time calculation) used by food delivery platforms that have directly contributed to increased traffic accidents and unsafe conditions for delivery riders, constituting harm to health and safety. The harm is realized and ongoing, not just potential. The platforms' responses to add time buffers and optimize algorithms are complementary but do not negate the incident classification. The involvement of AI in causing harm to a vulnerable worker group fits the definition of an AI Incident under harm category (a) injury or harm to health of a group of people.

Delivery Rider Safety: The Market's Pain Point, a Boiling Point for Corporate Innovation

2020-09-11
NetEase
Why's our monitor labelling this an incident or hazard?
The article explicitly references algorithmic systems and incentive mechanisms used by delivery platforms, which can be reasonably inferred to involve AI systems managing order assignments, routing, and performance metrics. The discussion centers on the systemic challenges and potential improvements rather than a specific harmful event caused by AI. There is no report of realized harm or a near-miss incident directly linked to AI malfunction or misuse. The focus is on market pain points and the need for innovation, making this a case of complementary information that enhances understanding of AI's role in delivery platforms and the societal implications, rather than an AI Incident or Hazard.

Online Vote Involving Ele.me Suspected of Interference; Bai Yansong: Is the Mysterious Force Surnamed "Water" (Paid Posters)?

2020-09-10
NetEase
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it references platform algorithms affecting delivery time settings and user voting outcomes. However, there is no clear evidence of realized harm such as injury, rights violations, or disruption caused by these AI systems. The article mainly reports on concerns, public reactions, and platform explanations without describing an incident or hazard. Therefore, it fits best as Complementary Information, providing context and updates on AI-related societal and governance issues rather than reporting an AI Incident or Hazard.

Delivery Riders Flood the Feed: Platform Responsibility Under Algorithmic Hegemony

2020-09-10
NetEase
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (algorithm) used by food delivery platforms to manage and optimize delivery times. This algorithm's use directly leads to harmful outcomes for delivery riders, including increased traffic violations and risk of injury, which qualifies as harm to health and labor rights violations. The algorithm's role is pivotal in driving these harms through its design and enforcement mechanisms. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use.

Capital is "half angel, half devil": it can do good or do harm. Corporate self-discipline is the cornerstone, but external constraints from the media, regulators, and the law are also needed. The problems facing Chinese and American delivery platforms are partly shared and partly distinct, and the regulation of internet platforms remains a relatively new issue in every country, still awaiting study and...

2020-09-09
zhou-qiong.blog.caixin.com
Why's our monitor labelling this an incident or hazard?
The article explicitly references the use of algorithms controlling delivery riders, which qualifies as AI systems managing work and logistics. The harms include labor exploitation, violation of traffic laws leading to potential physical harm, and systemic social issues affecting workers and communities. These harms have materialized and are linked directly or indirectly to the AI systems' use. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-driven platform management and its societal impact.

In the August just past, the 120 emergency service was run off its feet. According to the Shanghai Medical Emergency Center, ambulances in the central urban districts made around 1,300 runs per day in August, up 14.88% from the second quarter. Behind these figures, 120 emergency doctors focused on the safety of food delivery riders as a group: "every week...

2020-09-09
kangsitanding.blog.caixin.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how the AI-driven algorithmic systems used by delivery platforms assign orders and optimize routes in a way that pressures delivery riders to speed and take risks, resulting in a high incidence of traffic accidents and injuries. This constitutes direct harm to a group of people caused by the use of an AI system. The AI system's role is pivotal in creating unsafe working conditions, leading to injury and death, which fits the definition of an AI Incident under harm to health of persons. Therefore, this event is classified as an AI Incident.

Trending topics change daily, and now food delivery once again "dominates the field." Recently, "Delivery Riders, Trapped in the System" went viral across the internet. Analyzing the issue from multiple angles, the article stresses that, driven by algorithms, delivery riders are run ragged, pushed to violate traffic rules and race against death. "How did the delivery industry...

2020-09-10
kangsitanding.blog.caixin.com
Why's our monitor labelling this an incident or hazard?
The delivery platforms' algorithms are AI systems that manage order dispatch and delivery timing. Their use has directly contributed to delivery riders engaging in risky behavior, such as violating traffic rules and racing against time, which harms their health and safety. The article highlights this harm and the systemic nature of the problem caused by the AI-driven system. Although the platforms attempt to mitigate the issue with a 'wait a few minutes' feature, this does not address the root cause and is criticized as ineffective. Given the realized harm to riders' health and safety caused by the AI system's use, this qualifies as an AI Incident under the framework.

Escaping the System Trap | Interview with Zheng Guanghuai: The Core Is a Labor-Capital Issue, and Algorithms Should Put People First

2020-09-12
The Paper
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI algorithms used by delivery platforms as labor management systems that affect workers' rights and conditions, so AI system use and its social consequences are involved. However, no specific harm event caused by AI is described; the article instead analyzes systemic issues and advocates reforms. It also covers policy responses and legal considerations, which are typical of Complementary Information. No direct or indirect harm event is reported, nor is a plausible future harm event described as imminent or credible. It therefore does not meet the criteria for an AI Incident or AI Hazard, but enriches understanding of AI's societal impact and governance challenges.

Delivery Platforms' "Wait X More Minutes" Sparks Controversy: How to Crack the Efficiency vs. Safety Dilemma?

2020-09-11
ifanr
Why's our monitor labelling this an incident or hazard?
The event involves algorithmic management by food delivery platforms, which likely use AI systems to optimize delivery times and performance metrics. The safety issues faced by delivery workers stem from the use of these AI-driven algorithms imposing efficiency pressures. However, the article focuses on the controversy and the platforms' responses rather than a specific incident of harm caused by AI or a direct hazard. Therefore, this is complementary information about societal and governance responses to AI-related labor issues, not a direct AI incident or hazard.

Your Outrage Arrives Faster Than the Takeout

2020-09-09
Baidu.com
Why's our monitor labelling this an incident or hazard?
While the article references algorithmic management and the pressures it creates for delivery workers, it does not report a concrete incident of harm caused by an AI system, nor does it identify a specific AI hazard event. The discussion is more about systemic social issues and moral reflections rather than a particular AI Incident or Hazard. Therefore, it is best classified as Complementary Information, providing context and societal response to AI-driven platform work conditions without detailing a specific incident or hazard.

China's Meituan and Ele.me tackle backlash against demands on couriers

2020-09-10
Financial Times News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that Meituan and Ele.me use algorithms to set delivery time limits, which directly influence courier behavior and have contributed to delivery-related accidents and driver penalties. These algorithms qualify as AI systems because they perform complex real-time decision-making to optimize delivery efficiency. The harms include injury risks to couriers (harm to health) and economic harm due to penalties. Since these harms have occurred and are directly linked to the AI systems' use, this event qualifies as an AI Incident. The companies' adjustments to the algorithms are responses to these harms but do not negate the incident classification.

Driven by Algorithms, Food Giants' Delivery Riders Win Small Reprieve

2020-09-10
caixinglobal.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered algorithms that autonomously generate delivery plans and optimize order distribution, which directly influence riders' behavior and safety. The resulting harm includes physical injury and death from road accidents caused by riders rushing to meet AI-imposed deadlines. This constitutes direct harm to persons caused by the use of AI systems, meeting the definition of an AI Incident. The article also discusses company responses, but these do not negate the realized harm caused by the AI systems' operation.

2020-09-15
Xinhuanet
Why's our monitor labelling this an incident or hazard?
The article centers on the societal and ethical implications of algorithmic management in food delivery platforms, discussing potential harms and calling for regulatory and systemic changes. It does not describe a specific AI Incident where harm has already occurred, nor does it present a particular AI Hazard event with imminent risk. Instead, it provides a broad analysis and advocacy for improvements and oversight, which fits the definition of Complementary Information as it enhances understanding of AI's societal impacts and governance needs without reporting a new incident or hazard.

2020-09-14
新华网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of algorithmic systems (AI-based) in setting delivery time expectations and performance metrics that pressure delivery riders to speed and take risks, resulting in a high incidence of traffic accidents. This constitutes indirect harm caused by the AI system's use. The harm is realized and ongoing, meeting the criteria for an AI Incident under the framework, specifically harm to the health and safety of a group of people. The article also discusses systemic issues and responses, but its primary focus is the harm caused by the AI system's use.

薛军:数字时代需要高度关注算法规制 (Xue Jun: The digital age demands close attention to algorithm regulation)

2020-09-16
china.org.cn/china.com.cn(中国网)
Why's our monitor labelling this an incident or hazard?
The AI system (real-time delivery algorithm using AI and deep learning) is explicitly mentioned and is directly involved in setting delivery time requirements that pressure riders into unsafe practices, causing harm to their health and safety (harm to persons). This constitutes an AI Incident because the AI system's use has directly led to harm. The article also discusses the broader societal and legal implications of algorithmic decision-making, emphasizing the need for regulation and responsibility, which supports the classification as an AI Incident rather than merely a hazard or complementary information.

外卖站长,系统里的"修理工" (Delivery station managers: the "repairmen" inside the system)

2020-09-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an intelligent dispatch system (an AI system) for assigning delivery orders to riders and acknowledges its technical limitations. It describes how human site managers act as a corrective mechanism when the AI system's scheduling is suboptimal. While the system's limitations cause operational challenges and pressure on riders, there is no direct or indirect harm reported such as injury, rights violations, or significant community harm. The article focuses on describing the system's functioning, human roles, and business strategies to mitigate issues, which fits the definition of Complementary Information rather than an Incident or Hazard.

50分钟违规6次!央视外卖骑手生存调查:拼命快了,快乐了谁? (Six violations in 50 minutes! CCTV survey of delivery riders: rushing ever faster, but who is happy?)

2020-09-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI systems to manage order dispatch, route optimization, and delivery timing, which directly influence riders' behavior. The strict delivery time constraints and penalty mechanisms, driven by AI algorithms, have led to riders repeatedly violating traffic laws, resulting in numerous traffic accidents and injuries, as documented in the article. This constitutes harm to the health of a group of people (the riders), fulfilling the criteria for an AI Incident. The article provides concrete examples of harm and links it to the AI systems' use, not merely potential or hypothetical risks. The platforms' partial responses do not negate the realized harm. Hence, the classification as AI Incident is appropriate.

外卖小哥成交通事故高发群体 "合理"的算法为何"失控"? (Delivery riders have become a high-accident group: why has a "reasonable" algorithm "lost control"?)

2020-09-14
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the platforms use AI-based algorithms and big data to compress delivery times and enforce strict performance metrics, which pressure delivery riders to speed and violate traffic laws, resulting in traffic accidents and injuries. This constitutes indirect harm caused by the AI system's use. The harm is materialized and ongoing, meeting the criteria for an AI Incident. The article also discusses responses and potential improvements, but its main focus is the harm caused by the AI system's use and its consequences.

平台责任最大?如何改善骑手系统之困 (Does the platform bear the greatest responsibility? How to ease the riders' system trap)

2020-09-14
人民网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as real-time intelligent delivery algorithms that influence delivery riders' behavior and working conditions. The use of these AI systems has indirectly led to harm to people (delivery riders) through increased accidents and injuries, fulfilling the criteria for an AI Incident. The article details actual harm (injuries and deaths) linked to the AI system's use, not just potential harm, and discusses responses to mitigate these harms, confirming the classification as an AI Incident rather than a hazard or complementary information.

饿了么、美团相继回应外卖骑手生存状态 (Ele.me and Meituan respond in turn on delivery riders' working conditions)

2020-09-14
东方财富网
Why's our monitor labelling this an incident or hazard?
The AI systems involved are the algorithmic dispatch and scheduling systems that directly influence delivery riders' work pace and safety. The reported harms include frequent traffic accidents and the classification of delivery riding as a high-risk occupation, which are direct harms to the health and safety of a group of people caused by the AI system's use. Therefore, this qualifies as an AI Incident. The platforms' announcements are complementary information about mitigation but do not negate the incident classification.

外卖平台的锅,不要扔给算法 | 超级观点 (Don't pin the delivery platforms' faults on the algorithm | Super Viewpoint)

2020-09-15
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The article centers on the societal and operational challenges related to AI algorithms used in food delivery platforms but does not describe a concrete AI Incident or an immediate AI Hazard. It mainly offers perspectives, critiques, and potential solutions regarding algorithmic optimization and its effects on delivery riders. The discussion of platform responses, rider experiences, and algorithmic impacts fits the definition of Complementary Information, as it enhances understanding of AI's role and the ecosystem's dynamics without reporting a specific harmful event or a direct plausible future harm. Therefore, the classification as Complementary Information is appropriate.

张菁:外卖平台算法的背后 (Zhang Jing: Behind the delivery platforms' algorithms)

2020-09-13
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI algorithms in food delivery platforms that set strict delivery time targets, which delivery workers must meet to avoid penalties. These algorithms do not account for real-world factors like waiting times or safety, resulting in delivery workers being pressured to take risks, effectively causing harm to their health and safety. This is a direct harm caused by the AI system's use, fitting the definition of an AI Incident involving harm to a group of people. The article also highlights the ethical and human rights concerns related to this AI use, reinforcing the classification as an AI Incident.

文化风向标(9.7-9.13)|你愿意多等外卖5分钟吗? (Culture Weathervane (Sept 7-13) | Would you wait five more minutes for your takeout?)

2020-09-14
cci.ifeng.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (advanced algorithms) used by food delivery platforms to manage and pressure delivery workers, leading to hazardous conditions. The harm is realized as workers face increased risk due to the AI-driven system's demands. The platform's response to offer a consumer option to wait longer is a mitigation attempt but does not negate the harm caused by the AI system's use. Therefore, this qualifies as an AI Incident due to direct harm to a group of people (delivery workers) caused by the AI system's use.

隐藏在外卖、信息流、电商里的算法 到底有没有价值观? (Do the algorithms hidden in food delivery, news feeds, and e-commerce have values at all?)

2020-09-13
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (algorithms in delivery platforms, recommendation systems, and e-commerce) and discusses their societal effects, including labor exploitation, algorithmic bias, and user manipulation. However, it does not describe a particular AI Incident causing direct or indirect harm, nor does it report a specific AI Hazard event with plausible future harm. Instead, it provides a broad, reflective overview and critique of AI's role in society, including responses from companies and experts. This aligns with the definition of Complementary Information, which includes societal and governance responses, ongoing assessments, and contextual understanding of AI impacts. Hence, the classification as Complementary Information is appropriate.

外卖江湖"时间折叠"何以发生? (How did "time folding" come about in the food delivery world?)

2020-09-15
南方网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (delivery route and time prediction algorithms) and discusses their use and impact on delivery riders. While it highlights potential harms such as pressure on riders leading to unsafe behavior, no concrete harm or incident is reported as having occurred. The article mainly raises concerns about the plausible future harms and systemic issues related to algorithmic control and labor conditions. It also includes discussion of societal and governance responses, such as calls for regulation and platform adjustments. Therefore, the event fits best as Complementary Information, providing context, analysis, and responses related to AI systems and their societal impact, rather than reporting a specific AI Incident or AI Hazard.

谁来帮外卖骑手审查算法伦理 (Who will review algorithm ethics on delivery riders' behalf?)

2020-09-12
The Paper
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (algorithms driving delivery platforms and other digital services) that have directly led to harm: delivery workers facing dangerous conditions due to compressed delivery times enforced by AI, and elderly people being excluded from services due to reliance on digital health codes. These are clear examples of AI Incidents causing harm to people and communities. The discussion of algorithmic opacity, bias, and unfairness further supports the classification as an AI Incident rather than a mere hazard or complementary information. The article is not just about potential future harm or responses but documents ongoing harms caused by AI systems in operation.

外卖小哥成交通事故高发群体!"合理"的算法为何"失控"? (Delivery riders have become a high-accident group! Why has a "reasonable" algorithm "lost control"?)

2020-09-15
news.cctv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of AI-based algorithms in setting delivery time expectations and performance assessments that pressure delivery riders to speed and take risks, resulting in a high incidence of traffic accidents and injuries. This constitutes indirect harm caused by the AI system's use. The harm is realized and ongoing, meeting the criteria for an AI Incident. The article also discusses systemic issues and responses, but its primary focus is the harm caused by the AI system's use in the delivery platforms.

外卖骑手何处去?破解零工经济困局的三条出路 (Where do delivery riders go from here? Three ways out of the gig-economy trap)

2020-09-12
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (algorithms) used by delivery platforms to manage and control gig workers, which directly leads to harm in the form of labor rights violations and exploitation. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human and labor rights (harm category c). The discussion of legal and social responses is complementary information, but the core event is the realized harm caused by AI-driven platform algorithms. Therefore, the event is classified as an AI Incident.

外卖平台的锅,不要扔给算法 | 超级观点 (Don't pin the delivery platforms' faults on the algorithm | Super Viewpoint)

2020-09-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (algorithms used by food delivery platforms for order dispatch and route optimization) and discusses their impact on delivery workers and platform stakeholders. However, it does not describe a concrete AI Incident where harm has occurred, nor does it present a new AI Hazard with plausible future harm. Instead, it offers a detailed commentary and analysis of existing systemic challenges, platform responses, and societal implications. Therefore, it fits the definition of Complementary Information, as it enhances understanding of AI's role in this ecosystem without reporting a new incident or hazard.

客户多给的5分钟给了谁? (Who gets the extra five minutes customers grant?)

2020-09-14
hi.online.sh.cn
Why's our monitor labelling this an incident or hazard?
The dispatch system that schedules delivery times with extreme precision and pressure on riders is an AI system influencing real-time decisions. The system's use indirectly leads to harm (increased traffic accidents) due to the pressure it places on riders to meet strict deadlines. The new feature is a response but does not eliminate the underlying AI system's role in causing harm. Therefore, this event qualifies as an AI Incident because the AI system's use indirectly leads to harm to people (riders' safety).

上海交警:发现外卖平台送餐时限等设置不合理,将及时提醒 (Shanghai traffic police: platforms found setting unreasonable delivery time limits will be promptly reminded)

2020-09-14
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The 'helmet sensing tag' is an AI system that monitors helmet use and can intervene by reducing vehicle power. This is a use of AI in safety enforcement. The system is still in trial and no harm has occurred, but a malfunction could plausibly cause harm, so it qualifies as an AI Hazard. The article focuses on the system's deployment and regulatory measures rather than any incident of harm caused by AI. Therefore, the event is best classified as an AI Hazard due to the plausible future impact of the AI system on rider safety and traffic law enforcement.

C计划 | 骑手困局的舆论风波里,ta们还是缺位了 (Plan C | In the media storm over the riders' plight, they are still absent)

2020-09-12
China Digital Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (algorithmic delivery management) that influences riders' behavior and working conditions, leading to real harms including traffic accidents and health risks. The harm is linked to the AI system's use in managing delivery tasks and time pressures, which causes riders to take unsafe actions. The article also discusses the failure of platforms and government to adequately protect riders, reinforcing the connection between AI use and harm. Therefore, this event meets the criteria for an AI Incident due to injury and harm to a group of people caused by the AI system's use.

深层是劳动权益困境 (At its root, a labor-rights predicament)

2020-09-15
China Finance Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by food delivery platforms to optimize delivery times and dispatch orders. These AI systems compress delivery times, indirectly causing riders to engage in unsafe behaviors such as speeding and running red lights, resulting in physical harm and risk to their health. The harm is realized and ongoing, as described by the article. The platforms' AI-driven systems are a direct contributing factor to the harm, fulfilling the criteria for an AI Incident. The article also discusses the need for better regulation and system improvements, but the primary focus is on the harm caused by the current AI system use.

困在系统里的不止外卖骑手 (Delivery riders are not the only ones trapped in the system)

2020-09-12
enorth.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (algorithmic dispatch and scheduling) that directly influence delivery workers' behavior and working conditions, leading to harm such as unsafe driving and labor rights issues. The article describes realized harm to workers' health and rights due to the AI system's use, qualifying it as an AI Incident. The platform responses are complementary information but do not negate the incident classification.

经济日报:解决骑手困局重在以人为中心 (Economic Daily: solving the riders' predicament means putting people first)

2020-09-14
中国经济网
Why's our monitor labelling this an incident or hazard?
The article centers on the systemic issues related to algorithmic management in the food delivery industry, emphasizing the potential risks and harms to delivery workers' safety and rights due to the current design and use of AI-driven assessment systems. While it discusses plausible future harms and the need for regulatory intervention, it does not describe a realized harm or a specific event where AI caused direct or indirect harm. Therefore, it fits the definition of an AI Hazard, as the situation could plausibly lead to harm even though no incident has yet been reported.

央视外卖骑手生存调查:拼命快了 快乐了谁? (CCTV survey of delivery riders: rushing ever faster, but who is happy?)

2020-09-16
中国经济网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of delivery platform algorithms and data-driven systems in pressuring riders to deliver within tight time limits, causing them to violate traffic laws and increasing accident risks. The AI systems' use in scheduling, timing, and penalty enforcement is central to the riders' hazardous working conditions and the resulting traffic incidents and injuries. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to people through increased traffic accidents and unsafe work conditions.

追赶时间的外卖骑手,如何挣脱系统困局 (Delivery riders racing the clock: how to break free of the system's trap)

2020-09-12
中国经济网
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI-based algorithms to optimize delivery times and assign orders, which directly influence the riders' behavior and working conditions. The article highlights that these algorithms compress delivery times based on aggregated rider performance data, creating a vicious cycle of increasing speed and risk. This systemic effect leads to riders frequently violating traffic laws to avoid penalties, posing harm to their own safety and that of others. The AI system's development and use are thus directly linked to harm to persons (riders and pedestrians), fulfilling the criteria for an AI Incident. The article also discusses responses and mitigation efforts, but the primary focus is on the ongoing harm caused by the AI system's operational use.

新民快评|外卖骑手,到底困在哪里? (Xinmin Quick Comment | Where exactly are delivery riders trapped?)

2020-09-14
新民网 - 为民分忧 与民同乐
Why's our monitor labelling this an incident or hazard?
The delivery platform's algorithmic system qualifies as an AI system because it uses algorithmic decision-making to set delivery time targets and evaluate rider performance. The system's use directly leads to harm by incentivizing dangerous behaviors that increase the risk of injury or death to riders, fulfilling the criteria for an AI Incident involving harm to health. The article provides data on traffic violations and rider casualties linked to the system's pressure, confirming realized harm rather than potential harm.

谁偷走时间?谁争分夺秒?谁守护安全?探访外卖骑手之困! (Who steals the time? Who races the clock? Who guards safety? Inside the delivery riders' plight!)

2020-09-14
青岛新闻
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of algorithmic dispatch and time management systems that dictate delivery schedules. These AI systems' use has directly led to harm to the health and safety of delivery riders, as they are pressured to speed and violate traffic laws to meet AI-imposed deadlines. The article documents real-world consequences including traffic accidents and unsafe behaviors, fulfilling the criteria for an AI Incident. The platform responses and public debate are complementary but do not negate the realized harm caused by the AI system's use.

追赶时间的外卖骑手,如何挣脱系统困局 (Delivery riders racing the clock: how to break free of the system's trap)

2020-09-12
沈阳日报
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI algorithms to optimize and compress delivery times based on aggregated rider performance data, which leads to systemic pressure on riders to deliver faster, often at the expense of safety. This pressure results in riders engaging in risky behaviors such as running red lights and speeding, causing actual harm or risk of harm to themselves and others. The article documents these harms and the platforms' partial responses, confirming that the AI system's use has directly and indirectly led to harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's role is pivotal in causing health and safety harms.

也谈骑手困于系统:背后的企业价值观才是解题关键 (More on riders trapped in the system: the corporate values behind it are the key to a solution)

2020-09-14
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based intelligent dispatch systems ('real-time intelligent delivery systems' using big data, AI, and deep learning) that optimize delivery routes and order assignments. These systems increase efficiency but also impose strict performance metrics and high workloads on riders, leading them to engage in risky behaviors such as traffic violations, which have caused accidents and injuries. This constitutes indirect harm caused by the AI system's use. The article also discusses the systemic nature of the problem and the need for more humane corporate values and system design, but the core issue is the AI system's role in causing harm through its operational impact on riders. Hence, this qualifies as an AI Incident.

人是算法的尺度 算法不是商业决胜之根本 (People are the measure of algorithms; algorithms are not the foundation of business success)

2020-09-13
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly references algorithms used in food delivery platforms that impact the working conditions and safety of delivery riders. It describes real harms experienced by these workers, including risks to their health and safety due to algorithm-driven pressures. The discussion centers on the use of AI systems (algorithms) in a way that has directly led to harm (worker exploitation, safety risks). Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly caused harm to a group of people (delivery workers).

追赶时间的外卖骑手 如何挣脱系统困局 (Delivery riders racing the clock: how to break free of the system's trap)

2020-09-13
bbrtv.com
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI algorithms to optimize delivery times based on aggregated rider performance data, which leads to progressively shorter delivery windows. This algorithmic pressure causes riders to engage in risky behaviors like running red lights and speeding, which harms their physical safety and potentially that of others. The article documents these harms as ongoing and directly linked to the AI system's operation and optimization goals. Therefore, this is an AI Incident involving harm to people due to the AI system's use and its impact on labor conditions and safety.

外卖算法、泰勒制和囚徒困境 (Delivery algorithms, Taylorism, and the prisoner's dilemma)

2020-09-13
caiweiming.blog.caixin.com
Why's our monitor labelling this an incident or hazard?
The delivery time algorithms are AI systems that set increasingly shorter delivery times, directly pressuring riders to speed and take risks, leading to more traffic accidents and injuries. This is a clear case of indirect harm to health caused by the AI system's use. The article describes actual harm occurring, not just potential harm, and the AI system's role is pivotal in creating unsafe conditions. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

“困在系统里的骑手”引发热议 外卖江湖“时间折叠”如何发生? ("Riders trapped in the system" sparks heated debate: how does "time folding" happen in the delivery world?)

2020-09-15
南方网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as algorithms used for route optimization and delivery time prediction in food delivery platforms. The use of these AI systems has indirectly led to harm to the health and safety of delivery riders, as they are pressured to meet unrealistic delivery times, sometimes risking their lives. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm (harm category a). The article also discusses responses and potential improvements, but the primary focus is on the realized harm caused by the AI system's operation, not just potential harm or complementary information. Therefore, the classification is AI Incident.

外卖骑手"困在系统里"该怎么解决?专家建议 (How to free delivery riders "trapped in the system"? Experts weigh in)

2020-09-15
网易
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: the dispatch and routing algorithms that control delivery riders' work. The harms include labor rights violations and risks to rider safety caused by the AI system's use and design. The platforms' algorithmic management directly leads to these harms, making this an AI Incident. The discussion of potential improvements and governance mechanisms is complementary but does not negate the realized harms. Therefore, the classification is AI Incident.