AI Delivery Algorithms Cause Harm to Chinese Food Couriers


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Chinese food delivery apps like Meituan and Ele.me use AI algorithms to set strict delivery times and automatically penalize drivers for perceived lateness, sometimes due to system glitches. This pressures riders into unsafe driving and results in financial penalties, directly harming workers' health, safety, and income. Authorities' interventions have had limited effect.[AI generated]

Why's our monitor labelling this an incident or hazard?

The delivery app uses an AI system to optimize routes and delivery times, which directly influences the riders' behavior and working conditions. The penalties for late delivery, determined by the algorithm, create pressure that may lead to unsafe practices, thus indirectly causing harm to the riders' health and safety. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to harm to a group of people (delivery riders).[AI generated]
AI principles
Accountability; Fairness; Human wellbeing; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability

Industries
Logistics, wholesale, and retail; Consumer services

Affected stakeholders
Workers

Harm types
Physical (injury); Economic/Property

Severity
AI incident

Business function:
Logistics

AI system task:
Goal-driven organisation


Articles about this incident or hazard


Battle the algorithms: China’s delivery riders on the edge

2021-11-07
The Advertiser
Why's our monitor labelling this an incident or hazard?
The delivery app uses an AI system to optimize routes and delivery times, which directly influences the riders' behavior and working conditions. The penalties for late delivery, determined by the algorithm, create pressure that may lead to unsafe practices, thus indirectly causing harm to the riders' health and safety. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to harm to a group of people (delivery riders).

Battle the algorithms: China's delivery riders on the edge

2021-11-07
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses algorithms used by delivery apps to allocate delivery times and impose penalties, which are AI systems managing logistics and worker performance. The algorithms' pressure on workers to meet unrealistic delivery times has caused dangerous driving and accidents, representing harm to health and safety (a). Additionally, the system's unfair penalties and lack of transparency contribute to labor rights violations (c). The harm is ongoing and directly linked to the AI system's use, qualifying this as an AI Incident rather than a hazard or complementary information.

Battle the algorithms: China's delivery riders on the edge

2021-11-07
MSN International Edition
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of algorithms by delivery apps (AI systems) that determine delivery times and penalties, which pressure riders to engage in unsafe driving practices, causing harm to their health and safety. This constitutes direct harm (injury risk) and labor rights violations (unfair penalties, low pay, lack of protections). The AI system's use is central to the harm described, fulfilling the criteria for an AI Incident. The authorities' crackdown and company responses are complementary but do not negate the realized harm. Hence, this is an AI Incident involving harm to people and labor rights violations caused by AI system use.

Battle the algorithms: China's delivery riders on the edge

2021-11-07
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI algorithms in delivery apps determine delivery times and impose penalties that incentivize risky behavior among delivery riders, leading to increased accidents and health risks. The AI system's use in setting unrealistic delivery expectations and automatic fines directly contributes to harm (injury risk) to workers. This fits the definition of an AI Incident, as the AI system's use has indirectly led to harm to a group of people (delivery riders). The event is not merely a potential risk but describes ongoing harm and regulatory responses, confirming it as an incident rather than a hazard or complementary information.

Battle the algorithms: China's delivery riders on the edge

2021-11-07
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-driven algorithms that allocate delivery times and penalties, which directly lead to dangerous driving and accidents, harming the health and safety of delivery workers. This constitutes injury or harm to a group of people (harm category a) and violations of labor rights (category c). The AI systems' development and use are central to these harms, fulfilling the criteria for an AI Incident. The article also notes ongoing regulatory responses, but the harm is current and ongoing, not merely potential or complementary information.

Battling algorithms: China's delivery drivers on edge

2021-11-07
The Jakarta Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithms (AI systems) that manage delivery timing and driver performance, which have malfunctioned or been used in ways that harm workers through unfair penalties and pressure to drive dangerously. This constitutes a violation of labor rights and harm to health and safety, fitting the definition of an AI Incident. The involvement of AI in causing these harms is direct, as the penalties and incentives stem from algorithmic decisions. Therefore, this event qualifies as an AI Incident.

Battle the algorithms: China's delivery riders on the edge

2021-11-07
The Jakarta Post
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI systems (algorithms) to monitor and evaluate delivery times, automatically penalizing workers for perceived lateness. A glitch causing inaccurate penalties, together with the algorithmic pressure to speed and break traffic rules, directly or indirectly harms the health and safety of delivery riders. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm (injury risk) to a group of people (delivery workers).

Battle the algorithms: China's delivery riders on the edge | Fin24

2021-11-07
news24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithms used by delivery apps to set delivery times and penalties, which pressure workers into unsafe driving practices, causing harm to their health and safety. The algorithms' role is pivotal in creating these unsafe conditions and labor exploitation. This fits the definition of an AI Incident because the AI system's use has indirectly led to injury or harm to persons and violations of labor rights. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard. The article is not primarily about responses or updates, so it is not Complementary Information. It is clearly related to AI systems, so it is not Unrelated.

Delivery riders in China face exploitation as sector booms

2021-11-07
Daily Sabah
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithms used by delivery apps to determine delivery times and penalties, which are AI systems influencing worker behavior and compensation. These algorithms indirectly lead to harm by encouraging dangerous driving and unfair penalties, causing injury risk and labor rights violations. Since harm is occurring and linked to AI system use, this qualifies as an AI Incident.

Battle The Algorithms: China's Delivery Riders On The Edge

2021-11-07
International Business Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI algorithms used by delivery apps to allocate delivery times and impose penalties automatically. These algorithms indirectly cause harm by pressuring drivers to speed and take risks, leading to accidents and unsafe working conditions. Additionally, the algorithms contribute to labor rights violations by unfairly penalizing workers and enabling exploitative practices. The harms are realized and ongoing, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Battle the algorithms: China's delivery riders on the edge | Malay Mail

2021-11-07
Malay Mail
Why's our monitor labelling this an incident or hazard?
The delivery platforms use AI-based algorithms to calculate delivery times and penalties, which pressure riders to speed and take risks, resulting in harm to their health and safety. The article explicitly links algorithmic decisions to dangerous driving and financial penalties, constituting direct harm caused by AI system use. Therefore, this event qualifies as an AI Incident due to injury or harm to a group of people caused by the AI system's use.

China's deliverers battle algorithms - Taipei Times

2021-11-07
Taipei Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of delivery apps' algorithms that calculate delivery times and penalties. These algorithms' use has directly led to harm to workers' health and safety (a), as well as labor rights violations (c), through unsafe working conditions and unfair compensation practices. The article reports realized harms such as accidents and worker distress, making this an AI Incident rather than a hazard or complementary information. The AI system's role is pivotal in causing these harms by enforcing unrealistic delivery schedules and penalties.

Battle the algorithms: China's delivery riders on the edge

2021-11-07
RTL Today
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI algorithms used by delivery apps to allocate delivery times and impose penalties, which directly cause harm to delivery workers by pressuring them into unsafe driving and financial penalties. This constitutes harm to health and safety (a form of injury or harm to persons). The AI system's use and its malfunction or design (unrealistic timing and penalties) are central to the harm described. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to harm to people.

Battling algorithms: Delivery drivers in China fight apps to arrive on time

2021-11-09
Phnom Penh Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithms used by delivery apps to determine delivery times and penalties, which directly influence drivers' behavior and earnings. These algorithms have led to drivers engaging in dangerous driving to avoid fines, causing harm to their health and safety (harm category a). Additionally, the unfair penalties and working conditions constitute violations of labor rights (harm category c). The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. The authorities' regulatory response is noted but does not negate the existing harm. Thus, the event is best classified as an AI Incident.

Battle the algorithms: China's delivery riders on the edge (The Malay Mail | Tech)

2021-11-07
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The Meituan app is an AI system that monitors delivery times and automatically penalizes workers based on its data. A glitch that inaccurately registered a delivery as late is a malfunction of this AI system, directly leading to financial harm to the delivery driver. This fits the definition of an AI Incident because the AI system's malfunction has directly led to harm (a financial penalty) to a person or group of people (delivery workers).

Battle the algorithms: China's delivery riders on the edge

2021-11-07
Manila Standard
Why's our monitor labelling this an incident or hazard?
The delivery apps use AI-driven algorithms to determine routes and delivery times, which directly affect riders' pay and working conditions. The system's design and use have indirectly led to harm to the health and well-being of delivery workers, including extreme distress and fatal outcomes. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to persons (harm to health and well-being).

Battle the algorithms: China's delivery riders on the edge

2021-11-07
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses algorithms used by delivery apps to set delivery times and penalties, which are AI systems influencing human behavior. The resulting pressure causes riders to engage in dangerous driving, increasing accident risk and harm to their health. This constitutes indirect harm caused by AI system use. The harm is realized, not just potential, so this qualifies as an AI Incident. The article does not focus on regulatory responses alone, so it is not merely Complementary Information. The involvement of AI and the resulting harm to workers' health and safety fits the definition of an AI Incident.

AFP - China-technology-society-employment FEATURE

2021-11-07
nampa.org
Why's our monitor labelling this an incident or hazard?
The Meituan app uses an AI system to track delivery times and enforce penalties or rewards. A glitch that inaccurately registered a delivery as late directly led to financial harm to the delivery driver. This is direct harm caused by the AI system's malfunction during its use. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Algorithms impose hellish paces on delivery workers in China

2021-11-07
infobae
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as algorithms controlling delivery schedules and order assignments. These AI systems' use directly leads to harm to the health and safety of delivery workers, who are pressured to violate traffic laws and risk injury. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a group of people. The article documents realized harm rather than just potential harm, so it is not an AI Hazard. It is not merely complementary information or unrelated news, as the core issue is the harmful impact of AI system use on workers' safety.

An algorithm to get food to customers while it is still hot endangers the lives of Chinese riders

2021-11-09
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithms that enforce delivery times, which can be reasonably inferred as AI systems managing logistics and worker performance. The use of these AI systems has directly caused harm to riders, including traffic accidents and deaths, fulfilling the criteria for injury or harm to persons. Additionally, the labor exploitation and precarious conditions linked to these AI-driven demands constitute violations of labor rights. Hence, the event is an AI Incident due to direct harm caused by the AI system's use.

Chinese delivery workers are victims of the algorithms

2021-11-07
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithms that assign delivery tasks and set delivery times, which are AI systems influencing the couriers' behavior. The harm is realized in the form of dangerous driving and stress on couriers, constituting injury or harm to health. The AI systems' role is pivotal in creating the pressure and unsafe conditions. Hence, this qualifies as an AI Incident due to indirect harm caused by the AI systems' use.

Algorithms impose hellish paces on delivery workers and an army on bicycles invades the streets of China

2021-11-07
Entorno Inteligente
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as algorithms controlling delivery schedules and order assignments. The use of these AI systems has indirectly led to harm to the health and safety of delivery workers through enforced dangerous driving and excessive work pressure. This fits the definition of an AI Incident because the AI system's use has directly or indirectly caused harm to a group of people (couriers). The article reports realized harm and ongoing risks, not just potential future harm, so it is not merely a hazard or complementary information.

Algorithms impose hellish paces on Chinese delivery workers

2021-11-07
Eje Central
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is clear as the article discusses algorithms that assign deliveries and set delivery times. The harm is realized as couriers are risking injury or death by engaging in dangerous driving behaviors to meet algorithm-imposed deadlines. This constitutes harm to the health of a group of people, fulfilling the criteria for an AI Incident. The AI system's use directly leads to unsafe working conditions and consequent harm, not just a plausible future risk. Hence, the event is best classified as an AI Incident.

Algorithms impose hellish paces on delivery workers in China and incentivize dangerous driving

2021-11-08
Noticias de Bariloche
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as algorithms controlling delivery assignments and timing, which directly influence the behavior and safety of couriers. The harm includes physical risk from dangerous driving and mental health impacts, fulfilling the criteria of injury or harm to persons. The AI system's use is a contributing factor to these harms, making this an AI Incident rather than a hazard or complementary information. The article documents realized harm, not just potential risk, and the AI system's role is pivotal in causing these harms.

Algorithms impose hellish paces on delivery workers and an army on bicycles invades the streets of China

2021-11-07
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithms that assign orders and set delivery times, which are AI systems as they perform complex decision-making and optimization tasks. The pressure from these AI systems leads to dangerous driving behaviors and has caused real harm to workers, including injury risk and a suicide incident. Therefore, the event involves the use of AI systems leading indirectly to harm to people, fitting the definition of an AI Incident under harm category (a) injury or harm to health of a group of people.

Algorithms pit delivery workers against the dangers of the street and impose hellish paces on them

2021-11-08
Es de Latino
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of algorithms by delivery platforms to set delivery times and assign orders, which directly pressures couriers to take dangerous risks on the road. This has caused actual harm to the health and safety of workers, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to persons. The AI system's role is pivotal as it enforces the delivery schedules and penalties that drive unsafe behavior. Hence, this is an AI Incident rather than a hazard or complementary information.

Food delivery riders "driven" by algorithms: when can they run orders at ease?

2021-11-29
光明网
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI-based algorithmic system used to assign delivery orders and evaluate rider performance. The system's outputs and penalties directly lead to harmful behaviors and stress among riders, including traffic violations and unsafe riding practices, which pose injury risks. The system also unfairly penalizes riders due to inaccurate location tracking and failure to consider real-world constraints, causing financial harm and psychological stress. These harms fall under injury/harm to health and violation of labor rights. The AI system's development and use are central to these harms, meeting the criteria for an AI Incident.

Workers' Daily: food delivery riders "driven" by algorithms, when can they run orders at ease?

2021-11-28
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that manages delivery logistics and performance evaluation. The system's malfunction or design flaws (e.g., inaccurate GPS positioning, unrealistic delivery time calculations, inflexible penalty mechanisms) have directly contributed to harms experienced by delivery workers, including economic penalties, unsafe behaviors, and stress. These harms fall under injury or harm to groups of people and harm to communities. Since the harm is realized and linked to the AI system's use, this qualifies as an AI Incident.

Workers' Daily: food delivery riders "driven" by algorithms, when can they run orders at ease?

2021-11-28
The Paper
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of algorithmic systems in managing delivery orders, timing, and penalties for delivery workers. The AI system's decisions cause workers to engage in risky behaviors and suffer financial penalties, which constitute harm to their health and economic well-being. The system's malfunction or rigid design (e.g., ignoring real-world constraints like weather or traffic) exacerbates these harms. Hence, the event meets the criteria for an AI Incident due to indirect harm caused by the AI system's use.

Meituan earnings call: further strengthening rider welfare protections and continuing to push for delivery algorithm transparency

2021-11-26
Techweb
Why's our monitor labelling this an incident or hazard?
The article discusses Meituan's use of AI-driven delivery dispatch algorithms and their transparency initiatives, but it does not report any incident or harm caused by these AI systems. The event is about company responses to regulatory actions and efforts to improve rider welfare and algorithm transparency. This fits the definition of Complementary Information, as it provides updates and governance responses related to AI systems without describing new harm or plausible future harm.

Food delivery riders "driven" by algorithms: when can they run orders at ease?

2021-11-29
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses algorithmic systems (AI systems) used to assign and schedule delivery tasks for riders. The harms described include physical risk from unsafe riding behaviors induced by algorithmic pressure, economic harm from fines and reduced income, and psychological stress. These harms are directly linked to the AI system's use and its failure to consider real-world constraints, thus meeting the criteria for an AI Incident. The article does not merely warn of potential harm but documents ongoing harm experienced by riders due to the AI system's operation.

Food delivery riders "driven" by algorithms: when can they run orders at ease?

2021-11-29
bbrtv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (algorithmic delivery management and routing) that controls order assignments and delivery timing. The system's outputs cause riders to engage in unsafe behaviors to avoid penalties, indicating direct harm to their health and safety. Additionally, the system's unfair evaluation and penalty mechanisms harm workers economically and violate labor rights. These harms are directly linked to the AI system's use and malfunction (e.g., ignoring real-world constraints, inaccurate location tracking). Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to injury risk and labor rights violations.

Food delivery riders "driven" by algorithms: when can they run orders at ease?

2021-11-29
news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI algorithms in dispatching and routing food delivery riders, showing how these systems directly influence riders' behaviors and working conditions. The harms described include physical injury risks from unsafe riding practices induced by algorithmic pressure, financial penalties, and psychological stress. These harms are directly linked to the AI system's operation and its failure to adapt to real-world complexities like weather, traffic, and restaurant delays. The presence of actual harm to persons and communities caused by the AI system's use meets the criteria for an AI Incident rather than a hazard or complementary information. The article also discusses governance and social responses but the primary focus is on the realized harms caused by the AI system's deployment.

Food delivery riders "driven" by algorithms: when can they run orders at ease?

2021-11-29
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI algorithms and systems used to assign delivery orders, calculate delivery times, and evaluate rider performance. The system's errors and rigid algorithms cause riders to engage in risky behaviors (e.g., running red lights, riding on pedestrian bridges), suffer financial penalties, and face unfair treatment without effective recourse. These outcomes represent direct harm to the health, safety, and labor rights of the delivery workers. The AI system's development and use are central to these harms, fulfilling the criteria for an AI Incident under the OECD framework.

Algorithms must not force delivery riders to race

2021-11-30
jjckb.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (algorithms) in food delivery platforms that directly influence delivery workers' behavior and working conditions. The algorithms' optimization for efficiency indirectly leads to harm to the health and safety of delivery workers due to increased pressure and unsafe practices. This constitutes an AI Incident because the AI system's use has directly or indirectly led to harm to a group of people (delivery workers). The article also discusses regulatory responses, but the primary focus is on the harm caused by the AI system's use, not just complementary information or potential hazards.