Uber Eats Sued Over Alleged Racial Bias in Facial Recognition AI


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Pa Edrissa Manjang, a Black Uber Eats courier in London, is suing the company after being dismissed following repeated failures of its facial recognition AI, which he claims is racially biased. The AI system's alleged inability to accurately verify his identity led to his suspension, raising concerns about racial discrimination and labor rights violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The facial recognition software is an AI system used to verify driver identity. The system's repeated false negatives and the resulting account suspension indicate a malfunction or bias in its operation. This caused harm in the form of racial discrimination and labor rights violations: the driver was unfairly suspended and subjected to repeated verification attempts. The legal case and tribunal proceedings confirm that harm has occurred and is being contested. This therefore qualifies as an AI Incident, because realized harm is linked directly to the AI system's use and its discriminatory impact.[AI generated]
AI principles
Accountability, Fairness, Respect of human rights, Robustness & digital security, Transparency & explainability, Human wellbeing, Democracy & human autonomy

Industries
Food and beverages; Logistics, wholesale, and retail; Consumer services; Digital security

Affected stakeholders
Workers

Harm types
Human or fundamental rights, Economic/Property, Psychological

Severity
AI incident

Business function:
Human resource management, Monitoring and quality control

AI system task:
Recognition/object detection

Articles about this incident or hazard


Delivery driver sues Uber Eats in London over 'racist' facial recognition software | Express Digest

2022-07-26
expressdigest.com

Uber Eats treats drivers as 'numbers not humans', says dismissed UK courier

2022-07-27
The Guardian
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (facial recognition software) in the employment context. The alleged racial bias in the AI system's operation has directly led to the dismissal of a worker, which constitutes a violation of labor rights and discrimination, fitting the definition of an AI Incident under violations of human rights or labor rights. The harm is realized, not just potential, as the dismissal and legal claim are ongoing consequences of the AI system's use. Hence, this is classified as an AI Incident.

Driver sues Uber Eats over 'racist' facial recognition software

2022-07-26
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system used to verify driver identity. The driver's claim that the system is racially biased and has led to wrongful suspensions constitutes a violation of rights (race discrimination and harassment). The legal case and judge's decision to allow the claim to proceed confirm that the AI system's use has directly or indirectly led to harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Uber Eats treats drivers as 'numbers not humans', says dismissed UK courier

2022-07-27
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used by Uber Eats for worker verification. The AI system's outputs are alleged to be racially biased, leading to wrongful dismissal and racial harassment claims, which are violations of labor and human rights. The harm has already occurred, as the driver was dismissed based on the AI system's decisions. This meets the criteria for an AI Incident because the AI system's use directly led to harm (violation of rights and racial discrimination).

Uber Eats sued over 'racist' facial recognition checks

2022-07-26
The Sunday Times
Why's our monitor labelling this an incident or hazard?
The facial recognition app is an AI system used for identity verification. The alleged racial bias in the algorithm caused harm to the courier by leading to his dismissal, which is a violation of labor and potentially human rights. The AI system's use directly contributed to this harm, qualifying the event as an AI Incident under the framework.

Courier sues Uber Eats over 'racist' facial recognition dismissal

2022-07-28
UKTN (UK Tech News)
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system used for identity verification. The alleged racial bias and incorrect identification leading to dismissal directly harm the courier's labor rights and constitute discrimination, a violation of human rights. The tribunal's acceptance of the claim confirms that the AI system's malfunction or biased use has caused or contributed to harm. Hence, this qualifies as an AI Incident under the framework definitions.

Uber Eats treats drivers as 'numbers not humans', says dismissed UK courier - Pehal News

2022-07-28
Pehal News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used by Uber Eats for driver verification. The alleged racial bias in this AI system has directly led to the dismissal of a driver, which constitutes a violation of labor and human rights. The harm is realized and ongoing, as legal action is underway. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and racial discrimination).