Uber Eats Settles After AI Facial Recognition Discriminates Against Driver

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Uber Eats paid a financial settlement to driver Pa Edrissa Manjang after its Microsoft-powered AI facial recognition system repeatedly failed to verify his identity, locking him out of work. The Equality and Human Rights Commission and a union supported his claim, highlighting racial bias and harm caused by the AI system.[AI generated]

Why's our monitor labelling this an incident or hazard?

The facial recognition system is an AI system used in the employment context. Its malfunction or biased performance caused direct harm to the driver by unfairly restricting his access to work opportunities, which is a violation of labor rights and potentially human rights. The payout and legal claim confirm that harm occurred. Therefore, this event qualifies as an AI Incident due to realized discriminatory harm caused by the AI system's use.[AI generated]
AI principles
Fairness · Respect of human rights · Accountability · Robustness & digital security · Transparency & explainability

Industries
Digital security · Logistics, wholesale, and retail

Affected stakeholders
Workers

Harm types
Economic/Property · Psychological · Human or fundamental rights

Severity
AI incident

Business function:
ICT management and information security

AI system task:
Recognition/object detection


Articles about this incident or hazard

Uber Eats driver wins payout over discriminatory facial recognition checks

2024-03-26
Jersey Evening Post
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system used in the employment context. Its malfunction or biased performance caused direct harm to the driver by unfairly restricting his access to work opportunities, which is a violation of labor rights and potentially human rights. The payout and legal claim confirm that harm occurred. Therefore, this event qualifies as an AI Incident due to realized discriminatory harm caused by the AI system's use.
Uber Eats driver wins payout over discriminatory facial recognition checks

2024-03-26
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition) used in employment verification. The AI system's use directly led to harm: racial discrimination and violation of labor and human rights by denying the driver access to work. The harm has materialized, as the driver was removed from the platform and had to pursue legal action. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's discriminatory outputs and lack of transparency and recourse.
Uber Eats driver wins payout over discriminatory facial recognition...

2024-03-26
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition) whose use directly led to discriminatory harm against a worker, constituting a violation of human rights and labor rights. The harm has materialized, as the driver was removed from the platform and lost work opportunities. The legal claim and settlement confirm the recognition of harm caused by the AI system. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm (discrimination and labor rights violation).
Uber Eats driver wins payout over discriminatory facial recognition checks

2024-03-26
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition) used in the employment context. The AI system's use directly led to harm: racial discrimination and violation of labor rights, as the driver was denied work access due to the AI's failure to recognize him. The legal claim and payout confirm that harm occurred. Therefore, this qualifies as an AI Incident under the definitions, specifically a violation of human rights and labor rights caused by the AI system's use.
Uber Eats courier's fight against AI bias shows justice under UK law is hard won

2024-03-28
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Uber's facial recognition ID check) whose use led to discriminatory harm against an individual, fulfilling the criteria for an AI Incident. The harm is a violation of human rights and labor rights due to racial discrimination caused by the AI system's malfunction or biased operation. The article details the direct impact on the courier's ability to work and the subsequent legal claim, settlement, and regulatory concerns. Although the system includes human review, the failure of both AI and human oversight contributed to the harm. Therefore, this is an AI Incident rather than a hazard or complementary information, as the harm has materialized and legal consequences have ensued.
Uber Eats driver wins payout over 'racist' AI-powered facial recognition checks

2024-03-26
Mirror
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system used in employment decisions. Its malfunction or biased performance caused direct harm to the driver by unfairly restricting his ability to work, which is a violation of labor and human rights. The legal action and payout confirm the harm was realized and linked to the AI system's use. Therefore, this event meets the criteria for an AI Incident due to realized harm from AI-driven discrimination.
Uber Eats driver wins payout over discriminatory facial recognition checks

2024-03-26
Express & Star
Why's our monitor labelling this an incident or hazard?
The facial recognition AI system's failure to recognize the driver, resulting in his removal from the platform and loss of livelihood, directly caused harm related to discrimination and violation of labor rights. The involvement of the Equality and Human Rights Commission and the legal claim confirm the harm's materialization. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use and its discriminatory impact.
Uber Eats settles driver's biometric ID verification discrimination case | Biometric Update

2024-03-26
Biometric Update
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system used for biometric identity verification. Its malfunction (false mismatches) led to the driver's account being locked and loss of income, which is a harm to labor rights and livelihood. The involvement of the Equality and Human Rights Commission and the App Drivers and Couriers Union, as well as the settlement, confirm the harm was realized and linked to the AI system's use. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction and discriminatory impact.
Uber Eats driver wins payout over discriminatory facial recognition checks

2024-03-26
Daily Echo Sport
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition) used in employment verification. The AI system's malfunction or biased performance led to the driver's removal from work, harming his livelihood and raising concerns about racial discrimination, a violation of human rights and labor rights. The legal claim and settlement confirm that harm occurred. The AI system's role was pivotal in causing this harm, meeting the criteria for an AI Incident.
Uber Eats driver wins payout over discriminatory facial recognition checks

2024-03-26
Kent Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (facial recognition) used by Uber Eats that directly led to harm: the driver was unfairly removed from the platform due to repeated mismatches by the AI system. This caused a violation of his rights and harm to his employment, fulfilling the criteria for an AI Incident under violations of human rights and labor rights. The legal claim and payout confirm that harm occurred, not just a potential risk. Therefore, this is classified as an AI Incident.
Uber Eats settles driver's facial recognition discrimination claim

2024-03-26
Personnel Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition software powered by AI) used in employment verification. The AI system's biased outputs led directly to the suspension of the driver, causing harm to his livelihood and constituting indirect race discrimination, a violation of human rights and labor rights. The legal claim and settlement confirm that harm occurred. The involvement of the Equality and Human Rights Commission and the discussion of AI's role in discrimination further support classification as an AI Incident. The event is not merely a potential risk or complementary information but a concrete case of harm caused by AI use.
Uber Eats driver gets payout over racially-biased face scans - GG2

2024-03-26
GG2
Why's our monitor labelling this an incident or hazard?
The facial-recognition system, powered by Microsoft AI, was used in the Uber Eats app to verify drivers. The system repeatedly failed to recognize the driver, requiring multiple selfie resubmissions and ultimately leading to his dismissal. This is a direct harm caused by the AI system's malfunction or bias, resulting in racial discrimination and labor rights violations. The involvement of the Equality and Human Rights Commission and the union, as well as the legal claim and payout, further confirm the incident's nature as an AI Incident involving realized harm.
Uber Eats courier wins payout with help of equality watchdog, after facing problematic AI checks | Equality and Human Rights Commission (EHRC)

2024-03-27
WiredGov
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition and automated verification) whose malfunction or biased operation directly caused harm to the individual by denying access to work, which is a violation of labor and human rights. The harm has already occurred and was significant enough to lead to legal action and a financial settlement. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a violation of rights and harm to the worker's livelihood.
Uber Eats courier wins payout with help of equality watchdog, after facing problematic AI checks | Equality and Human Rights Commission (EHRC)

2024-03-27
WiredGov
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-based facial recognition software for automated verification checks that directly led to the suspension of a driver from the Uber Eats platform, causing harm by loss of income and racial discrimination. The AI system's malfunction or biased operation resulted in a violation of labor rights and human rights, fulfilling the criteria for an AI Incident. The involvement of the Equality and Human Rights Commission and legal settlement further confirm the materialized harm and legal recognition of the AI system's problematic impact.
UK: Uber Eats compensates victims after facial recognition problems

2024-03-27
Aspetuck News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as facial recognition technology used for worker verification. The AI system's malfunction (failure to recognize the driver) directly led to harm—loss of income and wrongful banning—constituting a violation of labor rights and discrimination concerns. The involvement of the Equality and Human Rights Commission and legal proceedings further confirm the harm's seriousness. Hence, this is an AI Incident as per the definitions provided.
Uber Eats courier's fight against AI bias shows justice under UK law is hard won | TechCrunch

2024-03-28
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Uber's facial recognition technology based on Microsoft's system—used for automated identity verification. The system's malfunction led to wrongful account suspension and termination, causing direct harm to the courier through racial discrimination, a violation of human rights and labor rights under UK law. The harm is realized, as evidenced by the legal claim, settlement, and the detailed account of the failure of both AI and human review. This fits the definition of an AI Incident because the AI system's use directly led to harm (discrimination and loss of livelihood). The broader discussion of regulatory challenges and legal frameworks is complementary context but does not overshadow the primary incident. Therefore, the classification is AI Incident.