AI Resume Screening Bias Favors White Male Candidates



A University of Washington study found that AI systems used for resume screening exhibit significant racial and gender bias, favoring white male candidates. The study tested three large language models, which preferred white-associated names 85% of the time and female-associated names only 11% of the time, with Black male-associated names faring worst.[AI generated]
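
The technique behind such findings is a name-substitution audit: hold the resume text fixed, vary only the candidate name, and count how often the model prefers each demographic group. The sketch below illustrates that general approach in Python; it is not the study's actual code, and the example names, the score_resume stub, and its random placeholder scores are assumptions standing in for real calls to a resume-ranking model.

    import random
    from collections import Counter

    # Small pools of names associated with each demographic group;
    # these are illustrative, not the study's actual name lists.
    NAMES = {
        "white_male": ["Todd Becker", "Brett Walsh"],
        "white_female": ["Emily Sullivan", "Claire Hansen"],
        "black_male": ["Darnell Robinson", "Tyrone Jackson"],
        "black_female": ["Lakisha Washington", "Tamika Brooks"],
    }

    RESUME_BODY = (
        "Software engineer with 5 years of experience in distributed "
        "systems, Python, and cloud infrastructure."
    )

    def score_resume(name: str, body: str) -> float:
        # Hypothetical stand-in for the model under test. A real audit
        # would send the name, resume, and job description to an LLM and
        # parse its relevance score; random noise keeps the sketch runnable.
        return random.random()

    def audit(trials: int = 1000) -> Counter:
        # Pairwise comparison: identical resume body, two different names,
        # tally which group's name the scorer prefers.
        wins = Counter()
        groups = list(NAMES)
        for _ in range(trials):
            g1, g2 = random.sample(groups, 2)
            n1 = random.choice(NAMES[g1])
            n2 = random.choice(NAMES[g2])
            if score_resume(n1, RESUME_BODY) >= score_resume(n2, RESUME_BODY):
                wins[g1] += 1
            else:
                wins[g2] += 1
        return wins

    if __name__ == "__main__":
        results = audit()
        total = sum(results.values())
        for group, count in results.most_common():
            print(f"{group}: preferred in {count / total:.1%} of comparisons")

With a real scorer substituted in, a skew like the 85% preference for white-associated names reported by the study would show up directly in these tallies.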

Why's our monitor labelling this an incident or hazard?

The article describes AI systems (large language models) used in hiring decisions whose outputs favor certain races and genders over others, leading to discriminatory hiring practices. This directly harms human rights and labor rights, as candidates are discriminated against on the basis of protected characteristics such as race and gender. The harm is realized and documented through empirical research, not merely potential. Therefore, this qualifies as an AI Incident due to the direct link between AI use and discriminatory harm in employment.[AI generated]
AI principles
Fairness, Respect of human rights, Transparency & explainability, Accountability, Human wellbeing

Industries
Business processes and support services

Affected stakeholders
Women, Other

Harm types
Economic/Property, Human or fundamental rights, Reputational

Severity
AI incident

Business function:
Human resource management

AI system task:
Organisation/recommenders


Articles about this incident or hazard


Study: AIs prefer white, male names on resumes, just like humans

2024-11-01
Ars Technica
Why's our monitor labelling this an incident or hazard?
While the study reveals clear harms, namely discrimination against Black- and female-associated names, it describes a controlled research experiment rather than a deployed system making real-world hiring decisions. It is a significant research finding with governance and equity implications, improving understanding of AI risks rather than documenting an actual incident or immediate hazard.

AI overwhelmingly prefers white and male job candidates in new test of resume-screening bias

2024-10-31
GeekWire
Why's our monitor labelling this an incident or hazard?
While no real hiring decisions or immediate harms were reported, the experiment clearly shows that AI resume screeners can reproduce and amplify societal biases, creating a credible risk of discriminatory practices against women and people of color. This is a case of plausible future harm rather than a documented incident in which actual hires were affected.

AI tools show biases in ranking job applicants' names according to perceived race and gender

2024-10-31
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The article describes AI systems (large language models) used in hiring decisions whose outputs favor certain races and genders over others, leading to discriminatory hiring practices. This directly harms human rights and labor rights, as candidates are discriminated against on the basis of protected characteristics such as race and gender. The harm is realized and documented through empirical research, not merely potential. Therefore, this qualifies as an AI Incident due to the direct link between AI use and discriminatory harm in employment.

AI Tools Biased in Job Applicant Name Rankings

2024-10-31
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) in hiring, where their outputs have directly led to discriminatory harm against certain racial and gender groups, including intersectional identities. The bias in resume ranking by AI systems results in violations of human rights and labor rights, fulfilling the criteria for an AI Incident. The harm is realized and documented through empirical research, not merely potential.

University of Washington Study Finds AI Bias in Hiring Processes Based on Race and Gender

2024-11-04
tun.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (large language models) used in hiring that show discriminatory preferences based on race and gender, which is a direct violation of labor and human rights. The harm is realized as these biases affect real-world hiring outcomes, disadvantaging protected groups. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of fundamental rights in employment.