
A University of Washington study found that AI systems used for resume screening exhibit significant racial and gender bias, favoring white male candidates. The study tested three large language models and found that they preferred white-associated names 85% of the time and female-associated names only 11% of the time, with Black male-associated names faring worst.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) used in hiring decisions, whose outputs favor certain races and genders over others and thereby lead to discriminatory hiring practices. This is a direct harm to human rights and labor rights, as candidates are discriminated against on the basis of protected characteristics such as race and gender. The harm is realized and documented through empirical research, not merely potential. Therefore, this qualifies as an AI incident because of the direct link between the use of AI and discriminatory harm in employment.[AI generated]