Dutch Employers' Use of AI Recruitment Algorithms Leads to Discriminatory Hiring Practices


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Dutch employers widely use AI algorithms in recruitment, often via platforms like LinkedIn. Research by the College voor de Rechten van de Mens (the Netherlands Institute for Human Rights) reveals that these systems frequently cause discrimination and exclusion, particularly against women and people with disabilities, while employers remain largely unaware of the risks and rarely check their systems for fairness.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses the use of algorithmic systems (AI systems) in recruitment that cause indirect harm by discriminating against candidates based on gender, disability, or other protected characteristics. This harm is realized and ongoing, as employers are often unaware of the discriminatory effects of these AI systems. The harm falls under violations of labor and human rights. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm through discriminatory hiring practices.[AI generated]
AI principles
Fairness; Respect of human rights; Accountability; Transparency & explainability; Robustness & digital security; Human wellbeing

Industries
Business processes and support services; Media, social platforms, and marketing

Affected stakeholders
Women; Other

Harm types
Human or fundamental rights; Economic/Property

Severity
AI incident

Business function:
Human resource management

AI system task:
Organisation/recommenders


Articles about this incident or hazard


Werkgevers gebruiken vaak discriminerende algoritmes zonder dat ze het zelf doorhebben ("Employers often use discriminatory algorithms without realising it themselves")

2022-09-01
Trouw

'Sollicitatiealgoritmes kunnen leiden tot discriminatie' ("Hiring algorithms can lead to discrimination")

2022-08-31
BNR Nieuwsradio
Why's our monitor labelling this an incident or hazard?
The article explicitly refers to algorithms used in hiring that learn and apply selection patterns, leading to discriminatory outcomes. This is a direct example of AI system use causing harm through labor market discrimination, a violation of human and labor rights. The harm is realized as employers unknowingly exclude certain candidates due to algorithmic bias, fulfilling the criteria for an AI Incident. The article also calls for awareness and government intervention, but the primary focus is on the existing discriminatory impact of AI systems in recruitment.

Vrijwel alle werkgevers gebruiken algoritmes voor sollicitaties, bewustzijn risico op discriminatie en uitsluiting laag ("Virtually all employers use algorithms for job applications; awareness of the risk of discrimination and exclusion is low")

2022-09-01
Emerce
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of algorithms in recruitment and selection, which can be reasonably inferred as AI systems given their role in candidate evaluation. The low awareness of discrimination risks and lack of fairness controls indicate a plausible risk of harm (discrimination and exclusion) to candidates, which are violations of labor and human rights. Since no specific incident of harm is reported, but the conditions for potential harm are present, this constitutes an AI Hazard rather than an AI Incident. The event highlights a systemic risk that could plausibly lead to harm if unaddressed.

Mensenrechtenorganisatie waarschuwt voor algoritmes LinkedIn ("Human rights organisation warns about LinkedIn's algorithms")

2022-08-31
Techzine.nl
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of recruitment algorithms used by companies and platforms like LinkedIn. It identifies harms related to discrimination and exclusion in hiring, which are violations of labor and human rights. Although no specific incident of harm is described as having occurred, the article highlights ongoing discriminatory practices and the lack of transparency and oversight, implying that harm is already happening or very likely. Therefore, this qualifies as an AI Incident because the use of AI systems in recruitment has directly or indirectly led to violations of rights and discriminatory harm. The article also calls for regulatory and governance responses, but the primary focus is on the existing harms caused by AI recruitment algorithms.

Algoritmes domineren personeelswerving ("Algorithms dominate recruitment")

2022-08-31
Computable
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI systems (algorithms) in recruitment and selection processes, including automated assessments and targeted advertising. It highlights risks of discrimination and exclusion, which are recognized harms under the framework. However, the article does not describe a specific event where these harms have directly or indirectly occurred, nor does it report a near miss or credible imminent risk event. Instead, it presents survey data and expert commentary to raise awareness and inform stakeholders about these risks. This fits the definition of Complementary Information, as it supports understanding of AI impacts and governance without reporting a new AI Incident or AI Hazard.

Nagenoeg alle werkgevers zetten algoritmes in voor werving van personeel ("Nearly all employers use algorithms to recruit staff")

2022-09-01
executive-people.nl
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of algorithmic systems for recruitment and selection, which qualifies as AI systems. It discusses the potential for discrimination and exclusion, which are violations of labor rights and human rights, but does not report any specific cases where harm has occurred. Therefore, it does not meet the threshold for an AI Incident. It also does not describe a specific event or circumstance where harm could plausibly occur imminently, so it is not an AI Hazard. Instead, it provides complementary information about the current landscape, employer awareness, and risks related to AI use in hiring, which fits the definition of Complementary Information.