Biased Algorithms Cause Discrimination in Key Decision-Making Systems


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple reports highlight how poorly designed AI algorithms have led to discriminatory outcomes in areas like hiring, healthcare, and vaccine distribution. Examples include Amazon's recruiting tool penalizing women and Stanford's vaccine algorithm disadvantaging frontline workers, demonstrating how algorithmic bias can perpetuate systemic discrimination and harm vulnerable groups.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article clearly describes harms caused by AI systems (algorithms) in decision-making that reproduce and amplify systemic bias and discrimination, which are violations of human rights and labor rights. These harms have already occurred and are ongoing, making this an AI Incident. The discussion of legislative responses is complementary information but does not overshadow the primary focus on realized harms from biased AI systems.[AI generated]
AI principles
Fairness; Accountability; Transparency & explainability; Respect of human rights; Human wellbeing

Industries
Business processes and support services; Healthcare, drugs, and biotechnology; Government, security, and defence

Affected stakeholders
Women; Workers

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

Business function:
Human resource management

AI system task:
Organisation/recommenders; Forecasting/prediction; Goal-driven organisation


Articles about this incident or hazard


OP-ED: Use of algorithms can perpetuate bias

2021-02-25
York Dispatch
Why's our monitor labelling this an incident or hazard?
The article clearly describes harms caused by AI systems (algorithms) in decision-making that reproduce and amplify systemic bias and discrimination, which are violations of human rights and labor rights. These harms have already occurred and are ongoing, making this an AI Incident. The discussion of legislative responses is complementary information but does not overshadow the primary focus on realized harms from biased AI systems.

Opinion: Use of algorithms can perpetuate bias

2021-02-25
Bangor Daily News
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (algorithms used for decision-making) and their role in causing harm through bias and discrimination, which are violations of human rights and labor rights. It references actual harms caused by such systems and the need for regulation. However, the article is an opinion piece discussing the broader issue rather than reporting a new specific incident or hazard. It provides context and advocacy for governance responses to AI harms, which fits the definition of Complementary Information rather than a new AI Incident or AI Hazard.

Concerns over biased algorithms grow as computers make more decisions

2021-02-23
CNET
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (algorithms) used in critical decision-making contexts that have produced biased and unfair outcomes, such as the Stanford vaccine allocation algorithm and the Optum Health algorithm that disadvantaged Black patients. These outcomes constitute violations of rights and harm to communities, fulfilling the criteria for an AI Incident. While the article also covers legislative responses, its primary focus is on harms already caused by biased AI algorithms in practice, not merely potential or future risks or complementary information.

Opinion: Use of algorithms can perpetuate bias

2021-02-24
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly references an AI system (the recruiting algorithm) whose use led to discriminatory outcomes against women, a violation of labor and anti-discrimination rights. This is a direct example of harm caused by the development and use of an AI system. The harm is realized, not merely potential, and the article discusses systemic bias perpetuated by AI algorithms. This therefore qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of rights and harm to communities.

Commentary: Use of algorithms can perpetuate bias

2021-02-23
ArcaMax
Why's our monitor labelling this an incident or hazard?
The article clearly describes harms caused by AI systems (algorithms) that have led to discrimination and bias against protected groups, constituting violations of human rights and labor rights. The Amazon recruiting algorithm is a concrete case in which an AI system's use produced biased outcomes that disadvantaged women. This fits the definition of an AI Incident because the AI system's use directly led to harm through discriminatory decisions. The broader discussion about the need for legal reform and transparency is complementary information, but the core content about realized harms from biased algorithms qualifies as an AI Incident.