UK Home Office Scraps Discriminatory Visa Algorithm After Legal Challenge

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The UK Home Office discontinued its AI-powered visa application screening algorithm after legal action by Foxglove and the Joint Council for the Welfare of Immigrants. The system, which used nationality as a key factor, led to discriminatory outcomes against applicants from poorer and non-white countries, violating equality and human rights laws.[AI generated]

Why's our monitor labelling this an incident or hazard?

The algorithm is an AI system used to process visa applications automatically and assign risk categories based on input data including nationality. Its use directly led to discriminatory outcomes against certain nationalities, constituting harm through violation of rights and institutional racism. This meets the definition of an AI Incident because the AI system's use has directly led to harm in the form of racial discrimination and breach of equality laws. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in causing this harm.[AI generated]
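Reporting on the tool describes a traffic-light risk rating driven partly by nationality, with past refusals feeding back into future ratings. The actual Home Office code has never been published; the following is a purely illustrative sketch, with hypothetical names, scores, and thresholds, of how such a design can entrench discrimination:

```python
# Illustrative sketch ONLY: a nationality-weighted "streaming" classifier
# with a refusal-rate feedback loop. All names, scores, and thresholds
# are hypothetical; the Home Office's actual tool was never published.
from collections import defaultdict

# Hypothetical starting "risk" scores per nationality (0.0-1.0).
nationality_risk = defaultdict(lambda: 0.2)
nationality_risk["Atlantis"] = 0.7  # a fictional "suspect" nationality

def stream(application):
    """Assign a traffic-light rating, driven largely by nationality."""
    score = nationality_risk[application["nationality"]]
    if score >= 0.6:
        return "Red"    # intensive scrutiny, slower processing
    if score >= 0.4:
        return "Amber"
    return "Green"      # light-touch review

def record_outcome(nationality, refused):
    """Feedback loop: each refusal raises that nationality's future risk
    score, so past bias feeds forward into future ratings."""
    if refused:
        nationality_risk[nationality] = min(
            1.0, nationality_risk[nationality] + 0.05
        )

# Red-streamed applicants face more scrutiny and are refused more often,
# and each refusal pushes their nationality's score even higher.
app = {"nationality": "Atlantis"}
print(stream(app))                   # "Red"
record_outcome("Atlantis", refused=True)
print(nationality_risk["Atlantis"])  # higher than before the refusal
```

The sketch shows the self-reinforcing loop critics highlighted: a nationality flagged as risky is scrutinised more, refused more, and thereby flagged as riskier still, regardless of any individual applicant's merits.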
AI principles
Fairness, Respect of human rights, Accountability, Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Compliance and justice

AI system task
Organisation/recommenders

Articles about this incident or hazard

Home Office drops 'racist' algorithm from visa decisions

2020-08-04
Yahoo News
Why's our monitor labelling this an incident or hazard?
The algorithm is an AI system used to process visa applications automatically and assign risk categories based on input data including nationality. Its use directly led to discriminatory outcomes against certain nationalities, constituting harm through violation of rights and institutional racism. This meets the definition of an AI Incident because the AI system's use has directly led to harm in the form of racial discrimination and breach of equality laws. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in causing this harm.
UK commits to redesign visa streaming algorithm after challenge to 'racist' tool

2020-08-04
Yahoo News
Why's our monitor labelling this an incident or hazard?
The visa streaming tool is an AI system that automatically grades visa applications using a risk rating algorithm. Its use has directly led to discriminatory harm against applicants based on nationality, violating equality and human rights laws. The legal challenge and government response confirm the AI system's role in causing harm. Therefore, this qualifies as an AI Incident due to realized violations of rights and discriminatory harm caused by the AI system's outputs and use in decision-making.
Home Office to scrap 'racist algorithm' for UK visa applicants

2020-08-04
The Guardian
Why's our monitor labelling this an incident or hazard?
An AI system (the 'streaming algorithm') was used in visa application processing, and its operation has been linked to discriminatory outcomes, which constitute violations of human rights and potentially labor rights. The decision to scrap the algorithm is a direct response to these harms. Since the algorithm's use has directly led to harm through biased decision-making affecting visa applicants, this qualifies as an AI Incident. The event involves the use and malfunction (biased operation) of an AI system causing harm to individuals' rights, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Home Office drops 'biased' visa algorithm

2020-08-04
Financial Times News
Why's our monitor labelling this an incident or hazard?
The algorithm is an AI system used in visa application decisions. Its use led to discriminatory outcomes against certain nationalities, constituting a violation of human rights and discrimination (harm category c). The harm has already occurred as the system was in use since 2015 and caused biased visa refusals. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm through racial discrimination. The article focuses on the harm caused and the response to it, not just potential or future harm or general AI news.
UK agrees to redesign 'racist' algorithm that decides visa applications

2020-08-04
CNET
Why's our monitor labelling this an incident or hazard?
The algorithm is an AI system used to classify visa applications based on risk, including nationality, which has led to discriminatory treatment of applicants, a violation of rights under the UK Equality Act. The harm is realized, as the algorithm's outputs have influenced visa decisions affecting people's lives and opportunities. The legal challenge and suspension of the tool further confirm the direct link between the AI system's use and harm. Hence, this is an AI Incident due to violations of human rights and discriminatory harm caused by the AI system's use.
Home Office scraps 'racist' visa algorithm

2020-08-04
The Independent
Why's our monitor labelling this an incident or hazard?
The algorithm is an AI system used to assign risk ratings to visa applicants, influencing decisions that have led to discriminatory outcomes. The harm is realized as racial discrimination and disproportionate visa refusals, which constitute violations of human rights and legal obligations. The Home Office's acknowledgment of the issue and decision to suspend and redesign the system further supports the presence of harm caused by the AI system's use. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use in decision-making.
UK to stop using 'racist' visa algorithm after legal challenge

2020-08-04
Euronews English
Why's our monitor labelling this an incident or hazard?
The visa streaming tool is an AI system that grades visa applications using a traffic light risk rating, which is a form of algorithmic decision-making. The system's use has directly led to discriminatory treatment of applicants based on nationality, a violation of human rights and labor rights protections. The NGOs' legal challenge and the government's decision to stop using the algorithm confirm that harm has occurred. The harm is not hypothetical but realized, as applicants from certain countries faced more scrutiny, delays, and higher refusal rates. This fits the definition of an AI Incident because the AI system's use directly caused violations of rights and harm to people.
U.K. Immigration Lawyers Fought a Racist Algorithm and Won

2020-08-05
VICE
Why's our monitor labelling this an incident or hazard?
The automated decision-making system (Streaming Tool) is an AI system used to classify visa applicants by risk level. Its use led to a legal complaint alleging discriminatory outcomes, which implies violations of rights (potentially human rights or labor rights). The suspension of the system indicates that harm or risk of harm was realized or at least strongly evidenced. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a violation of rights and prompted legal action and policy change.
UK visa: UK suspends digital visa processing tool amid allegations of racism

2020-08-04
The Financial Express
Why's our monitor labelling this an incident or hazard?
The AI system (the Streaming Tool) was used to assign risk scores to visa applicants, influencing visa decisions. The tool's design led to discriminatory outcomes against certain nationalities, constituting a violation of rights and racial discrimination, which are harms under the AI Incident definition. The harm has already occurred as applications were unfairly scrutinized and refused. Therefore, this qualifies as an AI Incident. The article also discusses the suspension and redesign of the tool, but the primary focus is on the discriminatory impact and legal challenge, not just a governance response, so it is not merely Complementary Information.
UK ditches visa algorithm accused of creating 'speedy boarding for white people'

2020-08-04
The Next Web
Why's our monitor labelling this an incident or hazard?
The algorithm is an AI system used in visa application processing, and its use has directly led to discriminatory harm against applicants from certain nationalities, which is a violation of rights under the Equality Act. The harm is realized and systemic, as described by the advocacy groups and the government's response to scrap the tool. Therefore, this event meets the criteria for an AI Incident due to the direct link between the AI system's use and harm to people through discrimination.
The UK is dropping an immigration algorithm that critics say is racist

2020-08-05
MIT Technology Review
Why's our monitor labelling this an incident or hazard?
An AI system (the immigration algorithm) is explicitly involved in processing visa applications. Its use has directly led to harm by creating a racially biased system that disadvantages people of color and those from poorer countries, which is a violation of human rights and labor rights protections. The harm is realized, not just potential, as the system has been in use since 2015 and has affected visa application outcomes. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and discriminatory harm.
UK suspends digital visa processing tool amid allegations of racism

2020-08-04
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The AI system (the Streaming Tool) was used in visa processing and assigned risk ratings that influenced visa outcomes. The tool's design and operation led to discriminatory treatment of applicants from certain nationalities, amounting to racial discrimination and a breach of legal protections. This is a direct harm to human rights and fundamental rights, fulfilling the criteria for an AI Incident. The suspension and redesign of the tool are responses to this harm but do not negate the fact that harm occurred during its use.
World News | UK Suspends Digital Visa Processing Tool Amid Allegations of Racism | LatestLY

2020-08-04
LatestLY
Why's our monitor labelling this an incident or hazard?
The AI system (the Streaming Tool) was used in visa processing and assigned risk ratings that disproportionately and unfairly targeted certain nationalities, leading to discriminatory outcomes and breaches of the UK's Equality Act 2010. This constitutes a violation of human rights and labor rights protections, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as evidenced by the legal challenge and suspension of the tool.
Home Office drops 'racist' visa algorithm

2020-08-05
Computer Weekly
Why's our monitor labelling this an incident or hazard?
The visa streaming algorithm is an AI system that assigns risk scores influencing visa application outcomes. Its use has been challenged for racial discrimination, which is a violation of fundamental rights. The Home Office's decision to suspend and redesign the system acknowledges the harm caused. Since the AI system's use has directly led to discriminatory harm against migrants, this qualifies as an AI Incident under the framework's criteria for violations of human rights and discriminatory harm caused by AI systems.
Home Office to scrap algorithm which secretly assigns 'risk score' to some nationalities by design

2020-08-04
inews.co.uk
Why's our monitor labelling this an incident or hazard?
The algorithm is an AI system that assigns risk scores influencing visa decisions. It has been shown to discriminate against certain nationalities, causing harm to individuals by unfairly denying visas, which constitutes a violation of rights and discriminatory harm. The harm is realized, not just potential, as visa refusals have occurred, impacting people's lives. The Home Office's acknowledgment and decision to scrap the tool further confirm the incident's significance. Hence, this event meets the criteria for an AI Incident.
UK to end controversial visa screening algorithm, rights group says

2020-08-04
POLITICO
Why's our monitor labelling this an incident or hazard?
An AI system (the visa screening algorithm) was used in decision-making for visa applications. Its use led to discriminatory outcomes favoring applicants from predominantly white countries, which is a violation of rights and thus harm has occurred. The discontinuation of the algorithm is a response to this harm. Therefore, this event qualifies as an AI Incident because the AI system's use directly led to a breach of obligations intended to protect fundamental rights (non-discrimination).
Home Office to shelve "racist" visa algorithm following landmark legal challenge - NS Tech

2020-08-04
NS Tech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an algorithmic decision system used to score visa applications) whose use directly caused harm in the form of racial discrimination and violation of equality rights. The harm is realized and ongoing, as the system has been in use since 2015 and affected visa applicants. The legal challenge and subsequent shelving of the algorithm confirm the system's role in causing harm. Therefore, this is an AI Incident due to violations of human rights and discriminatory harm caused by the AI system's outputs and feedback loop.
Home Office to scrap 'racist' AI tool for visa applications | IT PRO

2020-08-04
IT PRO
Why's our monitor labelling this an incident or hazard?
The AI system in question was used to make automated decisions on visa applications, assigning risk ratings that influenced outcomes. The system's use of racially biased data and resulting discriminatory impact on applicants constitutes harm to human rights and communities. The Home Office's decision to scrap the tool acknowledges these harms. Since the AI system's use directly led to discriminatory outcomes and harm, this qualifies as an AI Incident under the framework.
Fine, We'll Stop Using Our Racist Algorithm, Sighs Home Office

2020-08-05
Gizmodo
Why's our monitor labelling this an incident or hazard?
The algorithm qualifies as an AI system because it was used to make decisions on visa applications by analyzing applicant data and refining its processes based on outcomes. The use of nationality as a factor and the feedback loop caused discriminatory harm, which constitutes a violation of human rights and equality laws. This harm has already occurred, as evidenced by the legal challenge and the decision to discontinue the algorithm. Therefore, this event is an AI Incident due to the direct involvement of an AI system in causing violations of rights and discriminatory harm.
UK commits to redesign visa streaming algorithm after challenge to 'racist' tool (Natasha Lomas/TechCrunch)

2020-08-04
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (the visa application streaming algorithm) whose use has been legally challenged due to alleged bias and racism, indicating violations of rights. The suspension of the tool follows concerns that the AI system's outputs have directly or indirectly led to harm through discriminatory treatment of visa applicants. Therefore, this qualifies as an AI Incident because the AI system's use has caused or contributed to harm related to human rights violations.
The UK is dropping an immigration algorithm that critics say is racist (Will Heaven/Technology Review)

2020-08-05
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an algorithm used to process visa applications) whose use has been criticized for racial bias, a violation of human rights. The harm is realized as the system's use has led to discriminatory treatment based on nationality, which is a protected characteristic. The ongoing litigation and the Home Office's decision to stop using the algorithm and redesign it further confirm the recognition of harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations intended to protect fundamental rights.
Home Office scraps 'racist' immigration algorithm - Personnel Today

2020-08-06
Personnel Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the 'streaming algorithm') used in immigration decisions. The system's use has directly led to discriminatory outcomes, favoring certain nationalities and races, which is a violation of human rights and labor rights protections. The Home Office's decision to discontinue the algorithm pending redesign acknowledges the harm caused. The harm is realized, not just potential, as campaigners and legal groups have challenged the system's fairness and bias. Hence, this is an AI Incident due to the AI system's use causing violations of rights and harm to communities through systemic racial bias.
U.K. To Redesign 'Racist' Visa Algorithm After Backlash

2020-08-05
Newsy
Why's our monitor labelling this an incident or hazard?
The visa processing system uses an algorithm that classifies applicants based on nationality, which is a proxy for race or ethnicity, leading to discriminatory outcomes. This constitutes a violation of rights under applicable law protecting against discrimination. The algorithm's use has directly led to harm in the form of unfair treatment and potential denial of visas to certain groups, which fits the definition of an AI Incident. The government's response to suspend and redesign the algorithm confirms the recognition of harm caused by the AI system.
Home Office to end use of 'racist algorithm' for UK visa decisions in face of legal challenge by migrants' rights group

2020-08-04
Politics Home
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the streaming algorithm) used in visa application processing. The system's use has directly led to violations of rights, specifically discriminatory treatment based on nationality, which constitutes a breach of fundamental rights and institutional racism. The decision to end the use of this algorithm follows recognition of these harms, making this an AI Incident due to realized harm from the AI system's use in government decision-making.
UK: Threat of legal challenge forces Home Office to abandon "racist visa algorithm"

2020-08-04
Statewatch
Why's our monitor labelling this an incident or hazard?
The algorithm is an AI system used to make decisions on visa applications by classifying applicants into different processing streams. The use of biased criteria such as nationality and the resulting discriminatory impact on people of color and poorer countries constitutes a violation of human rights and labor rights protections. The harm is realized and systemic, as described by the civil society groups and the Home Office's response. The judicial review and discontinuation of the system confirm the AI system's role in causing harm. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use in decision-making leading to discrimination and rights violations.
UK's 'racist' visa screening system scrapped by government after legal challenge

2020-08-04
Greatest Hits Radio (Dorset)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (an algorithmic visa application screening tool) that was used to make decisions affecting individuals' visa approvals. The system's outputs directly influenced visa decisions, leading to discriminatory harm against applicants from certain nationalities, which is a violation of human rights and equality laws. The harm has already occurred, as evidenced by the legal challenge and the government's decision to scrap the system. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and realized harm (discrimination and violation of rights).
Home Office drops 'racist' algorithm from visa decisions - The Dundee Messenger

2020-08-04
The Dundee Messenger
Why's our monitor labelling this an incident or hazard?
The algorithm is an AI system used to automatically process visa applications and assign risk ratings based on factors including nationality. The use of nationality as a factor and the feedback loop caused discriminatory treatment, which is a violation of rights and constitutes harm to individuals and communities. The harm is realized and ongoing, as the system was used for years. The Home Office's decision to stop using the system and redesign it is a response to this harm. Therefore, this event meets the criteria for an AI Incident due to violations of rights and discriminatory harm caused by the AI system's use.
Home Office drops 'racist' algorithm from visa decisions - BBC News - Quinta's weblog

2020-08-07
Quinta’s weblog
Why's our monitor labelling this an incident or hazard?
The algorithm qualifies as an AI system because it automatically processes visa applications and assigns risk categories based on multiple inputs, including nationality. Its use directly led to discriminatory harm against applicants from certain nationalities, constituting a violation of human rights and equality laws. The harm is realized and ongoing until the system is suspended. Therefore, this event meets the criteria for an AI Incident due to the direct involvement of an AI system causing violations of rights and discriminatory harm.
Home Office Scraps 'Racist' Visa System Following Court Challenge

2020-08-04
HuffPost UK
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the visa application streaming algorithm) whose use directly led to discriminatory treatment of visa applicants based on nationality, a form of racial discrimination. This constitutes a violation of human rights and legal obligations, fulfilling the criteria for an AI Incident. The harm is realized and ongoing until the system is scrapped. The judicial review and subsequent decision to abandon the system confirm the AI system's role in causing harm. Hence, the classification as AI Incident is appropriate.