Bias in UK's AI System for Detecting Benefits Fraud


The information displayed in the AIM (the OECD AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

The UK's AI system for detecting welfare fraud, used by the Department for Work and Pensions, has been found to treat claimants unfairly based on age, disability, marital status, and nationality. Internal assessments revealed significant bias, potentially violating human rights by disproportionately selecting certain groups for fraud investigation.[AI generated]

Why's our monitor labelling this an incident or hazard?

An automated, ML-based system used in real-world decision-making has demonstrably produced biased outcomes that harm individuals’ rights by subjecting them to unwarranted scrutiny. This represents a realized discrimination harm tied directly to the system’s use.[AI generated]
AI principles
Fairness, Respect of human rights, Transparency & explainability, Accountability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Reputational, Public interest

Severity
AI incident

Business function
Compliance and justice, Monitoring and quality control

AI system task
Event/anomaly detection, Forecasting/prediction, Organisation/recommenders


Articles about this incident or hazard


Revealed: bias found in AI system used to detect UK benefits fraud

2024-12-06
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
An automated, ML-based system used in real-world decision-making has demonstrably produced biased outcomes that harm individuals’ rights by subjecting them to unwarranted scrutiny. This represents a realized discrimination harm tied directly to the system’s use.

Britain's AI-based benefit fraud detector unfairly targets minorities

2024-12-06
Boing Boing
Why's our monitor labelling this an incident or hazard?
The article describes a live machine-learning programme used for universal credit fraud detection that has demonstrably produced ‘statistically significant outcome disparities’ against minority groups. This use of the AI’s outputs directly harms lawful benefit claimants, violates their rights, and constitutes realised discriminatory impact. Therefore it is an AI Incident.

Revealed: bias found in AI system used to detect UK benefits fraud

2024-12-06
AOL.com
Why's our monitor labelling this an incident or hazard?
An AI system in active deployment has already shown ‘statistically significant outcome disparities’ that result in unequal treatment of individuals based on protected characteristics. This constitutes a violation of rights through biased decision-support, i.e. an AI Incident under the framework (human rights violation/discrimination).

Revealed: bias found in AI system used to detect UK benefits fraud

2024-12-06
The Guardian
Why's our monitor labelling this an incident or hazard?
The machine-learning fraud-detection tool is actively influencing real-world decisions—wrongly recommending investigations into benefit claims—and has been shown to treat protected groups unfairly. This represents a realized violation of human rights and discriminatory treatment, meeting the definition of an AI Incident.

Bias found in DWP AI system used to detect UK benefits fraud

2024-12-06
Birmingham Mail
Why's our monitor labelling this an incident or hazard?
The AI system is in active use by the Department for Work and Pensions, and an internal “fairness analysis” has documented statistically significant outcome disparities that disproportionately flag certain demographic groups for investigation. This constitutes a realized harm (discrimination and unequal treatment) stemming from the AI’s decision-making, classifying it as an AI Incident. A minimal, illustrative sketch of this kind of disparity test appears below.
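For context, a "statistically significant outcome disparity" of the kind cited in the fairness analysis is typically established by comparing the rate at which a model flags each demographic group for investigation and testing whether the gap could plausibly be due to chance. The sketch below is purely illustrative: it uses invented counts and a generic two-proportion z-test built on the Python standard library, not the DWP's actual data or methodology, which the articles do not describe.

# Illustrative only: invented counts, not DWP data or methodology.
# A two-proportion z-test of the kind that can underpin a claim of
# "statistically significant outcome disparities" in who gets flagged.
from statistics import NormalDist

def flag_rate_disparity(flagged_a, total_a, flagged_b, total_b):
    """Return (rate_a, rate_b, z, two-sided p-value) for the gap in flag rates."""
    rate_a = flagged_a / total_a
    rate_b = flagged_b / total_b
    # Pooled rate under the null hypothesis that both groups are
    # flagged at the same underlying rate.
    pooled = (flagged_a + flagged_b) / (total_a + total_b)
    se = (pooled * (1 - pooled) * (1 / total_a + 1 / total_b)) ** 0.5
    z = (rate_a - rate_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_a, rate_b, z, p_value

# Hypothetical numbers: group A flagged at 8%, group B at 5%.
ra, rb, z, p = flag_rate_disparity(800, 10_000, 500, 10_000)
print(f"flag rates {ra:.1%} vs {rb:.1%}, z = {z:.1f}, p = {p:.2g}")
# A very small p-value means the gap is unlikely to be chance alone,
# which is the sense in which a disparity is "statistically significant".

A real fairness analysis would go further (multiple groups, intersectional categories, adjustment for legitimate base-rate differences), but the core question is the same: does the model's selection rate differ by protected characteristic more than chance would explain?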

AI bias uncovered in UK welfare system

2024-12-09
Computing
Why's our monitor labelling this an incident or hazard?
The AI system is in use to detect benefits fraud and is shown to have significant biases that disproportionately affect vulnerable groups. The harm is realized: individuals are being wrongly flagged and subjected to intrusive investigations, causing financial and social harm. This meets the criteria for an AI Incident because the system's use has directly led to violations of rights and harm to communities. The article also raises systemic issues of transparency and oversight, but its primary focus is the realized harm caused by the biased system.

UK's AI system for welfare fraud detection faces criticism over bias and transparency

2024-12-09
Tech Monitor
Why's our monitor labelling this an incident or hazard?
The AI system is used for welfare fraud detection, which involves complex decision-making and risk assessment. The criticisms concern documented bias and a lack of fairness assessments, which relate to violations of rights and harm to individuals or groups, and because the system's outputs influence human decisions, its use has directly or indirectly contributed to that harm. Since the event reports documented bias and transparency failures rather than merely potential risks, it qualifies as an AI Incident rather than an AI Hazard or Complementary Information.

DWP warning as bank account checks begin but target 'the wrong people'

2025-01-06
Birmingham Mail
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (machine learning models) used for automated bank account checks to detect fraud, which could plausibly lead to harm such as wrongful suspicion or criminalization of innocent claimants. Since no actual harm has been reported but there is a credible risk of future harm due to misidentification and misuse, this qualifies as an AI Hazard. The article does not describe a realized AI Incident or a response to a past incident, nor is it unrelated or merely general AI news.

DWP confirms date for Fraud, Error and Debt Bill as £35bn lost since Covid

2024-12-18
Mirror
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to identify potential fraud cases, indicating AI system involvement. However, the final decision is always human-made, and no actual harm or incident caused by AI is reported. The concerns about discrimination and fairness suggest potential risks, but these are not described as having materialized into harm. Therefore, this event does not qualify as an AI Incident or AI Hazard. Instead, it provides information about governance, oversight, and policy responses related to AI use in fraud detection, fitting the definition of Complementary Information.

DWP Fraud, Error and Debt Bill bank account update as new powers to be debated in Parliament

2024-12-18
Manchester Evening News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to flag potential fraud but clarifies that human decision-making remains final. There is no report of actual harm, such as wrongful denial of benefits or discrimination incidents caused by AI. The concerns raised are about potential bias and governance, which are being addressed through parliamentary debate and oversight. The event is primarily about policy and governance updates related to AI use, not about an AI incident or hazard causing or plausibly causing harm. Hence, it fits the definition of Complementary Information, providing context and updates on AI system use and governance without describing a new incident or hazard.

New DWP update on plans to reduce fraud and error in the benefits system

2024-12-17
Daily Record
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used for fraud detection, which is an AI system involvement. However, there is no indication that the AI tools have directly or indirectly caused harm yet; rather, concerns about bias and discrimination are raised, and safeguards such as human final decision-making are emphasized. The event is primarily about policy updates, governance, and plans to reduce fraud and error, not about an actual incident or realized harm. Therefore, this is Complementary Information providing context and updates on AI use and governance in a public system.

DWP announces bank account checks on benefit claimants will start in 2025

2024-12-17
Coventry Telegraph
Why's our monitor labelling this an incident or hazard?
The article involves an AI system used for fraud detection in welfare benefits, which is a clear AI system involvement. However, the event is about the planned use of AI and concerns about bias, with no reported realized harm or incidents caused by the AI system. The human decision-making safeguard is emphasized. Therefore, this is not an AI Incident (no harm realized) nor an AI Hazard (no explicit plausible future harm beyond general concerns). Instead, it is a governance and societal response discussion about AI use and fairness, fitting the definition of Complementary Information.