UK Home Office to Trial AI for Asylum Seeker Age Assessment Amid Rights Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The UK Home Office plans to trial AI-based facial age estimation to assess disputed ages of asylum seekers, aiming for rollout in 2026. Experts and watchdogs warn the technology could misclassify children as adults, risking denial of protections and raising significant human rights concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes a facial age estimation tool, which qualifies as an AI system. The event concerns the planned development and deployment of this system to assist in age verification of asylum seekers, addressing a known problem that has caused harm. Since the system is not yet in use and no harm has been reported from its deployment, the event represents a plausible future risk of harm (e.g., misclassification, bias) rather than an actual incident. It therefore fits the definition of an AI Hazard rather than an AI Incident. The article also discusses broader context and concerns but does not focus on a response or update to a past incident, so it is not Complementary Information.[AI generated]
AI principles
Fairness, Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
Children, Other

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


UK border officials to use AI to verify ages of child asylum seekers

2025-07-22
The Guardian

AI facial recognition technology to test migrants' ages

2025-07-22
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
An AI system (facial age estimation using AI-powered facial recognition) is explicitly mentioned as being developed and tested for use in age assessment of migrants. The event concerns the use of AI in a sensitive legal and human rights context, where incorrect age assessments could lead to violations of rights and harm to individuals (e.g., children denied protections). Although no harm has yet been reported, the plausible risk of harm is credible and significant. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving violations of rights and harm to vulnerable individuals.

Artificial intelligence to be trialled for disputes over asylum seekers' ages

2025-07-22
Evening Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to assist in age assessment decisions for asylum seekers, which is a task with significant legal and human rights implications. Since the AI system is not yet deployed and no harm has been reported, but the use of AI in this context could plausibly lead to violations of rights or other harms if inaccurate, this qualifies as an AI Hazard rather than an Incident. The event is not merely general AI news or a response to a past incident, so it is not Complementary Information.

Artificial intelligence to be trialled for disputes over asylum...

2025-07-22
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology (facial age estimation) to assess asylum seekers' ages, which is an AI system. The AI system is planned for trial and future use, so its involvement is in the development and intended use phase. The article highlights the difficulty and risks of age assessment, including potential harm to individuals wrongly classified as adults, which implicates violations of rights and harm to vulnerable groups. However, no actual harm caused by the AI system has yet occurred, as the system is not yet in operational use. Thus, the event fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet materialized.

AI to catch Channel migrants pretending to be children

2025-07-22
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
An AI system (facial age estimation technology) is explicitly mentioned as being developed and planned for use in assessing the age of asylum seekers. While no actual harm has yet occurred, the use of this AI system in a sensitive context could plausibly lead to harms such as violations of human rights (e.g., wrongful age classification leading to denial of protections for minors) or other significant harms related to asylum decisions. The article discusses the complexity and risks involved, including the inevitability of some wrong decisions. Since the AI system is not yet deployed and no harm has been reported, but plausible future harm is credible, this event qualifies as an AI Hazard.

Asylum seekers who lie about their age to be targeted with new checks

2025-07-22
The Telegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI facial-age estimation technology trained on millions of images to estimate the age of migrants. The technology is intended to be deployed to assist in age verification, which directly affects the rights and protections granted to asylum seekers, particularly children. While the technology is not yet deployed and no harm has been reported, the use of AI in this context could plausibly lead to harm if errors occur, such as misclassifying children as adults and denying them protections. Therefore, this event constitutes an AI Hazard due to the credible risk of harm from AI-based age estimation errors affecting vulnerable individuals.

AI to help decide whether asylum seekers are children as bombshell report unearths failings - The Mirror

2025-07-22
Mirror
Why's our monitor labelling this an incident or hazard?
The article describes the planned use of an AI system for facial age estimation in asylum seeker age assessments, which is a clear AI system involvement. However, the AI system is not yet in use or causing harm; rather, it is proposed as a replacement for previous methods. The concerns raised about accuracy, ethics, and fairness indicate potential risks that could plausibly lead to harm, such as misclassification of children as adults, which would be an AI Hazard. Since no actual harm has occurred yet, and the AI system's use is prospective, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Home Office 'pressured child migrants to say they were over 18'

2025-07-22
inews.co.uk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the planned use of an AI system (facial age estimation) for age assessment of migrants. The harms described (incorrect age assessments leading to mistreatment of child migrants) are linked to current non-AI methods, not the AI system itself, which is not yet deployed. The AI system's use is proposed to replace intrusive methods and improve assessments, but concerns about its accuracy and fairness suggest plausible future risks of harm if the AI misclassifies ages. Since no direct harm from the AI system has occurred yet, but there is a credible risk of harm from its future use, this qualifies as an AI Hazard rather than an AI Incident. The article also includes broader context and responses but does not focus primarily on governance or research updates, so it is not Complementary Information.

Artificial intelligence to be trialled for disputes over asylum seekers' ages - Jersey Evening Post

2025-07-22
Jersey Evening Post
Why's our monitor labelling this an incident or hazard?
The article discusses the planned use of an AI system for facial age estimation, which is a clear AI system involvement. However, the AI system is still in the trial phase and has not yet caused any direct or indirect harm. The concerns raised about accuracy and ethics indicate plausible risks of harm in the future, such as misclassification of ages leading to denial of rights to children, but these harms have not materialized yet. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no incident has occurred so far.

Artificial intelligence to be trialled for disputes over asylum seekers' ages

2025-07-22
Oxford Mail
Why's our monitor labelling this an incident or hazard?
The AI system (facial age estimation) is explicitly mentioned and is intended for use in age assessment of asylum seekers, a task involving complex decision-making. The article highlights that current methods are imperfect and that AI is seen as a cost-effective alternative. However, no actual harm from AI use has yet occurred; the concerns raised relate to potential inaccuracies and ethical issues that could plausibly lead to harm, such as wrongful denial of child protections. Since the AI system's deployment is planned and the harms are potential rather than realized, this fits the definition of an AI Hazard. The article does not report any direct or indirect harm caused by the AI system at this stage, nor does it focus on responses to a past incident, so it is not an AI Incident or Complementary Information.

Artificial intelligence to be trialled for disputes over asylum seekers' ages

2025-07-22
The Irish News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (facial age estimation) to assess asylum seekers' ages, which is a development and intended use of AI. No actual harm has yet occurred since the technology is still in trial and planning stages. However, the context and expert concerns highlight the plausible risk of harm, including violations of rights and incorrect age determinations that could deny protections to children. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving violations of human rights and harm to vulnerable communities. There is no indication of realized harm or malfunction at this stage, so it is not an AI Incident. The article is not merely complementary information since it focuses on the planned AI use and its potential implications rather than updates or responses to past incidents.

Home Office to trial AI to check ages of asylum seekers

2025-07-22
The National
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology for facial age estimation of asylum seekers, which qualifies as an AI system. The AI system is intended for use in assessing ages, a task that directly impacts fundamental rights and protections of vulnerable individuals. Although the AI system is not yet in use and no harm has occurred, the article highlights concerns about potential inaccuracies and ethical issues that could lead to violations of rights and harm to individuals if the system is deployed. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm to the rights and protections of asylum seekers, particularly children.

Artificial intelligence to be trialled for disputes over asylum seekers' ages

2025-07-22
Cambridge Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology for facial age estimation, which qualifies as an AI system. The AI system's intended use is to assess ages of asylum seekers, a complex and sensitive task with significant implications for human rights. Although the AI system is not yet in use, the article discusses credible concerns about potential inaccuracies and ethical issues that could lead to harm, such as children being wrongly classified as adults and denied protections. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to violations of rights and harm to vulnerable individuals in the future.

UK to tender facial age estimation for migrant assessments within weeks | Biometric Update

2025-07-23
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event involves the planned use of an AI system (facial age estimation based on biometrics) in migrant age assessments. Although no harm has yet occurred, the deployment of such AI technology in legal and social determinations could plausibly lead to harms such as violations of human rights or incorrect age classification affecting migrants' treatment. The article focuses on the upcoming tender and potential rollout, indicating a credible risk rather than an incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Over half of Channel migrants who claim to be children turning out to be adults

2025-07-24
The Sun
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Facial Age Estimation technology) that is planned to be rolled out in the future to assist with age assessments of migrants. However, there is no indication that the AI system has yet been deployed or caused any harm or incident. The article focuses on the potential future use of AI to improve current practices, which have known issues but are not directly caused by AI. Therefore, this qualifies as Complementary Information, as it provides context and updates on AI adoption in a sensitive area without describing an AI Incident or AI Hazard.

UK to use facial recognition AI to stop adult migrants posing as children

2025-07-22
BBC
Why's our monitor labelling this an incident or hazard?
The event involves the planned use of an AI system (facial age estimation AI) in a government context for age verification of migrants. The AI system's use is intended but not yet implemented, and no harm has been reported so far. The context suggests plausible future harm due to potential misclassification of ages, which could affect migrants' rights and treatment, thus fitting the definition of an AI Hazard. Since no harm has yet occurred, and the article focuses on the planned use and potential implications, it is not an AI Incident or Complementary Information. Therefore, the classification is AI Hazard.

Asylum seekers who lie about their age to be targeted with new checks

2025-07-22
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial-age estimation) in the development and use phases to make critical decisions about asylum seekers' ages. The AI system's outputs will directly influence whether individuals receive protections as children or are treated as adults, implicating fundamental human rights. The article reports existing harms from current age assessment methods and raises concerns that AI-based assessments could cause similar or new harms if inaccurate or unfair. Since the AI system is planned for deployment and could plausibly lead to violations of rights and harm to vulnerable individuals, this qualifies as an AI Hazard. There is no indication that harm has yet occurred due to the AI system itself, only potential harm. Therefore, the classification is AI Hazard.