UK Ministry of Justice's Controversial Murder Prediction AI System


The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

The UK Ministry of Justice, alongside police and government agencies, is developing an AI system to predict potential murderers by analyzing sensitive personal data. Criticized as 'chilling and dystopian' by civil rights groups like Statewatch, the project raises significant ethical, privacy, and human rights concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (an algorithmic crime prediction tool) developed by the UK Ministry of Justice that uses sensitive personal data to predict individuals' likelihood of committing homicide. The system's use of private data and profiling of innocent people as potential criminals constitutes a violation of human rights and risks reinforcing institutional racism. While no direct harm is reported as having occurred yet, the nature of the system and its intended use plausibly could lead to significant harms such as discrimination and violation of rights. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving violations of rights and harm to communities.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Reputational, Public interest

Severity
AI hazard

Business function
Compliance and justice

AI system task
Forecasting/prediction


Articles about this incident or hazard


UK's 'chilling' criminal detection algorithm

2025-04-11
Euro Weekly News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (an algorithmic crime prediction tool) developed by the UK Ministry of Justice that uses sensitive personal data to predict individuals' likelihood of committing homicide. The system's use of private data and profiling of innocent people as potential criminals constitutes a violation of human rights and risks reinforcing institutional racism. While no direct harm is reported as having occurred yet, the nature of the system and its intended use plausibly could lead to significant harms such as discrimination and violation of rights. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving violations of rights and harm to communities.

'Chilling and dystopian': Britain goes full 'Minority Report' as pre-crime programme uncovered

2025-04-09
We Got This Covered
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (algorithmic murder prediction) used by the government to analyze personal data for risk assessment. While no arrests or direct harm have been reported, the system's design and potential application could plausibly lead to violations of rights and harm to individuals and communities through biased profiling and preemptive restrictions. Therefore, this qualifies as an AI Hazard due to the credible risk of future harm stemming from the AI system's use and potential misuse.

UK developing 'predictive tool' to determine if someone will become a killer

2025-04-09
Straight Arrow News
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: a predictive tool using data-driven models to forecast future criminal behavior. The system is in development and testing and has not yet caused direct harm, but the article highlights credible concerns about bias and discrimination inherent in the data and models, which could lead to significant harms such as rights violations and disproportionate targeting of minority groups. Since no actual harm has yet occurred but plausible future harm is credible and well documented, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, and it is clearly AI-related, so it is not Unrelated.

UK Is Going Full Minority Report With 'Murder Prediction' Research

2025-04-09
brudirect.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI algorithm designed to predict potential future killers using extensive police and sensitive personal data, which fits the definition of an AI system. The use of biased and institutionally racist data sources implies that the AI's outputs could lead to discriminatory outcomes, violating human rights and fundamental protections. The involvement of the AI system in profiling individuals based on flawed data and the potential for reinforcing systemic discrimination constitutes harm under the framework. Although the project is currently research-focused, the direct use of AI in this context and the associated risks of harm to individuals' rights and freedoms justify classification as an AI Incident rather than a hazard or complementary information.

UK is developing AI to catch murderers before they strike

2025-04-09
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event describes the development and use of an AI system for predictive policing to assess homicide risk. The involvement of AI is explicit in the use of data science techniques for risk assessment. The project has not yet resulted in reported harm but poses a credible risk of violating human rights and causing bias against vulnerable groups. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving rights violations and discrimination.

UK creating 'murder prediction' tool to identify people most likely to kill

2025-04-08
the Guardian
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as algorithms analyze personal data to predict homicide risk. The system's use is in development and research but directly relates to assessing individuals' likelihood to commit serious crimes, which implicates potential violations of rights and discriminatory harm. While no harm has yet occurred, the nature of the system and the sensitive data used create a credible risk of significant harm, including rights violations and social discrimination. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the system's deployment and use, especially given the concerns about bias and intrusive data use. It is not an AI Incident because the system is not yet operational or causing realized harm, nor is it merely complementary information or unrelated.

UK developing algorithmic tool to predict potential killers

2025-04-09
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an algorithmic tool for predicting potential killers, developed by the UK Ministry of Justice. Although the tool is currently in research phase and not causing direct harm yet, the nature of predictive policing AI systems is known to risk significant harms such as reinforcing biases, wrongful profiling, and violations of fundamental rights. The article highlights these concerns and calls for halting development, indicating credible risks. Therefore, the event is best classified as an AI Hazard, reflecting plausible future harm from the AI system's use.

Statewatch | UK: Ministry of Justice secretly developing 'murder prediction' system

2025-04-08
statewatch.org
Why's our monitor labelling this an incident or hazard?
The Ministry of Justice's 'murder prediction' system is an AI system designed to predict individuals likely to commit murder using extensive personal and sensitive data. The system's development and intended use involve profiling and risk assessment that have been shown to be racially biased and discriminatory, which constitutes a violation of human rights and fundamental rights protections. The system's outputs are intended to influence criminal justice decisions, which can lead to harm to individuals and communities, especially marginalized groups. The project is active and involves actual data use and model development, not just a theoretical risk. Hence, the event meets the criteria for an AI Incident because the AI system's use has directly led or is leading to violations of rights and harms to communities.

UK is going full minority report with 'murder prediction' research

2025-04-08
Engadget
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as an algorithm is used to predict potential future killers based on police and sensitive personal data. The use of such data and the nature of predictive policing algorithms can plausibly lead to violations of human rights and discrimination, constituting potential harm. Since the project is still in research phase and no actual harm or misuse has been reported, this event represents a plausible risk of harm rather than realized harm. Therefore, it qualifies as an AI Hazard rather than an AI Incident.

'Chilling' tool aims to predict who will kill by using personal data

2025-04-08
thetimes.com
Why's our monitor labelling this an incident or hazard?
The system described is an AI system using personal data and algorithms to predict future violent crimes. Although it is still in the research phase and no direct harm has been reported, the potential for harm is credible and significant, including violations of rights and possible wrongful targeting of individuals. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving harm to individuals or communities.

'Chilling' tool aims to predict who will kill by using personal data

2025-04-09
democraticunderground.com
Why's our monitor labelling this an incident or hazard?
The project involves the use of AI algorithms (an AI system) to predict future criminal behavior based on personal data, including sensitive information. While the system is still in development and no direct harm has been reported, the nature of the AI system's use in profiling and risk assessment in criminal justice settings could plausibly lead to violations of human rights and discrimination, which are harms under the AI Incident definition. Since harm is not yet realized but plausible, this qualifies as an AI Hazard rather than an AI Incident.

The UK creates a murder prediction tool to identify those who are most likely to kill

2025-04-09
ExBulletin
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system designed to predict murder risk using personal data, including sensitive health and social information. The system's development and use involve analyzing data to generate risk assessments that influence decisions about individuals, which fits the definition of an AI system. The harms include violations of human rights, particularly privacy and potential discrimination against minorities and low-income groups, as highlighted by activists and researchers. These harms are treated as direct and realized rather than merely potential, since sensitive personal data is already in use and risks reinforcing structural discrimination. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Government secretly using data to predict murders in real-life Minority Report project

2025-04-09
Daily Star
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the project uses data science and algorithmic risk assessment techniques to predict future violent offenders. The use of sensitive personal data and criminal records for predictive purposes directly implicates violations of human rights and potential discrimination, which are harms under the AI Incident definition. The project is already underway with data being used, not merely a future risk, and critics highlight the systemic bias and intrusion caused by the AI system's use. Hence, this is an AI Incident rather than a hazard or complementary information.

UK Government Is Secretly Building 'Murder Prediction' AI System

2025-04-09
Decrypt
Why's our monitor labelling this an incident or hazard?
The event involves the development and potential use of an AI system designed to predict future crimes, which directly implicates human rights and ethical concerns. While no direct harm is reported as having occurred, the system's use of sensitive data and predictive profiling could plausibly lead to significant harms such as wrongful targeting, discrimination, and violation of privacy and rights. Therefore, this qualifies as an AI Hazard because the AI system's development and potential deployment could plausibly lead to an AI Incident involving violations of rights and harm to communities.

UK Goes Full 'Minority Report' With 'Murder Prediction' System

2025-04-09
WebProNews
Why's our monitor labelling this an incident or hazard?
The article describes an AI system under active development by the UK Ministry of Justice that uses large datasets from police and other sources to predict homicide risk. This system involves AI model development and use of sensitive personal data, including mental health and addiction information. The system's purpose is to profile individuals as potential criminals before any crime has been committed, which constitutes a violation of human rights and risks reinforcing structural discrimination and bias. These harms are directly linked to the AI system's use and development, fulfilling the criteria for an AI Incident. The article also highlights the system's potential to cause significant societal harm through biased profiling and privacy invasion, which are realized or imminent harms rather than mere potential risks.

UK Police Going Full Minority Report, Building 'Murder Prediction' Tool

2025-04-09
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event describes the development and use of an AI system designed to predict individuals likely to commit serious violent crimes, including murder, based on personal and sensitive data. The system's use of biased historical data and its potential to disproportionately target marginalized and low-income populations indicate a direct link to violations of human rights and harm to communities. The article references prior instances where similar AI tools have led to biased and inaccurate outcomes, reinforcing the likelihood of harm. Given that the AI system's use is already underway and involves real data and decision-making processes affecting individuals, this constitutes an AI Incident rather than a mere hazard or complementary information.

UK Govt Creating 'Murder Prediction' Tool To Identify Those Most Likely To Kill

2025-04-09
The People's Voice
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (predictive algorithms analyzing personal and criminal data) developed and used by the government. The system's purpose is to predict future violent crimes, which directly implicates potential violations of human rights and risks of biased, discriminatory harm to individuals and communities. While the project is currently in a research phase, the described use of sensitive data and the intended application to predict and potentially act upon individuals' future behavior constitutes a credible risk of significant harm. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving rights violations and harm to communities if deployed operationally.

Government using tech to predict possible murderers: "Chilling"

2025-04-09
Newsweek
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the government uses an AI-powered predictive tool analyzing complex data to assess homicide risk. Although the tool is currently in research phase, its use in profiling individuals based on sensitive demographic and criminal data could directly or indirectly lead to violations of human rights, discrimination, and harm to communities. The article highlights credible concerns about bias and privacy violations, indicating plausible future harm if the system is deployed nationwide. Therefore, this event qualifies as an AI Hazard due to the credible risk of significant harm stemming from the AI system's use.

Minority Report: UK forges 'murder prediction' project to stop killers

2025-04-09
euronews
Why's our monitor labelling this an incident or hazard?
The project involves the use of AI or algorithmic data analysis to predict the likelihood of individuals committing homicide, which fits the definition of an AI system's use. However, the project is currently in a research phase with no direct operational or policy changes implemented, so no realized harm has occurred yet. The potential for future harm, such as violations of rights, discrimination, or harm to communities, is plausible given the nature of the profiling and sensitive data used. Therefore, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Labour working on 'dystopian murder prediction tool' to identify killers BEFORE they commit crimes - 'Deeply wrong!'

2025-04-09
GB News
Why's our monitor labelling this an incident or hazard?
The project involves the use of AI systems for predictive risk assessment of homicide, which is explicitly mentioned. The use of sensitive data and the goal to identify potential future offenders before any crime is committed indicates a plausible risk of harm, including violations of human rights such as privacy and presumption of innocence. Since no actual harm or incident is reported yet, but the system's use could plausibly lead to significant harms, this qualifies as an AI Hazard rather than an AI Incident. The concerns raised by campaigners and the nature of the project support this classification.

UK's MoJ testing algorithms to uncover future killers

2025-04-09
theregister.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (algorithms) developed to predict future criminal behavior, specifically homicide risk, based on large-scale data including sensitive personal information. The AI system's development and intended use directly implicate potential violations of human rights, including privacy and discrimination against racialized and vulnerable groups, which are harms under the framework. Although the system is currently in research and not yet operational, the documents mention plans for future operationalization, indicating a credible risk of harm. Given the direct involvement of AI in profiling individuals and the potential for significant rights violations and harm, this qualifies as an AI Incident rather than merely a hazard or complementary information.

Government 'murder prediction tool' predicts who will become a killer

2025-04-09
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The project involves an AI system analyzing personal data to predict future violent crimes, which fits the definition of an AI system. The use is in development and research phase, with no current operational impact, so no realized harm has occurred yet. However, the nature of the system—predicting crimes before they happen and potentially profiling individuals—poses a plausible risk of significant harm including violations of human rights and discriminatory profiling. Therefore, it is an AI Hazard rather than an AI Incident. The article also includes societal and civil liberty concerns, but the primary focus is on the potential for harm rather than a realized incident.

Statewatch | UK creating 'murder prediction' tool to identify people most likely to kill

2025-04-09
statewatch.org
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (an algorithmic predictive tool) designed to identify individuals at risk of committing serious violent crimes. Although no direct harm has been reported yet, the system's purpose and methodology pose credible risks of human rights violations, such as discrimination, wrongful profiling, and privacy breaches. Therefore, this qualifies as an AI Hazard because it could plausibly lead to an AI Incident involving violations of rights and harm to individuals if implemented without adequate safeguards.

NGO warns against British murder prediction system: "chilling and dystopian"

2025-04-09
THE DECODER
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the Ministry of Justice is developing a predictive system using AI techniques to assess homicide risk. The system uses sensitive data and has documented biases, which could plausibly lead to violations of human rights and harm to communities if deployed operationally. Since the system is still in the research phase and no direct harm has been reported, this event represents a plausible future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

UK Government's 'Murder Prediction' Program Faces Backlash Over Privacy Concerns, Bias Allegations

2025-04-09
Tech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of machine learning algorithms analyzing sensitive personal data to predict violent crime, which qualifies as an AI system. The program is still in the research phase, so no direct harm has occurred yet, but the concerns raised about privacy violations, bias, and discrimination indicate a credible risk of harm. The potential harms include violations of privacy rights and exacerbation of social inequalities, which fall under violations of human rights and harm to communities. Since the harm is plausible but not yet realized, the event fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the potential risks and ethical concerns rather than reporting an actual incident of harm.

AI murder predictor could catch killers before they strike

2025-04-09
The Telegraph
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as algorithms analyze complex data to predict criminal behavior. The use of sensitive data and the nature of the predictions raise significant concerns about potential violations of rights and discrimination. However, since the project is still in research and no direct or indirect harm has been reported or occurred, this event represents a plausible risk of harm in the future rather than an actual incident. Therefore, it qualifies as an AI Hazard due to the credible potential for harm such as violations of rights and discriminatory outcomes if deployed operationally without safeguards.

UK government developing 'murder prediction' program amid backlash

2025-04-09
Anadolu Agency
Why's our monitor labelling this an incident or hazard?
The event describes the development and research phase of an AI system designed to predict violent crime risk using personal data and algorithms. Although no harm has yet occurred, the nature of the system and its intended use could plausibly lead to significant harms, including violations of rights and systemic bias. The AI system's involvement is explicit, and the potential for future harm is credible and significant. Therefore, this is best classified as an AI Hazard rather than an Incident or Complementary Information.

UK creating 'murder prediction' tool to identify people most likely to kill

2025-04-09
aol.co.uk
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as algorithms analyze personal data to predict future violent offenders. The system is in development and research phase, so no direct harm has yet occurred, but the use of sensitive data and the nature of predictive policing pose credible risks of bias, discrimination, and violation of rights. The event describes plausible future harm from the AI system's use, fitting the definition of an AI Hazard rather than an Incident. The concerns about bias and intrusive data use support this classification. The event is not merely complementary information or unrelated, as it centers on the AI system's potential for harm.

Can governments stop killings before they happen? UK explores creating 'murder prediction' programme

2025-04-09
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system or AI-like predictive analytics to assess homicide risk based on personal and criminal history data. Although no harm has yet occurred and the project is currently research-only, the nature of the system and its intended use could plausibly lead to significant harms including violations of rights and discrimination if deployed. Therefore, this qualifies as an AI Hazard because it could plausibly lead to an AI Incident involving violations of human rights and harm to individuals or communities. There is no indication that harm has already occurred, so it is not an AI Incident. It is more than just complementary information because it describes a concrete AI-related project with potential for harm.

How UK's 'murder prediction' tool could predict who might kill in future

2025-04-09
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using algorithms to predict future violent crimes based on personal and criminal records. The system is in development and pilot stages, with no reported incidents of harm yet. However, the use of sensitive data and the potential for biased profiling and discrimination constitute a credible risk of harm to individuals' rights and communities. This aligns with the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of rights and other harms in the future. Since no actual harm has occurred yet, it is not an AI Incident. The article is not merely complementary information because it focuses on the potential risks and implications of the AI system rather than updates or responses to past incidents.

UK Gov Using Personal Data to Develop 'Murder Prediction' Tech

2025-04-10
DIGIT
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the government is developing predictive algorithms analyzing large datasets to forecast homicide risk. The use of sensitive personal data and the aim to predict criminal behavior directly relate to potential violations of human rights and discrimination, which are recognized harms under the framework. Although the system is currently at a research level and not yet operationalized, the credible risk of harm through biased profiling and infringement of rights is significant and plausible. Hence, this event fits the definition of an AI Hazard rather than an AI Incident, as no realized harm is reported yet but plausible future harm is evident.

UK government developing homicide prediction algorithm to identify potential violent offenders

2025-04-10
TechSpot
Why's our monitor labelling this an incident or hazard?
The article describes an AI system under development that uses personal and sensitive data to predict individuals likely to commit serious violent offenses. While no harm has yet occurred since it is a research project, the system's design and data usage raise credible concerns about reinforcing institutional biases and structural discrimination, which are recognized harms under the framework. The involvement of AI in predictive policing and the potential for misuse or malfunction that could lead to violations of rights and harm to communities justifies classification as an AI Hazard rather than an Incident. The article does not report any realized harm or incident but highlights plausible future risks.

UK Is Testing a "Murder Prediction" tool -- and It's Seriously Alarming

2025-04-10
ZME Science
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a predictive policing tool aiming to forecast future murderers based on personal and sensitive data. The system's development and intended use raise credible concerns about potential violations of human rights, discrimination, and harm to communities. Although the project is currently research-only and no actual harm has been reported, the plausible future harms are significant and well-articulated by experts and watchdogs. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. The article does not describe realized harm but highlights credible risks and ethical concerns, which aligns with the AI Hazard classification.

The UK Government Is Working On An Algorithm-Powered "Murder Prediction" Tool

2025-04-10
Stuff South Africa
Why's our monitor labelling this an incident or hazard?
The article describes the development and intended use of an AI system designed to predict individuals likely to commit murder before any crime occurs. This involves the use of personal data from a large population, including innocent individuals, to generate risk assessments. While no direct harm has been reported yet, the nature of the system and its potential application could plausibly lead to significant harms such as violations of fundamental rights, wrongful accusations, and social harm. The AI system's involvement is in its development and intended use, with credible risks of future harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Government's 'murder prediction' tool is based on 'racist' data

2025-04-10
computing.co.uk
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the project uses algorithms and data analytics to predict crime risk. The system is in development and research phase, not yet deployed operationally, so no direct harm has occurred yet. However, the described use of biased and sensitive data, and the known risks of predictive policing AI systems causing discriminatory outcomes, means this AI system could plausibly lead to violations of rights and harm to communities. Therefore, this qualifies as an AI Hazard rather than an AI Incident, since harm is potential but not yet realized. The article focuses on the risks and criticisms of the system's development and data use, fitting the definition of an AI Hazard.

UK Government's Secretive "Homicide Prediction" AI Project Sparks Minority Report Comparisons and Surveillance Concerns

2025-04-11
Reclaim The Net
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the project uses algorithm-based predictive analysis to identify individuals at risk of homicide. The use of sensitive personal data and the goal to predict future criminal behavior indicate the AI system's development and intended use. Although the project is currently in research phase, the mention of future operationalization implies plausible future harm. The potential harms include violations of human rights, privacy breaches, and harm to communities through wrongful profiling and surveillance. Since no actual harm has been reported yet, but the risk is credible and significant, this event is best classified as an AI Hazard.

Junk Science and Bad Policing: The Homicide Prediction Project - Global Research

2025-04-11
Global Research
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system used by law enforcement to predict homicide risk based on personal and police data, meeting the OECD definition of an AI system. The event spans the system's development and use phases, with the AI playing a pivotal role in profiling individuals. The harms described include violations of human rights (profiling, bias, privacy breaches) and harm to communities (stigmatization, potential discriminatory policing). These harms are realized or ongoing, not merely potential, since the system is operational and the data is already being used. Thus, this qualifies as an AI Incident rather than a hazard or complementary information. The article's critical tone and detailed description of the system's data use and societal impact support this classification.

UK Developing 'Murder Prediction' Tool, Critics Flag Privacy Concerns

2025-04-11
ndtv.com
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system designed to predict potential perpetrators of serious crimes. The project is still in research and not yet in operational use, so no direct harm has occurred. However, the concerns raised about privacy violations, bias, and discrimination indicate plausible future harms, including violations of human rights and structural discrimination. Therefore, this event qualifies as an AI Hazard because the AI system's use could plausibly lead to significant harms if deployed.

UK: Ministry of Justice secretly developing 'murder prediction' system

2025-04-12
blog.quintarelli.it
Why's our monitor labelling this an incident or hazard?
The described AI system is explicitly used to predict who might commit murder, involving profiling based on police and government data. This directly implicates violations of human rights and fundamental legal principles, fulfilling the criteria for harm under the framework. The secretive nature and the use of personal data without transparency or consent further exacerbate the harm. The AI system's use in this context is not hypothetical but ongoing, making it an AI Incident rather than a hazard or complementary information.

UK Government's Secretive "Homicide Prediction" AI Project Sparks Minority Report Comparisons and Surveillance Concerns

2025-04-12
sgtreport.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, performing predictive analytics on sensitive personal and health data to forecast homicide risk. The project is currently in research phase but is planned for operationalization, implying potential future use that could lead to violations of human rights and privacy. No actual harm has been reported yet, but the plausible future harm includes wrongful profiling, discrimination, and surveillance abuses. Therefore, this event is best classified as an AI Hazard rather than an Incident, as the harm is potential and not yet realized.

Junk Science And Bad Policing: The Homicide Prediction Project - OpEd

2025-04-11
Eurasia Review
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (predictive policing AI) in a government program that profiles individuals for potential criminality, which directly relates to violations of human rights and risks of biased discrimination against racialized and low-income communities. The article indicates that the AI system's use has already led to profiling and raises concerns about privacy and bias, which constitute harms under the framework. Even if the program is currently for research only, the direct involvement of AI in profiling individuals and the associated harms qualify this as an AI Incident rather than a mere hazard or complementary information. The harms include violations of rights and harm to communities through biased profiling and privacy threats.

"Crime prediction" programme. The project to identify future killers sends shivers down the spine: "Frightening"

2025-04-08
Știrile ProTV
Why's our monitor labelling this an incident or hazard?
The project involves the development and use of an AI system (predictive algorithms analyzing personal and sensitive data) to assess the risk of future violent crimes. While no actual harm has been reported yet, the system's design and data usage could plausibly lead to violations of human rights, discrimination against minorities and vulnerable groups, and other significant harms if deployed operationally. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving rights violations and harm to communities. The article does not describe realized harm but highlights credible risks and societal concerns about the system's impact.

"Crime prediction" algorithm tested in the United Kingdom. The project is criticised as dystopian and frightening

2025-04-08
Libertatea
Why's our monitor labelling this an incident or hazard?
The project involves an AI system designed to predict crime risk using sensitive and potentially biased data. Although it is still in research and no direct harm has been reported, the use of such predictive algorithms in criminal justice is widely recognized as potentially leading to discriminatory outcomes and violations of rights, which are harms under the AI Incident definition. Since harm is not yet realized but plausible, this qualifies as an AI Hazard. The article focuses on the potential risks and societal concerns rather than reporting an actual incident of harm caused by the AI system.

The British government is developing a "crime prediction" programme using offenders' data

2025-04-09
comisarul.ro
Why's our monitor labelling this an incident or hazard?
The described program involves an AI system designed to predict crimes using personal and sensitive data, which fits the definition of an AI system. The use of this system directly implicates potential violations of human rights and discrimination against ethnic minorities and the poor, which are harms under the framework. The project is already in development and data use stages, with documented concerns about misuse and bias, indicating realized or ongoing harm rather than just potential. Therefore, this qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's use in predictive policing and the associated rights violations and community harm.

United Kingdom. The government is developing a controversial system designed to predict, using algorithms that analyse criminal records and sensitive personal data, which people are more likely to commit violent crimes or even murders. The project is criticised for the risk of discrimination. - Biziday

2025-04-09
Biziday
Why's our monitor labelling this an incident or hazard?
The described system is an AI system as it uses algorithms to analyze extensive personal data to generate predictions about future criminal behavior. While the project is currently in research and not yet causing realized harm, the use of sensitive data and the risk of biased or discriminatory predictions create a credible potential for harm, including violations of human rights and discrimination against vulnerable groups. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident if deployed without adequate safeguards.

The country developing a "crime prediction" system. They want to anticipate who could become a murderer

2025-04-09
DCNews
Why's our monitor labelling this an incident or hazard?
The article details a government project using AI to predict potential criminals based on personal data, including sensitive information. Although the system is still in research and no harm has yet occurred, the nature of the AI system and its intended use could plausibly lead to significant harms such as violations of rights, discrimination, and social harm. Therefore, this qualifies as an AI Hazard rather than an Incident, as the harm is potential and not yet realized.

United Kingdom: Controversial system designed to predict which people are more likely to commit offences or even murders

2025-04-10
Jurnal.md
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the project uses algorithms analyzing extensive personal and sensitive data to predict crime risk. The system is under development and not yet operational, so no direct harm has occurred yet. However, credible concerns about bias and privacy violations indicate plausible future harm, including violations of fundamental rights and harm to communities. Since harm is not yet realized but plausible, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

The United Kingdom is developing an algorithm that "predicts" whether someone will become a murderer. Critics call the project "frightening and dystopian" - TechRider.ro

2025-04-11
TechRider.ro
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: an algorithm designed to predict criminality based on personal data, including sensitive health and victimization information. The system is in development and currently used only for research, but it makes predictions about individuals' future behavior, which is inherently risky and controversial. The potential harms include violations of human rights (privacy, discrimination, the presumption of innocence), and the article highlights expert criticism and legal prohibitions in the EU, underscoring the plausible risk of harm. Since no actual harm or incident is reported yet, but the system's use could plausibly lead to significant harms, this event is best classified as an AI Hazard.

The United Kingdom is developing a "crime forecasting" tool to identify the people most likely to kill - Aktual24

2025-04-08
Aktual24
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the project uses algorithms to predict crime risk based on personal and criminal data. The event stems from the development and intended use of this AI system. Although the project is currently research-only and no direct harm has been reported, the article highlights credible concerns about potential future harms such as discrimination, privacy violations, and human rights breaches. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to significant harms, but these harms have not yet materialized according to the article.

The United Kingdom is creating a "crime prediction" tool to identify the people most likely to kill

2025-04-09
G4Media.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of algorithms analyzing personal data to predict crime risk, which qualifies as an AI system. The project is in development and research stages, so no direct harm has yet occurred, but credible concerns about bias and discrimination indicate plausible future harm. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of rights and harm to communities. Since no realized harm is reported, it is not an AI Incident. The focus is on potential risks rather than responses or updates, so it is not Complementary Information. It is clearly related to AI, so it is not Unrelated.

The United Kingdom is developing an algorithm to identify possible murderers. "Frightening and dystopian", activists warn

2025-04-08
digi24.ro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an algorithm analyzing personal and sensitive data to predict individuals' likelihood of committing violent crimes. The system's use of biased data and the potential for discriminatory profiling constitute violations of human rights and harm to communities. Although the project is currently in research, the article indicates that data processing and risk assessments are actively being conducted, implying realized use rather than mere potential. The harms include privacy violations, discriminatory bias, and stigmatization of individuals, especially from marginalized groups. These harms fall under the AI Incident definition, as the AI system's use has directly or indirectly led to significant harms related to rights violations and community harm. Hence, the classification is AI Incident.

A programme to predict who will commit violent crimes is being developed in Europe - El Heraldo de México

2025-04-09
heraldodemexico.com.mx
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as algorithms analyze large datasets to predict violent crime risk. The project is in development and research phase, so no realized harm is reported yet, but credible concerns about bias, discrimination, and privacy violations indicate plausible future harm. The use of sensitive personal data and predictive policing algorithms can lead to violations of human rights and harm to communities if implemented. Hence, this is an AI Hazard rather than an Incident or Complementary Information.

'Homicide Prediction Project': how the United Kingdom's tool for identifying possible murderers works

2025-04-09
La Razón
Why's our monitor labelling this an incident or hazard?
The described AI system is explicitly mentioned as using algorithms to analyze large datasets to predict homicide risk, which qualifies as an AI system. The use of this system could plausibly lead to violations of human rights and discrimination (harm category c) due to biased data and structural racism concerns. Since the project is still in development and research, with no direct harm reported yet, but credible risks of harm exist, this event fits the definition of an AI Hazard rather than an AI Incident. The concerns about bias and discrimination are well-founded and indicate plausible future harm.

As if it were a Steven Spielberg film, this is the technology that predicts whether someone is a murderer

2025-04-09
infobae
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using algorithms and personal data to predict future criminal behavior, which fits the definition of an AI system. The system is currently under development and use by government agencies, indicating involvement in the AI system's use. While the article does not report a realized harm incident, it highlights credible concerns about potential harms including discrimination, violation of rights, and social stigmatization, which could plausibly arise from the system's deployment. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to significant harms, but no direct or indirect harm has been reported as having occurred yet.

The United Kingdom imitates science fiction by creating a system to predict murders

2025-04-11
DiarioBitcoin
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: an algorithmic predictive policing tool analyzing personal data to forecast murders. The system is in development and testing, so no direct harm has yet occurred, but the article details credible concerns about bias and discrimination against minorities, which are violations of human rights. The potential for such harm is plausible and significant if the system is deployed operationally. Hence, it fits the definition of an AI Hazard rather than an AI Incident. The article does not describe realized harm but focuses on the potential risks and societal implications, excluding Complementary Information or Unrelated classifications.

The United Kingdom is developing a "murder prediction" tool to identify the people most likely to kill

2025-04-09
infobae
Why's our monitor labelling this an incident or hazard?
The described AI system is explicitly mentioned as using advanced algorithms to analyze data for predicting violent crime risk, which qualifies it as an AI system. The project is currently in a research phase, so no direct harm has yet occurred, but the potential for harm is clearly articulated, especially regarding bias and discrimination against ethnic minorities and disadvantaged groups. This aligns with the definition of an AI Hazard, as the system's use could plausibly lead to violations of rights and harm to communities. Since no actual harm has been reported yet, and the main focus is on the potential risks and criticisms, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

It sounds like science fiction, but it isn't: the United Kingdom is developing a tool to "predict crimes" using AI

2025-04-09
El Español
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using algorithms and big data to predict crimes, specifically homicides, by analyzing sensitive personal data. The system's use of biased data sources and the potential for false positives that could lead to wrongful suspicion or detention constitutes a plausible risk of harm to individuals' rights and communities. Since the system is still in development and used only for research, no direct harm has yet occurred, but the credible potential for harm aligns with the definition of an AI Hazard. Therefore, this event is best classified as an AI Hazard rather than an AI Incident.

A "murder prediction" tool is being developed to identify the people most likely to kill

2025-04-09
Antena 3 Noticias
Why's our monitor labelling this an incident or hazard?
The project involves an AI system analyzing sensitive personal and criminal data to predict violent crime risk, which fits the definition of an AI system. Although no direct harm has been reported, the potential for biased predictions against minorities and privacy infringements is significant and credible, indicating plausible future harm. Since the system is still in development and research, and no harm has yet materialized, it is best classified as an AI Hazard rather than an AI Incident. The concerns about bias and data use align with possible violations of rights and harm to communities, fulfilling the criteria for plausible future harm.

Is 'Minority Report' becoming reality?... The United Kingdom develops a technology to predict murders

2025-04-09
vanguardia.com.mx
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using algorithms and data to predict future crimes, fitting the definition of an AI system. The system is currently in development and research phase, so no direct harm has yet materialized. However, the article highlights credible concerns about potential discrimination, civil rights violations, and wrongful preemptive actions, which are plausible harms under the framework. Since the AI system's use could plausibly lead to an AI Incident but has not yet done so, the correct classification is AI Hazard.

The United Kingdom is developing an algorithm to cross-reference your personal data and predict whether you will murder someone

2025-04-09
El Español
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the development and use of an AI algorithm that processes personal and sensitive data to predict violent crime risk. While no direct harm has yet occurred since the system is still in its research phase, plausible future harms include discriminatory surveillance, violations of privacy and human rights, and biased outcomes affecting marginalized communities. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harms if deployed operationally. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the potential risks and ethical concerns of this AI system.

Controversy in the United Kingdom over a Minority Report-style crime prediction project

2025-04-09
Clarín
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system under development that uses predictive algorithms to profile individuals as potential future criminals, involving sensitive data such as health and vulnerability markers. The system's use could plausibly lead to violations of human rights, discrimination, and harm to communities, as highlighted by experts and past evidence of bias in similar systems. Since no actual harm has been reported yet but the risks are credible and significant, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the potential harms and ethical concerns of the AI system's deployment, not on responses or updates to past incidents.

Technology is being developed that predicts whether someone may have 'macabre' intentions

2025-04-09
Semana.com Últimas Noticias de Colombia y el Mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (algorithms analyzing personal data to predict crime risk) being developed and used by government agencies. While no direct harm is reported, the system's design and application involve sensitive personal data and predictive profiling, which plausibly could lead to violations of rights and other harms. The event does not describe realized harm but highlights a credible risk of future harm, fitting the definition of an AI Hazard rather than an Incident. It is not merely complementary information because the main focus is on the system's development and potential impact, not on responses or updates to past incidents.

The controversial United Kingdom project that seeks to identify possible murderers before they commit a crime

2025-04-10
LaRepublica.pe
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using advanced algorithms to predict future crimes based on personal and criminal data. The system's use has already led to concerns about privacy violations, discrimination, and potential criminalization of individuals based on algorithmic predictions, which are harms to human rights and communities. These harms are directly linked to the AI system's deployment and use, fulfilling the criteria for an AI Incident. Although the article discusses potential benefits, the presence of realized harms and rights violations takes precedence over potential future harms, confirming the classification as an AI Incident rather than a hazard or complementary information.

The United Kingdom is developing a programme to "predict murders" using the personal data of hundreds of thousands of people, including victims

2025-04-10
LA GACETA
Why's our monitor labelling this an incident or hazard?
The article describes an AI system explicitly designed to predict murders by analyzing extensive personal data, including sensitive health and demographic information. The system is under development and involves multiple government agencies sharing large datasets. Although the Ministry of Justice claims it is for research purposes, the nature of the system and the data involved pose credible risks of violations of human rights, privacy breaches, and social harms if deployed. Since no actual harm is reported yet, but the plausible future harm is significant, this fits the definition of an AI Hazard. The involvement of AI in predictive analytics and the potential for misuse or malfunction leading to harm justifies this classification.

Controversy in the United Kingdom: they want to use AI to create a "Minority Report" that gets ahead of crime

2025-04-11
LA NACION
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the Ministry of Justice is developing predictive models analyzing large datasets to assess homicide risk. The use of sensitive personal data and predictive analytics for pre-crime risk assessment directly relates to potential violations of human rights and ethical harms. While no harm has yet occurred, the system's intended use to predict and possibly act on future crimes could plausibly lead to significant harms such as wrongful targeting, discrimination, and erosion of civil liberties. Therefore, this event qualifies as an AI Hazard due to the credible risk of future harm from the AI system's deployment.

Britain is building a murder prediction tool: identifying people with a high likelihood of committing crime

2025-04-09
دیجیاتو
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (predictive policing algorithm) that could plausibly lead to harms such as violations of human rights (discrimination, privacy breaches) and harm to communities if deployed. Since the tool is still in the research phase and no direct harm has occurred yet, this qualifies as an AI Hazard rather than an AI Incident. The concerns about bias and potential misuse support the classification as a plausible future harm.

Jomhour - England is building a murder prediction tool

2025-04-09
خبرگزاری جمهور
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the project is an algorithmic predictive policing tool using police data to forecast potential future crimes. The system's use is in development and research, not yet deployed operationally, so no direct harm has occurred. However, the nature of the system and its use of sensitive data could plausibly lead to violations of rights and harm to communities if deployed, due to known issues with bias and discrimination in predictive policing AI. Therefore, this event qualifies as an AI Hazard, reflecting a credible risk of future harm stemming from the AI system's use.

Developing a tool to predict murder!

2025-04-12
باشگاه خبرنگاران جوان | آخرین اخبار ایران و جهان | YJC
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system being developed to predict murders using sensitive personal data. Although no actual harm has been reported yet, the use of such data and the purpose of predicting criminal behavior could plausibly lead to violations of rights and other harms if deployed. Since harm is not yet realized but plausible, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

A movie premise comes close to reality; building a murder prediction tool

2025-04-09
iranpressnews.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the article discusses an algorithm designed to predict potential murderers using police data. The system is under development and research, so no direct harm has yet occurred. However, the use of sensitive data and the known risks of bias in predictive policing algorithms create a credible risk of future harm, including violations of human rights and discrimination. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Developing a murder prediction tool; Britain wants to identify future murderers with artificial intelligence

2025-04-10
بالاترین
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the Ministry of Justice is developing an algorithm to predict future murders. The system uses sensitive personal data, including mental health and addiction information, which implicates privacy and human rights concerns. The article does not mention any realized harm or incidents resulting from the system's use yet, but the potential for harm is credible and significant, including wrongful accusations, discrimination, or violation of rights. Hence, it is an AI Hazard rather than an AI Incident. The involvement is in the development and intended use of the AI system, with plausible future harm to individuals' rights and freedoms.

ITNA - The British government seeks to identify future murderers with artificial intelligence

2025-04-13
ايتنا - سایت خبری تحلیلی فناوری اطلاعات و ارتباطات
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: an algorithm designed to predict future violent crimes. The use of sensitive personal data and the potential for reinforcing systemic biases indicate a plausible risk of violations of human rights and discrimination. Since the project is still in research and no harm has yet occurred, but the system could plausibly lead to significant harm if deployed, this qualifies as an AI Hazard rather than an Incident. The concerns about bias and discrimination align with potential violations of rights under the framework.

Predicting murder before it happens with artificial intelligence; a controversial plan in Britain

2025-04-09
زومیت
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an algorithm developed to predict future murders using sensitive police data. The system's use is in development and research, with no reported realized harm yet. However, the article highlights credible concerns about structural bias and discrimination inherent in such predictive policing tools, which could lead to violations of rights and harm if deployed. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving human rights violations and harm to individuals. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information because it focuses on the potential risks and controversy of the AI system's use, not just updates or responses. Therefore, the classification is AI Hazard.

An algorithm to guess who will kill tomorrow: the incredible project...

2025-04-11
Futura
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system for predictive policing, which could plausibly lead to violations of human rights and discrimination, constituting harm under the framework. Since the harm is not yet realized but the project is ongoing and the risks are credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The article does not report actual harm occurring yet, but highlights the plausible future harm from the AI system's use.

The United Kingdom as "Minority Report": an algorithm to prevent murders is in development (but it is based on discriminatory data)

2025-04-11
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the article discusses an algorithm designed to predict homicide risk based on data analysis. Although the project is still in research and no harm has yet occurred, the use of discriminatory data and the potential for biased or unjust outcomes could plausibly lead to violations of rights and harm to individuals or communities if implemented. Therefore, this qualifies as an AI Hazard because it could plausibly lead to an AI Incident involving human rights violations and discriminatory harm. It is not an AI Incident yet because no harm has materialized, nor is it merely complementary information since the focus is on the potential risks of the system under development.

United Kingdom: towards a crime prevention programme through... prediction? - RTBF Actus

2025-04-11
RTBF
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (data analysis and predictive algorithms) used by government authorities to assess risk of violent crime. While the article does not report actual harm occurring, the system's development and intended use could plausibly lead to harms including violations of fundamental rights, discrimination, and privacy breaches. Therefore, it fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident involving human rights violations and harm to individuals or communities.

"Minority Report" in real life? The British government is developing a "murder prediction" tool

2025-04-11
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and use of an AI algorithm for predicting murders, involving personal data of various individuals, including those never convicted. The system's predictive use in criminal justice directly implicates potential violations of human rights and discrimination, which are recognized harms. The involvement of AI in profiling and risk assessment in this sensitive domain, combined with expert criticism highlighting structural discrimination, indicates realized or imminent harm. Hence, this is an AI Incident rather than a mere hazard or complementary information.

A "murder prediction" tool: the British government is developing a system to identify future killers

2025-04-09
leparisien.fr
Why's our monitor labelling this an incident or hazard?
The described system is an AI system as it uses data-driven predictive analytics to identify individuals at risk of committing violent crimes. The project is currently in research phase, so no direct harm has been reported yet. However, the use of personal data and risk prediction in law enforcement contexts raises credible concerns about potential future harms such as violations of human rights, privacy breaches, or discriminatory outcomes. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if deployed without adequate safeguards.

An AI to predict murders: what is this terrifying technology "secretly" developed by the British government?

2025-04-11
CNEWS
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly described as being developed to predict murders using detailed personal data, including sensitive health and demographic information. The system's use could plausibly lead to violations of human rights and discrimination within the justice system, which are harms under the AI Incident definition. However, since the system is reportedly still in research and not yet causing realized harm, the event is best classified as an AI Hazard. The article highlights credible concerns about intrusive data use and potential discriminatory outcomes, indicating plausible future harm if the AI is deployed.

"Minority Report": the British government accused of developing a tool to predict murders

2025-04-11
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (an algorithm predicting potential murderers) by the UK government. The system uses personal and sensitive data, which implicates privacy and human rights concerns. While no direct harm is reported as having occurred, the nature of the AI system and its intended use plausibly pose risks of significant harm, including violations of rights and potential wrongful targeting of individuals. Therefore, this qualifies as an AI Hazard under the framework, as it could plausibly lead to an AI Incident involving harm to individuals' rights and freedoms.

"Minority Report": the British government accused of developing a tool to predict murders

2025-04-11
France 24
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: an algorithm predicting potential murderers based on personal and health data. The system is under development and use for research, not yet causing direct harm, but the article highlights credible risks of future harms including human rights violations and discriminatory profiling. The use of sensitive data and the potential for biased outcomes affecting minorities and vulnerable populations indicate plausible future harm. Since no actual harm has been reported yet, but the risk is credible and significant, the event is best classified as an AI Hazard rather than an AI Incident.

Are you likely to commit a murder? The British government may soon know

2025-04-10
Geo.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system developed to predict murder risk using personal and sensitive data, including health and demographic information. The system's use in law enforcement and probation contexts directly implicates potential violations of human rights and privacy, which are recognized harms under the AI Incident definition. The involvement of AI in profiling individuals, some without criminal records, and the use of sensitive data such as health markers, indicate a direct or indirect role in causing harm through discrimination or unjust treatment. The project is already operational in research form, implying realized use rather than mere potential. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

The British government accused of developing a secret AI "murder prediction" system

2025-04-10
7sur7.be
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing personal data to predict future crimes, which fits the definition of an AI system. The use of this AI system has directly led to harms including violations of fundamental rights (privacy, presumption of innocence), and likely discriminatory impacts on vulnerable populations, fulfilling criteria for an AI Incident. The article reports the system is already in use with data from hundreds of thousands of individuals, confirming realized harm rather than hypothetical risk. Therefore, this is classified as an AI Incident.

The United Kingdom is testing an artificial intelligence to predict future murderers

2025-04-09
parismatch.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as an algorithmic predictive tool for homicide risk assessment. The event concerns the development and intended use of this AI system. While no direct harm has yet occurred, the system's use of sensitive data and the nature of predictive policing create a credible risk of violations of human rights and discrimination, which are harms under the framework. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving rights violations and harm to communities. It is not an AI Incident yet since harm is not reported as realized, nor is it merely complementary information or unrelated news.

The United Kingdom creates a "murder prediction" tool to identify the people most likely to kill

2025-04-09
informaticien.be
Why's our monitor labelling this an incident or hazard?
The event describes the development and intended use of an AI system for predictive policing that analyzes sensitive personal data to identify individuals at risk of committing violent crimes. While no direct harm is reported yet, the nature of the system and its use of sensitive data could plausibly lead to violations of human rights and other harms. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving rights violations and harm to individuals or communities.

Before the dead man. The British government tests an algorithm to "predict murders"

2025-04-09
HuffPost Italia
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (predictive algorithm analyzing criminal data) in its development and testing phase. Although no direct harm has occurred yet, the potential for significant harm exists, such as biased predictions leading to unfair treatment or rights violations. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident in the future. The article does not report any realized harm or incident, so it is not an AI Incident. It is more than just complementary information because it highlights a credible risk from the AI system's deployment.

Algorithms to predict murders? The British government's controversial project | TF1 INFO

2025-04-09
TF1 INFO
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved: an algorithm analyzing large datasets to predict violent behavior. Although the project is currently only for research and no harm has yet occurred, the use of sensitive data and the potential for reinforcing structural discrimination and privacy violations present credible risks of harm to individuals and communities. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to violations of rights and harm if deployed operationally.

United Kingdom. The government tests an algorithm to "predict murders"

2025-04-09
avvenire.it
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the government is testing an algorithm for crime prediction. Although the project is currently a research test and no direct harm has been reported, the nature of the system and the concerns raised indicate a credible risk of future harm, including systemic discrimination and privacy violations. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving violations of rights and harm to communities if implemented improperly. It is not an AI Incident yet, as no harm has materialized, nor is it merely complementary information or unrelated.

The United Kingdom as Minority Report: it wants to identify potential murderers before they act

2025-04-09
Wired Italia
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (data science techniques for predictive policing) to forecast potential future murders. While no actual harm has been reported yet, the system's operation could plausibly lead to violations of human rights and discriminatory harm to vulnerable groups, fitting the definition of an AI Hazard. The article describes a credible risk of harm stemming from the AI system's use, but no direct or indirect harm has materialized yet, so it is not an AI Incident. Nor is it merely complementary information or unrelated news, as the focus is on the potential risks of the AI system's deployment.

The United Kingdom is developing a murder prediction programme - Next

2025-04-09
Next
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and use of an AI system for homicide risk prediction using personal and sensitive data. While no actual harm has been reported, the nature of the system and its intended use imply a credible risk of harm, including violations of rights and potential stigmatization or wrongful targeting. Since the project is still in research and no harm has materialized, it fits the definition of an AI Hazard rather than an AI Incident. The involvement of AI in data-driven risk prediction and the potential for significant societal harm justify this classification.

The United Kingdom is trialling software to predict who might commit a murder

2025-04-09
la Repubblica
Why's our monitor labelling this an incident or hazard?
The event describes the development and use of an AI system for homicide prediction, involving personal data analysis and risk assessment algorithms. While no direct harm has been reported yet, the nature of the system and its application in criminal justice pose credible risks of harm, including biased outcomes and violations of rights. The project is still in research phase, so harm is potential rather than realized. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Minority Report becomes reality: the AI that identifies future murderers is born

2025-04-09
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing personal and sensitive data to predict future violent crimes, which is a direct use of AI for decision-making impacting individuals' rights and freedoms. The project raises ethical and legal concerns about privacy, potential discrimination, and the risk of wrongful profiling, which are violations of human rights and fundamental freedoms. Even if the project is currently in a research phase, the deployment of such AI systems for predictive policing has already led to harms in other contexts and is widely recognized as a source of significant societal harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in activities causing or likely causing violations of rights and harm to communities.

How to "predict" murders: the British algorithm that sacrifices personal data for security

2025-04-09
Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an algorithm designed to predict future murders by analyzing personal data. Although the project is currently in a research phase and not yet operational, the use of sensitive personal data, including from innocent individuals, and the potential for biased predictions against minorities and the poor, indicate a credible risk of violations of human rights and privacy. No actual harm has been reported yet, but the plausible future harms align with the definition of an AI Hazard. The article does not describe any realized harm or incident, so it cannot be classified as an AI Incident. It is more than just complementary information because it reveals a significant AI-related program with potential for harm.

United Kingdom launches a project to "predict" murders. The NGOs' accusation: "A dystopian programme"

2025-04-08
Open
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the project uses data-driven algorithms to predict potential offenders, fitting the definition of an AI system. The project is still experimental, so no direct harm has occurred yet. However, the use of personal data and predictive policing raises credible risks of human rights violations and biased outcomes, which could plausibly lead to harm in the future. Therefore, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

"This is how we prevent murders": the algorithm to identify potential killers arrives

2025-04-08
ilGiornale.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an algorithm analyzing personal data to predict potential killers, fitting the definition of an AI system. The system's use (development and deployment) directly leads to harms including violations of human rights and discrimination against minorities and vulnerable populations, as highlighted by critics and the nature of the data used. The profiling and predictive policing approach has already raised ethical and legal concerns, indicating realized harm rather than just potential. Hence, this is an AI Incident rather than a hazard or complementary information.

"Minority Report" for real: the United Kingdom wants to prevent homicides using artificial intelligence

2025-04-13
rts.ch
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the article discusses an AI-based predictive policing program. The system is in development and research use, so no direct harm has yet occurred, but the potential for harm is significant and plausible, including biased discrimination against racialized and low-income communities, which constitutes violations of rights and harm to communities. The article highlights these risks and the dystopian nature of such predictive policing. Since harm is not yet realized but plausible, this fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information. It is clearly related to AI systems and their societal impact, so it is not Unrelated.

United Kingdom: the government creates a "murder prediction" tool

2025-04-09
Le Figaro
Why's our monitor labelling this an incident or hazard?
The event describes the development and use of an AI system for predicting murders, which is a clear AI system involvement. While no direct harm is reported yet, the system's use could plausibly lead to violations of human rights and other harms, fitting the definition of an AI Hazard. Since the harm is potential and not yet realized, it is not an AI Incident. The article focuses on the development and potential implications rather than a realized harm or a response to a past incident, so it is not Complementary Information. Therefore, the classification is AI Hazard.

Great Britain, when the crime is predicted by the algorithm

2025-04-12
InsideOver
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI algorithms analyzing personal and sensitive data to predict criminal behavior, which directly implicates violations of human rights and legal protections. The AI system's use in predictive policing and risk assessment can lead to discriminatory outcomes and breaches of privacy and fundamental rights. The article reports on an active program with data already processed and used, not merely a potential risk, thus constituting an AI Incident rather than a hazard or complementary information. The involvement of AI in the development and use phases, combined with the direct link to rights violations, justifies classification as an AI Incident.