UK Government Launches AI-Powered Crime Prediction Initiative


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The UK government has announced plans to develop an AI-driven crime mapping system to predict and prevent crimes such as knife crime and anti-social behavior. The system will integrate data from police, councils, and social services, raising potential future risks of privacy violations and profiling, though no harm has yet occurred. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the development and intended use of an AI system to predict crime locations, involving data integration from police and social services. Although no incident of harm has yet occurred, the concerns raised about unfair profiling and lack of detailed safeguards indicate credible risks of harm to communities and potential rights violations. Since the AI system is still in development and the harms are potential rather than realized, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the planned AI system and its plausible risks, not on responses or updates to past incidents. [AI generated]
AI principles
Privacy & data governance
Fairness
Respect of human rights
Transparency & explainability
Accountability
Robustness & digital security
Democracy & human autonomy

Industries
Government, security, and defence
Digital security
IT infrastructure and hosting

Affected stakeholders
General public

Harm types
Human or fundamental rights
Public interest

Severity
AI hazard

Business function
Compliance and justice
ICT management and information security

AI system task
Forecasting/prediction
Event/anomaly detection


Articles about this incident or hazard


UK government to use AI to predict crime locations by 2030

2025-08-15
Neowin
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and intended use of an AI system to predict crime locations, involving data integration from police and social services. Although no incident of harm has yet occurred, the concerns raised about unfair profiling and lack of detailed safeguards indicate credible risks of harm to communities and potential rights violations. Since the AI system is still in development and the harms are potential rather than realized, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the planned AI system and its plausible risks, not on responses or updates to past incidents.

AI Aids Police in Preemptive Crime Prevention

2025-08-14
Mirage News
Why's our monitor labelling this an incident or hazard?
The article outlines a government challenge to create an AI system for crime prediction and prevention that is not yet operational and does not describe any realized harm or incidents caused by the AI system. The AI system's use is intended to prevent harm rather than cause it. Therefore, this event represents a plausible future risk scenario where AI could lead to harm if misused or if the system causes unintended consequences, but no actual harm has occurred yet. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Government to deploy AI to catch criminals before they strike - UKTN

2025-08-15
UKTN (UK Tech News)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of advanced AI technology to predict and map violent crime occurrences before they happen, which fits the definition of an AI system. The system is currently in development with a prototype expected by 2026, so no direct harm has yet occurred. However, predictive policing AI systems have well-documented risks including potential violations of human rights, privacy infringements, biased or discriminatory outcomes, and harm to communities through over-policing or wrongful suspicion. Given these credible risks and the government's active push to deploy such technology, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving violations of rights or harm to communities in the future. It is not an AI Incident yet, as no harm has materialized, nor is it merely complementary information or unrelated.

AI to help police catch criminals before they strike

2025-08-15
caithness-business.co.uk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of advanced AI to analyze data from multiple sources to predict and prevent crime, and focuses on the potential benefits and planned future deployment of this system. Since the AI system is not yet operational and no harm has been reported, but its use could plausibly lead to harms such as privacy violations, profiling, or other unintended consequences, this event qualifies as an AI Hazard. It is not an AI Incident because no harm has yet materialized, and it is not Complementary Information or Unrelated because the main focus is on the AI system's development and its plausible future impact on crime prevention and public safety.

AI to help police catch criminals before they strike | Department for Science, Innovation & Technology

2025-08-15
WiredGov
Why's our monitor labelling this an incident or hazard?
The article discusses the planned development and future use of an AI system for crime prediction and prevention. While no harm has yet occurred, the system's deployment could plausibly have significant effects on public safety, both by preventing crime and by introducing risks such as profiling or privacy violations. The article does not report any current harm or incident caused by the AI system, nor any malfunction or misuse. Therefore, this event represents a plausible future risk scenario involving AI, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI and its potential societal impact.

AI to Help Police Catch Criminals Before They Strike

2025-08-15
Digit
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being developed and deployed to predict and prevent crime, including knife crime and violence in prisons. Although the AI systems are intended to reduce harm, their use involves processing sensitive data and predictive analytics that could plausibly lead to harms such as privacy violations, wrongful targeting, or other unintended consequences. Since no actual harm is reported yet, but the AI systems' use could plausibly lead to harm, this fits the definition of an AI Hazard rather than an AI Incident. The article also discusses the broader context and government initiatives, which supports the classification as a hazard rather than complementary information or unrelated news.

AI to help police catch criminals before they strike

2025-08-14
GOV.UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of advanced AI to analyze and integrate data from police, councils, and social services to predict and prevent crime. While no harm has yet occurred, the AI system's deployment is intended to influence policing and public safety, which could plausibly lead to either positive outcomes or potential risks (e.g., privacy concerns, misidentification, or misuse). Since the event focuses on the development and planned use of the AI system with potential future impacts but no realized harm or incident, it fits the definition of an AI Hazard. It is not Complementary Information because it is not an update or response to a past incident, nor is it unrelated as it clearly involves AI and its societal implications.

DSIT backs development of AI tool for crime prevention | UKAuthority

2025-08-15
UKAuthority
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and intended use of an AI system for crime prediction and prevention, involving data integration and real-time analysis. While no harm has yet occurred, the deployment of such AI tools in policing could plausibly lead to harms such as violations of rights, discriminatory profiling, or other societal harms if misused or malfunctioning. Since the event focuses on the development and future deployment of the AI system with potential for harm but no realized harm, it fits the definition of an AI Hazard. It is not an AI Incident because no direct or indirect harm has occurred yet, nor is it Complementary Information or Unrelated as it concerns a specific AI system with potential for harm.

UK to use AI to predict theft, knife attacks and violent crimes by 2030

2025-08-16
The Indian Express
Why's our monitor labelling this an incident or hazard?
The article describes the development and planned deployment of an AI system intended to predict crimes before they occur. While the system is not yet operational and no direct harm has been reported, the use of predictive policing AI systems has historically raised concerns about racial bias, privacy violations, and wrongful targeting, which are forms of harm to communities and violations of rights. Given these credible risks and the system's intended use, the event plausibly could lead to an AI Incident in the future. Therefore, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Minority Report: Now with more spreadsheets and guesswork

2025-08-16
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems for predictive policing, which could plausibly lead to harms such as violations of human rights, profiling, bias, and injustices as highlighted by privacy campaigners. Since the system is not yet operational and no harm has been reported, but credible concerns about potential future harms exist, this qualifies as an AI Hazard. The article focuses on the potential risks and societal implications rather than reporting an actual incident of harm.

UK is building crime-fighting AI, claims it can spot crimes before they even happen

2025-08-18
India Today
Why's our monitor labelling this an incident or hazard?
The article describes the development and planned deployment of an AI system designed to predict crimes before they occur, which involves AI system use. Although no actual harm has yet occurred, the system's intended use in predictive policing has a credible risk of leading to harms such as violations of human rights, biased policing, and community harm, as seen in previous similar projects. Since the harm is plausible but not yet realized, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated to AI systems.

UK Developing AI Crime Map to Predict and Prevent Offences Before They Happen

2025-08-18
The Hans India
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system for predictive policing, which can plausibly lead to violations of human rights (e.g., biased policing, discrimination) and harm to communities if the system malfunctions or is misused. Since the system is not yet deployed and no harm has occurred, it qualifies as an AI Hazard rather than an AI Incident. The article also references past failures and ethical concerns, reinforcing the potential for future harm.

UK govt building on live facial recognition, announces predictive policing prototype | Biometric Update

2025-08-18
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: live facial recognition and AI-powered predictive policing maps. The announcement concerns the development and future use of these systems, which could plausibly lead to harms including violations of privacy, human rights, and potential discriminatory policing practices. No actual harm or incident is reported yet, but the planned widespread deployment and the nature of the technology imply credible risks. Therefore, this is best classified as an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI systems are central to the event.