Law enforcement & disaster management

AI is increasingly used by public agencies to identify threats, assess risk and respond to emergencies. In fields like policing, disaster response and border control, AI enables faster analysis, anticipatory action and smarter resource deployment. However, because these applications often involve high-stakes decisions, public institutions must take extra care to ensure that AI is used in ways that are transparent, accountable and rights-respecting.

The current state of play

Governments are applying AI across a range of law enforcement and risk governance tasks:

  • Identifying criminal suspects and missing persons. Facial recognition tools are used to match surveillance images to databases, assisting in identifying suspects or missing people. Systems such as the FBI’s Next Generation Identification provide biometrics-based investigative leads, but raise concerns about privacy and the potential for biased outcomes.
  • Anticipating criminal behaviour and identifying risk factors. Predictive tools analyse behavioural patterns, social networks and travel data to flag potential threats. These systems assist with border screening and counter-terrorism, but must be carefully scrutinised to avoid reinforcing discriminatory patterns.
  • Predicting times and locations at risk of criminal activity. AI models use historic crime data to identify likely hotspots and guide patrol planning. Real-time adjustments, such as Korea’s stalking prevention system, aim to prevent harm before it occurs.
  • Enhancing disaster risk identification and anticipation. AI is being used to forecast wildfires, earthquakes and other natural hazards. It supports better planning and faster mobilisation of emergency resources, as seen in Canada and Brazil.
  • Improving risk assessment and predictive modelling. AI helps simulate scenarios and prioritise mitigation actions. Tools like Singapore’s Virtual Singapore allow cities to test and refine disaster response strategies based on complex modelling.
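
The hotspot approach in the third bullet can be illustrated with a toy sketch. This is an assumed, simplified scheme (not any agency's actual system): grid cells are scored by recency-weighted counts from a historic incident log, and the top-scoring cells guide patrol planning. The `decay` parameter, the grid-cell identifiers and the log format are all illustrative assumptions.

```python
from collections import Counter

def hotspot_scores(incidents, decay=0.9):
    """Score grid cells by recency-weighted historic incident counts.

    incidents: list of (cell_id, weeks_ago) tuples from a historic log.
    Older incidents contribute less via exponential decay (assumed scheme).
    """
    scores = Counter()
    for cell, weeks_ago in incidents:
        scores[cell] += decay ** weeks_ago
    return scores

def top_hotspots(incidents, k=3, decay=0.9):
    """Return the k highest-scoring cells to guide patrol planning."""
    return [cell for cell, _ in hotspot_scores(incidents, decay).most_common(k)]

# Toy historic log: (grid cell, weeks since the incident)
log = [("A1", 0), ("A1", 1), ("B2", 0), ("C3", 5), ("A1", 2), ("B2", 3)]
print(top_hotspots(log, k=2))  # → ['A1', 'B2']
```

Real deployments layer far more onto this (spatial smoothing, near-repeat effects, real-time feeds such as Korea's stalking prevention system), and precisely because the inputs are historic police records, the scrutiny for feedback loops noted above applies here too.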

These uses are advancing quickly, but public trust hinges on effective oversight, ethical use of data and the preservation of human judgement in decisions that affect people’s rights and safety.

Examples from practice

  • Germany: Risk assessment for potential extremists. The RADAR-iTE tool helps German authorities assess the threat level of individuals known to law enforcement by combining standardised risk indicators with professional judgement.
  • Brazil: Preventing blackouts through wildfire detection. Brazil’s electricity regulator uses AI to monitor vegetation near power lines and issue early warnings — helping prevent wildfires and reduce energy outages.
  • United States: AI for post-disaster damage assessment. The US Federal Emergency Management Agency (FEMA) uses AI tools to process satellite imagery and assess structural damage after hurricanes, reducing time to recovery and focusing resources where they’re needed most.
  • Korea: AI-powered CCTV for crime and emergency detection. AI-enhanced surveillance systems identify unusual behaviour (e.g. loitering, fighting) in real time, helping dispatch emergency services faster while anonymising sensitive data.
  • Singapore: Virtual twin for crisis planning. Singapore’s “Virtual Singapore” 3D model integrates real-time data to simulate scenarios, model flood risks and optimise emergency response.

Untapped potential and the way forward

AI can improve public safety by enhancing foresight, emergency response and law enforcement. It can support faster disaster interventions and help detect threats, but it must be used with caution given the sensitivity of the work and the weight of its potential outcomes. Governments should ensure transparent design, public engagement, and strong evaluation and oversight. AI must remain a tool that informs accountable human judgement, not a replacement for it, especially in high-stakes areas.

Learn more

Review a detailed section on AI in law enforcement and disaster risk management here.