Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Police Early Intervention System (EIS)

This is a data-driven Early Intervention System (EIS) for police departments. The system uses a police department's data to predict which officers are likely to have an adverse interaction with the public. An adverse incident can be defined on a department-by-department basis, but typically includes unjustified uses of force, officer injuries, preventable accidents, and sustained complaints. The goal is to provide officers with additional training, counseling, and other resources before an adverse incident occurs.

How to Run the Pipeline

The pipeline has two main configurations: modeling and production. The modeling configuration has three distinct steps.

Build features

python3 -m eis.run --config officer_config.yaml --labels labels_config.yaml --buildfeatures

In this stage, features and labels are built for the time durations specified in the config files.
Features are stored in the schema defined by schema_feature_blocks in the config file; labels are stored in the table specified by officer_labels_table in the config.

Generate matrices

python3 -m eis.run --config officer_config.yaml --labels labels_config.yaml --generatematrices

All possible train/test splits are generated, and the resulting matrices are saved to the directory specified by project_path in the config.
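
For orientation, here is a minimal Python sketch that reads the config keys named in the steps above. The three key names come from this documentation; everything else in the snippet, including treating them as flat top-level keys, is an assumption rather than the repository's confirmed config layout.

    # Minimal sketch: read the config keys referenced in the steps above.
    # schema_feature_blocks, officer_labels_table and project_path are named in
    # this documentation; treating them as flat top-level keys is an assumption.
    import yaml

    with open("officer_config.yaml") as f:
        config = yaml.safe_load(f)

    print(config["schema_feature_blocks"])  # schema holding the feature tables
    print(config["officer_labels_table"])   # table holding the labels
    print(config["project_path"])           # directory where matrices are saved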

Run models

python3 -m eis.run --config officer_config_collate_daily.yaml --labels labels_config.yaml

Running the pipeline with no stage flags completes the modeling run. The pipeline first checks whether the feature-building and matrix-generation stages have been completed; if not, those stages are run before modeling.

The results schema is populated in this stage. The schema includes the tables:

  • evaluations: metrics and values for each model (e.g., precision@100)
  • experiments: stores the config (JSON) for each experiment hash
  • feature_importances: for each model, gives feature importance values as well as rank (abs and pct)
  • individual_importances: stores 5 risk factors for each officer per model
  • model_groups: feature list, model config, model parameters
  • models: stores all information pertinent to each model
  • predictions: for each model, stores the risk scores per officer
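
As an illustration of how these tables might be queried once a run finishes, the sketch below pulls the top risk scores for one model. The results schema and the predictions table are named in the list above; the column names (model_id, entity_id, score) and the connection string are assumptions, not the repository's confirmed layout.

    # Hypothetical sketch: list the 100 highest-risk officers for one model.
    # results.predictions is documented above; column names are assumptions.
    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql://user:password@localhost/eis")  # placeholder DSN

    query = text("""
        SELECT entity_id, score
        FROM results.predictions
        WHERE model_id = :model_id
        ORDER BY score DESC
        LIMIT 100
    """)

    with engine.connect() as conn:
        for entity_id, score in conn.execute(query, {"model_id": 1}):
            print(entity_id, score)

Ranking by score in this way mirrors top-k metrics such as the precision@100 values recorded in the evaluations table.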

After the model runs are completed and a model is picked, the production setup lets the user run a specific model group and score the list of active officers for a provided date.

python3 -m eis.run --config officer_config.yaml --labels labels_config.yaml --production --modelgroup 5709 --date 2015-02-22

The production schema will be populated at this stage. The schema includes the tables:

  • models: information about the models run
  • feature_importances: for each model, gives feature importance values as well as rank (abs and pct)
  • individual_importances: gives five risk factors contributing to an officer’s risk score at a given date
  • predictions: gives the risk score for each officer per model
  • time_delta: shows the changes in risk score for the officers over time
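
As a rough sketch of reading these tables, the snippet below loads per-officer score changes from the time_delta table, which the list above says tracks risk-score movement over time. Every column name here (entity_id, as_of_date, score_change) is an assumed placeholder rather than a documented schema.

    # Hypothetical sketch: inspect how officers' risk scores move between runs.
    # production.time_delta is documented above; all column names are assumptions.
    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:password@localhost/eis")  # placeholder DSN

    deltas = pd.read_sql(
        "SELECT entity_id, as_of_date, score_change FROM production.time_delta",
        engine,
        parse_dates=["as_of_date"],
    )
    print(deltas.sort_values(["entity_id", "as_of_date"]).head())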

Quickstart Documentation

Before running the modeling pipeline, review the prerequisites and structure documentation:

  1. Configure the Machine.
  2. Documentation about the structure and contents of the repositories.
  3. Setup Database Connection.

Once the prerequisites are met, the full pipeline can be run (see the pipeline documentation).

Process

Once the pipeline has been run, the results can be visualized using the webapp.


Issues

Please use GitHub's issue tracker to report issues and suggestions.

Contributors

  • 2016: Tom Davidson, Henry Hinnefeld, Sumedh Joshi, Jonathan Keane, Joshua Mausolf, Lin Taylor, Ned Yoxall, Joe Walsh (Technical Mentor), Jennifer Helsby (Technical Mentor), Allison Weil (Project Manager).
  • 2015: Jennifer Helsby, Samuel Carton, Kenneth Joseph, Ayesha Mahmud, Youngsoo Park, Joe Walsh (Technical Mentor), Lauren Haynes (Project Manager).

About the tool


GitHub stars:

  • 46

GitHub forks:

  • 18


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.