Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.




In the beginning machines learned in darkness, and data scientists struggled in the void to explain them.

Let there be light.

InterpretML is an open-source package that incorporates state-of-the-art machine-learning interpretability techniques under one roof. With this package, you can train interpretable glass box models and explain black box systems. InterpretML helps you understand your model’s global behavior, or understand the reasons behind individual predictions.

Interpretability is essential for:

  • Model debugging – Why did my model make this mistake?
  • Feature Engineering – How can I improve my model?
  • Detecting fairness issues – Does my model discriminate?
  • Human-AI cooperation – How can I understand and trust the model’s decisions?
  • Regulatory compliance – Does my model satisfy legal requirements?
  • High-risk applications – Healthcare, finance, judicial, …


Python 3.6+ | Linux, Mac, Windows

pip install interpret

Introducing the Explainable Boosting Machine (EBM)

EBM is an interpretable model developed at Microsoft Research. It uses modern machine learning techniques like bagging, gradient boosting, and automatic interaction detection to breathe new life into traditional GAMs (Generalized Additive Models). This makes EBMs as accurate as state-of-the-art techniques like random forests and gradient-boosted trees. However, unlike these black box models, EBMs produce exact explanations and are editable by domain experts.
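The reason EBM explanations are exact comes from the GAM form: a prediction is just the sum of one learned function per feature, so each feature's contribution can be read off directly. A minimal plain-Python sketch (not the interpret library; the shape functions and feature values below are hypothetical lookup tables, while a real EBM learns piecewise functions via cyclic gradient boosting):

```python
import math

# Hypothetical learned shape functions, one per feature. In a real EBM these
# are piecewise-constant functions learned by cyclic gradient boosting.
shape_age = {25: -0.4, 40: 0.1, 60: 0.7}        # contribution of "age"
shape_income = {30_000: -0.2, 90_000: 0.5}      # contribution of "income"
intercept = -0.3

def gam_logit(age, income):
    # A GAM score is the sum of per-feature contributions plus an intercept.
    # The explanation is exact because each term IS the model.
    return intercept + shape_age[age] + shape_income[income]

def predict_proba(age, income):
    # Logistic link turns the additive score into a probability.
    return 1.0 / (1.0 + math.exp(-gam_logit(age, income)))

score = gam_logit(60, 90_000)   # -0.3 + 0.7 + 0.5 = 0.9
```

Editing the model amounts to editing a shape function's values, which is why domain experts can adjust an EBM directly.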

Dataset (AUROC)  Domain    Logistic Regression  Random Forest  XGBoost    Explainable Boosting Machine
Adult Income     Finance   .907±.003            .903±.002      .927±.001  .928±.002
Heart Disease    Medical   .895±.030            .890±.008      .851±.018  .898±.013
Breast Cancer    Medical   .995±.005            .992±.009      .992±.010  .995±.006
Telecom Churn    Business  .849±.005            .824±.004      .828±.010  .852±.006
Credit Fraud     Security  .979±.002            .950±.007      .981±.003  .981±.003

Notebook for reproducing table

Supported Techniques

Interpretability Technique   Type
Explainable Boosting         glassbox model
Decision Tree                glassbox model
Decision Rule List           glassbox model
Linear/Logistic Regression   glassbox model
SHAP Kernel Explainer        blackbox explainer
LIME                         blackbox explainer
Morris Sensitivity Analysis  blackbox explainer
Partial Dependence           blackbox explainer
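Of the blackbox explainers above, Partial Dependence is simple enough to sketch from first principles: fix one feature at each value of a grid, and average the model's predictions over the dataset. A minimal plain-Python sketch (not interpret's implementation; the model and data here are hypothetical):

```python
def partial_dependence(model, X, feature_idx, grid):
    # For each grid value v, overwrite the chosen feature in every row with v
    # and average the model's predictions: PD(v) = mean_i f(x_i with x_j = v).
    pd_values = []
    for v in grid:
        preds = []
        for row in X:
            modified = list(row)        # copy so the original row is untouched
            modified[feature_idx] = v
            preds.append(model(modified))
        pd_values.append(sum(preds) / len(preds))
    return pd_values

# Hypothetical blackbox: depends linearly on feature 0, ignores feature 1.
model = lambda x: 2 * x[0] + 0.0 * x[1]
X = [[1, 5], [2, 6], [3, 7]]
pd = partial_dependence(model, X, feature_idx=0, grid=[0, 1, 2])
# pd == [0.0, 2.0, 4.0], recovering the linear effect of feature 0
```

Because the explainer only calls `model(...)`, it works on any blackbox, which is the defining property of this family of techniques.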

Train a glass box model

Let’s fit an Explainable Boosting Machine

from interpret.glassbox import ExplainableBoostingClassifier

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# or substitute with LogisticRegression, DecisionTreeClassifier, RuleListClassifier, ...
# EBM supports pandas dataframes, numpy arrays, and handles "string" data natively.

Understand the model

from interpret import show

ebm_global = ebm.explain_global()
show(ebm_global)

Global Explanation Image

Understand individual predictions

ebm_local = ebm.explain_local(X_test, y_test)
show(ebm_local)

Local Explanation Image

And if you have multiple model explanations, compare them

show([logistic_regression_global, decision_tree_global])

Dashboard Image

If you need to keep your data private, use Differentially Private EBMs (see DP-EBMs)

from interpret.privacy import DPExplainableBoostingClassifier, DPExplainableBoostingRegressor

dp_ebm = DPExplainableBoostingClassifier(epsilon=1, delta=1e-5)  # Specify privacy parameters
dp_ebm.fit(X_train, y_train)

show(dp_ebm.explain_global()) # Identical function calls to standard EBMs
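Differentially private training works by adding calibrated noise to the statistics computed from the data, so no individual record can be inferred from the model. The core primitive is the Gaussian mechanism; a minimal plain-Python illustration (not interpret's DP-EBM implementation; the sigma formula is the standard (epsilon, delta) Gaussian-mechanism calibration, and the "gradient sum" value is hypothetical):

```python
import math
import random

def gaussian_mechanism(value, sensitivity, epsilon, delta):
    # Standard calibration: sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon.
    # Larger epsilon (weaker privacy) means less noise; smaller delta means more.
    sigma = sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon
    return value + random.gauss(0.0, sigma)

random.seed(0)                         # deterministic for illustration only
true_sum = 103.0                       # e.g. a histogram bin's gradient sum
noisy = gaussian_mechanism(true_sum, sensitivity=1.0, epsilon=1.0, delta=1e-5)
```

With epsilon=1 and delta=1e-5 (the parameters shown above), sigma is roughly 4.8, so per-bin statistics are perturbed by a few units while large-scale structure survives.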

For more information, see the documentation.


InterpretML was originally created by (equal contributions): Samuel Jenkins, Harsha Nori, Paul Koch, and Rich Caruana

EBMs are a fast derivative of GA2M, which was invented by: Yin Lou, Rich Caruana, Johannes Gehrke, and Giles Hooker

Many people have supported us along the way.

We also build on top of many great packages. Please check them out!

plotly | dash | scikit-learn | lime | shap | salib | skope-rules | treeinterpreter | gevent | joblib | pytest | jupyter


“InterpretML: A Unified Framework for Machine Learning Interpretability” (H. Nori, S. Jenkins, P. Koch, and R. Caruana, 2019)

@article{nori2019interpretml,
  title={InterpretML: A Unified Framework for Machine Learning Interpretability},
  author={Nori, Harsha and Jenkins, Samuel and Koch, Paul and Caruana, Rich},
  journal={arXiv preprint arXiv:1909.09223},
  year={2019}
}

Paper link

Explainable Boosting
Differential Privacy
Sensitivity Analysis
Partial Dependence
Open Source Software


External links

Papers that use or compare EBMs

Books that discuss EBMs

External tools


About the tool

Github stars:

  • 4641

Github forks:

  • 586


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at