Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

18,717 citations of this metric

Shapley Additive Explanations (SHAP) is a method that quantifies the contribution of each feature to the output of a predictive model. Rooted in cooperative game theory, SHAP values provide a theoretically grounded approach to interpreting complex models: the difference between the model's prediction and a baseline expected value is distributed fairly among the input features. The method defines a class of additive feature attribution measures and shows that a unique solution within this class satisfies a set of desirable properties: local accuracy, missingness, and consistency.
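
For intuition, the SHAP value of feature i is the Shapley value from cooperative game theory applied to the model's output: the average marginal contribution of feature i over all subsets S of the feature set F. In the notation of the original paper (Lundberg & Lee, 2017), with f_S denoting the model's expected output conditioned on the features in S:

    \phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F|-|S|-1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]

A minimal sketch of computing SHAP values with the shap Python package follows; the XGBoost model and synthetic regression data are illustrative assumptions, not part of this catalogue entry:

    # Sketch: exact SHAP values for a tree ensemble. Assumes the shap,
    # xgboost, and scikit-learn packages are installed; the model and
    # data below are placeholders for illustration only.
    import shap
    import xgboost
    from sklearn.datasets import make_regression

    # Train a small gradient-boosted regressor on synthetic data.
    X, y = make_regression(n_samples=200, n_features=5, random_state=0)
    model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

    # TreeExplainer computes exact SHAP values for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # Local accuracy: the base value plus the per-feature attributions
    # reconstructs the model's prediction for each sample.
    print(model.predict(X[:1])[0])
    print(explainer.expected_value + shap_values[0].sum())

For models without a specialised explainer, shap.KernelExplainer (or the generic shap.Explainer interface) provides a model-agnostic approximation at a higher computational cost.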

About the metric

  • GitHub stars: 11,500
  • GitHub forks: 1,800

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.