Shapley Additive Explanations (SHAP) is a method that quantifies the contribution of each feature to the output of a predictive model. Rooted in cooperative game theory, SHAP values provide a theoretically grounded way to interpret complex models by fairly distributing the difference between a prediction and the expected model output among the input features. The method defines a class of additive feature attribution measures and shows that a unique solution within this class satisfies three desirable properties: local accuracy, missingness, and consistency.
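As a minimal sketch of how this works in practice, the snippet below uses the `shap` Python package with a tree ensemble; the diabetes dataset and random forest model are illustrative choices, not part of the method itself. It also checks the local accuracy property: the base value plus the per-feature SHAP values should reconstruct each prediction.

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Illustrative model and data; any supported model type would do.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
explanation = explainer(X.iloc[:100])  # explain the first 100 rows

# Local accuracy: base value + sum of per-feature SHAP values
# recovers the model's prediction for each explained row.
reconstructed = explanation.base_values + explanation.values.sum(axis=1)
print(np.allclose(reconstructed, model.predict(X.iloc[:100])))
```

From here, plotting utilities such as `shap.summary_plot` can visualize the resulting attributions across the dataset.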
About the metric
GitHub stars: 11,500
GitHub forks: 1,800