These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Trustworthy AI Relevance
This metric addresses Explainability and Robustness by quantifying relevant system properties. Explainability: ShapleyVIC produces a per-feature distribution of Shapley values (a "cloud") rather than a single point estimate, improving the comprehensibility and fidelity of explanations by revealing the range, central tendency, and uncertainty of each feature's importance. This helps stakeholders understand how features drive predictions and judge when explanations are reliable. Robustness: by aggregating Shapley values over multiple re-trainings, subsamples, or model perturbations, ShapleyVIC quantifies how stable importance scores are under data and model variation.
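The core idea can be sketched in a few lines. The example below is an illustrative simplification, not the ShapleyVIC implementation: it uses the closed-form Shapley values of a linear model with (approximately) independent features, where the Shapley value of feature j for an instance x is coef[j] * (x[j] - mean(X[:, j])), and re-fits the model on bootstrap resamples to obtain a distribution ("cloud") of per-feature importances. All variable names and the synthetic data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
n, p = 500, 3
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

def shapley_linear(coef, X):
    """Closed-form Shapley values for a linear model with independent
    features: coef[j] * (x[j] - mean(X[:, j])) for each instance."""
    return coef * (X - X.mean(axis=0))

# Re-train on bootstrap resamples to build a distribution of
# mean-absolute Shapley importances per feature.
n_boot = 200
importances = np.empty((n_boot, p))
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)
    coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    importances[b] = np.abs(shapley_linear(coef, X)).mean(axis=0)

mean_imp = importances.mean(axis=0)  # central tendency of each feature's importance
std_imp = importances.std(axis=0)    # spread = stability under resampling
print("mean importance per feature:", mean_imp)
print("std of importance per feature:", std_imp)
```

The spread (`std_imp`) is what a single-model Shapley analysis would miss: a feature whose importance varies widely across re-trainings should be interpreted with more caution than one whose importance is stable.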