In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's τ coefficient, is a statistic used to measure the ordinal association between two measured quantities. A τ test is a non-parametric hypothesis test for statistical dependence based on the τ coefficient. It is a measure of rank correlation: the similarity of the orderings of the data when ranked by each of the quantities.
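For n observations with no ties, the coefficient has a simple closed form (the tau-a variant; tau-b adjusts the denominator for ties):

τ = (C − D) / (n(n − 1) / 2)

where C is the number of concordant pairs (pairs that both quantities order the same way) and D is the number of discordant pairs. τ ranges from −1 (one ranking is the exact reverse of the other) to +1 (the rankings agree exactly), with values near 0 indicating no ordinal association. For example, with n = 5 there are 10 pairs, so 9 concordant and 1 discordant pair give τ = (9 − 1)/10 = 0.8.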
KRCC can support Explainability by providing a quantitative measure of how closely an AI system's outputs match expected or human-generated rankings. This can help users and developers understand whether the AI's decision process aligns with human reasoning or established benchmarks, thereby contributing to more understandable and interpretable AI outputs. However, this connection is indirect and context-dependent, as KRCC does not itself generate explanations but rather evaluates agreement in rankings.
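As a concrete illustration, here is a minimal sketch of that comparison using SciPy's kendalltau; the item scores below are hypothetical:

```python
from scipy.stats import kendalltau

# Relevance scores assigned to the same five items by a model and by
# human annotators (higher = more relevant); values are illustrative.
model_scores = [0.92, 0.41, 0.78, 0.15, 0.66]
human_scores = [0.88, 0.35, 0.55, 0.20, 0.81]

tau, p_value = kendalltau(model_scores, human_scores)
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.3f})")
# The two orderings differ by a single swapped pair (items 3 and 5),
# so tau = (9 - 1) / 10 = 0.8: strong but imperfect agreement.
```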
Trustworthy AI Relevance
This metric addresses Robustness and Fairness by quantifying relevant system properties; a sketch of both uses follows below.
Robustness: KRCC quantifies the consistency of the relative orderings a model produces across conditions (e.g., model vs. human judgments, clean vs. noisy or out-of-distribution inputs, or before and after a perturbation). A high Kendall's τ indicates stable ranking behavior, so the metric is useful for assessing resilience to distribution shift, noise, or model updates, and it complements other robustness checks.
Fairness: Ranking-correlation metrics such as KRCC can reveal systematic differences in how items associated with different demographic groups are ordered by the model relative to a reference (e.g., human experts or a fairness-aware baseline).
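A minimal sketch of both checks using SciPy, assuming hypothetical, simulated scores and group labels (the names clean_scores and noisy_scores and the A/B groups are illustrative, not part of any specific benchmark):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)

# Robustness check: does the model order items the same way on clean
# and perturbed inputs? (Scores here are simulated.)
clean_scores = rng.random(50)
noisy_scores = clean_scores + rng.normal(0.0, 0.05, size=50)  # small perturbation
tau, _ = kendalltau(clean_scores, noisy_scores)
print(f"clean vs. perturbed tau = {tau:.3f}")  # near 1.0 => stable ranking

# Fairness check: does agreement with a reference ranking differ by group?
groups = np.array(["A"] * 25 + ["B"] * 25)
reference = rng.random(50)                      # e.g., human-expert scores
model = reference + rng.normal(0.0, 0.1, size=50)
for g in ("A", "B"):
    mask = groups == g
    tau_g, _ = kendalltau(model[mask], reference[mask])
    print(f"group {g}: tau vs. reference = {tau_g:.3f}")
# A large gap between the per-group tau values would flag systematically
# different ranking agreement for one group.
```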