These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Mean reciprocal rank (MRR) measures how highly a system ranks the correct answer. For each query, the reciprocal of the rank of the first correct prediction is taken: 1 if the correct answer is ranked first, 1/2 if second, 1/3 if third, and so on. These reciprocals are then averaged over all queries, so MRR = (1/|Q|) · Σ 1/rank_i over a query set Q. In knowledge-graph link prediction the "answer" is the correct triple; MRR is also widely used to evaluate search and ranking algorithms.
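A minimal sketch of the computation follows. The function name and input format (a list of 1-based ranks of the first correct answer per query, with None when no correct answer is returned) are illustrative assumptions, not from the source.

```python
def mean_reciprocal_rank(first_correct_ranks):
    """Average of 1/rank over queries.

    first_correct_ranks: 1-based rank of the first correct prediction
    for each query; None means no correct prediction was returned,
    which contributes 0 to the sum.
    """
    reciprocal_sum = 0.0
    for rank in first_correct_ranks:
        if rank is not None:
            reciprocal_sum += 1.0 / rank
    return reciprocal_sum / len(first_correct_ranks)

# Example: correct answer ranked 1st, 2nd, and not found for three queries.
print(mean_reciprocal_rank([1, 2, None]))  # (1 + 0.5 + 0) / 3 = 0.5
```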
MRR can indirectly support Robustness by surfacing drops in ranking quality under distribution shifts or adversarial probes: sudden declines in mean reciprocal rank act as early warnings that the system may be faltering under new or noisy inputs.
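One way to operationalise this is to compare MRR on a clean evaluation set against the same queries after perturbation, and alert when the gap exceeds a tolerance. This is a hypothetical monitoring sketch: the helper names and the 0.10 alert threshold are arbitrary illustrations, not values from the source.

```python
def _mrr(ranks):
    # ranks: 1-based rank of the first correct answer per query; None = not found.
    return sum(1.0 / r for r in ranks if r is not None) / len(ranks)

def mrr_drop_alert(clean_ranks, perturbed_ranks, max_drop=0.10):
    """Flag a robustness concern when MRR on perturbed (noisy or shifted)
    queries falls more than max_drop below MRR on clean queries."""
    drop = _mrr(clean_ranks) - _mrr(perturbed_ranks)
    return drop > max_drop, drop

# Clean MRR = 0.875; perturbed MRR ~ 0.458, so the drop (~0.417) triggers an alert.
alert, drop = mrr_drop_alert([1, 1, 2, 1], [2, 3, None, 1])
print(alert, round(drop, 3))
```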
