These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Trustworthy AI Relevance
This metric addresses Explainability and Fairness by quantifying relevant system properties; a worked sketch of the computation follows this list.
- Explainability: the log odds-ratio quantifies the strength and direction of association between features (or tokens) and model outputs, producing compact, interpretable evidence for explanations and feature-level attribution.
- Fairness: computing and comparing odds-ratios across demographic or protected subgroups (or computing differential log-odds) helps surface disparate associations that may indicate bias or unequal treatment, making the metric a practical diagnostic for fairness assessments.
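As a minimal sketch of how such a value can be computed, the snippet below derives a log odds-ratio from a 2x2 contingency table. The function name, the Haldane-Anscombe smoothing term, and all example counts are illustrative assumptions, not part of any reference implementation of the catalogued metric.

```python
import math

def log_odds_ratio(pos_a, neg_a, pos_b, neg_b, smoothing=0.5):
    """Log odds-ratio of a positive outcome for group A relative to group B.

    A 0.5 smoothing term (Haldane-Anscombe correction) is added to every
    cell so the ratio stays defined when any count is zero.
    """
    odds_a = (pos_a + smoothing) / (neg_a + smoothing)
    odds_b = (pos_b + smoothing) / (neg_b + smoothing)
    return math.log(odds_a / odds_b)

# Fairness use: compare favourable model outcomes across two subgroups
# (hypothetical counts). A value near 0 means similar odds; a large
# positive or negative value flags a disparate association.
print(log_odds_ratio(pos_a=80, neg_a=20, pos_b=60, neg_b=40))    # approx. 0.97

# Explainability use: association between a token's presence and a
# predicted class, from hypothetical counts of documents containing
# the token in that class vs. in all other classes.
print(log_odds_ratio(pos_a=45, neg_a=5, pos_b=30, neg_b=120))    # approx. 3.49
```

The same function serves both readings because both reduce to comparing the odds of an outcome under two conditions; only the interpretation of the table cells changes.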
About the metric
Objective(s): Explainability, Fairness
GitHub stars: 7,100
GitHub forks: 720