We propose a set of interrelated metrics, all based on the notion of AI output concentration and the related Lorenz curve and area under the Lorenz curve, that measure the Sustainability/robustness, Accuracy, Fairness/privacy, and Explainability/accountability of any AI application. All measures are normalised between 0 and 1 and can be easily calculated and integrated.
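The entry does not include an implementation, so the following is only a minimal sketch of the underlying concentration idea: assuming the core quantity is a Gini-type index, i.e. twice the area between the Lorenz curve of the (non-negative) model outputs and the line of perfect equality. The function names and synthetic data are illustrative, not part of the original metric definition.

```python
import numpy as np

def lorenz_curve(values):
    """Lorenz curve of a set of non-negative values:
    cumulative population share (x) vs. cumulative value share (y)."""
    v = np.sort(np.asarray(values, dtype=float))
    y = np.cumsum(v) / v.sum()
    x = np.arange(1, v.size + 1) / v.size
    # Prepend the origin so the curve starts at (0, 0).
    return np.insert(x, 0, 0.0), np.insert(y, 0, 0.0)

def concentration_index(values):
    """Gini-type concentration index in [0, 1]: twice the area between
    the line of perfect equality and the Lorenz curve.
    0 = outputs spread perfectly evenly; values near 1 = output
    concentrated on a few units."""
    x, y = lorenz_curve(values)
    # Trapezoidal rule for the area under the Lorenz curve.
    area_under_lorenz = float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)
    return 1.0 - 2.0 * area_under_lorenz

# Example: how concentrated are a model's predicted credit scores?
rng = np.random.default_rng(0)
predictions = rng.gamma(shape=2.0, scale=1.0, size=1_000)
print(f"concentration index: {concentration_index(predictions):.3f}")
```

Because the index lives on a common [0, 1] scale, scores of this form can be compared across models and datasets, which is consistent with the claim above that all the proposed measures share a common normalisation.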
Trustworthy AI Relevance
This metric addresses Safety and Data Governance & Traceability by quantifying the system properties those principles require. Safety: in finance, 'SAFE AI' is typically intended to measure and reduce concrete harms such as financial losses, market instability, discriminatory lending, and erroneous automated decisions. When operationalised, it can include harm-weighted error rates, safety-constraint violation counts, and incident rates: direct safety signals that indicate whether harm is being prevented or is occurring.
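The entry names harm-weighted error rates but does not define them, so this is one hypothetical operationalisation: each decision carries a harm weight (e.g. the monetary loss if that decision is wrong), errors are weighted accordingly, and the result is normalised by the total harm at stake so it stays in [0, 1]. The function name and example data are assumptions for illustration.

```python
import numpy as np

def harm_weighted_error_rate(y_true, y_pred, harm_weights):
    """Errors weighted by the concrete harm of each mistake, normalised
    by the total harm at stake: 0 = no harm incurred, 1 = every
    decision wrong at maximal harm. (Hypothetical operationalisation.)"""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    w = np.asarray(harm_weights, dtype=float)
    errors = (y_true != y_pred).astype(float)
    return float((errors * w).sum() / w.sum())

# Example: lending decisions where some mistakes cost more than others.
y_true = [1, 0, 1, 1, 0]              # 1 = applicant actually creditworthy
y_pred = [1, 1, 0, 1, 0]              # model decisions
losses = [5.0, 20.0, 8.0, 5.0, 20.0]  # harm if this decision is wrong
rate = harm_weighted_error_rate(y_true, y_pred, losses)
print(f"harm-weighted error rate: {rate:.3f}")
```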