Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

The GFIS metric is based on the concept of entropy, more precisely on the entropy of the normalized feature-importance measure, which represents the concentration of information within a set of features. Lower entropy values indicate that most of the explanation is concentrated in a small number of features, suggesting that the model could potentially be explained more simply. Conversely, higher entropy values suggest that feature importance is more evenly distributed, which lowers explainability. Explainability is most complex when all feature weights are equal, i.e. a uniform distribution, whose entropy can serve as a benchmark for comparison.
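
As a concrete illustration, the sketch below computes the Shannon entropy of a normalized feature-importance vector and the uniform benchmark entropy log(n). The function name and the use of natural logarithms are illustrative assumptions, not the official definition; the exact GFIS formula is available on the reference website.

import numpy as np

def importance_entropy(importances):
    # Normalize absolute importances into a probability distribution
    p = np.abs(importances) / np.abs(importances).sum()
    p = p[p > 0]  # zero-weight features contribute nothing to entropy
    return -np.sum(p * np.log(p))

# Example: importance concentrated mostly in the first feature
importances = np.array([0.6, 0.25, 0.1, 0.05])
h = importance_entropy(importances)
h_max = np.log(len(importances))  # benchmark: uniform distribution over n features
print(h, h_max)  # h well below h_max suggests a simpler, more explainable model
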
On the assumption that lower entropy indicates a more explainable model, a feature-importance distribution P(F) can be compared to a baseline uniform distribution U to measure its level of explainability. However, directly comparing two high-dimensional distributions is difficult for humans, so we propose summarizing the differences in a single number: the spread between the two distributions can be measured using (i) the entropy ratio; (ii) the Kullback-Leibler divergence; or (iii) the Gini coefficient. Each of these allows a direct comparison between the two distributions and expresses the degree of explainability as a single measure.
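
The sketch below illustrates the three spread measures, assuming an importance distribution already normalized to sum to one and natural logarithms throughout; the function names are illustrative, and the precise definitions used by GFIS are given on the reference website.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_ratio(p):
    # H(P) / H(U): 1 for a uniform distribution, lower when importance is concentrated
    return entropy(p) / np.log(len(p))

def kl_to_uniform(p):
    # KL(P || U) = log(n) - H(P): 0 for a uniform distribution, larger when concentrated
    return np.log(len(p)) - entropy(p)

def gini(p):
    # Gini coefficient: 0 for a perfectly even distribution, approaching 1 when concentrated
    x = np.sort(p)
    n = len(x)
    return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * np.sum(x))

p = np.array([0.6, 0.25, 0.1, 0.05])
print(entropy_ratio(p), kl_to_uniform(p), gini(p))

Note that the entropy ratio and the KL divergence to the uniform baseline carry the same information, since KL(P || U) = log(n) - H(P), so in practice one of the two usually suffices.
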

Please refer to the reference website to access the full formula.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.