Local Feature Importance refers to the assignment of normalized feature importance to different regions of the input data space. For a given dataset D with N samples, a vector of feature importances can be computed for each individual observation d. Stacking these observation-level vectors yields a feature importance matrix in which each row holds the normalized feature importance for one instance in the data space: as with global feature importance, the distribution associated with instance d is the d-th row of the matrix. In contrast to the GFIS metric, we now face a collection of vectors rather than a single one. The aim, however, is still to collapse these vectors into a single straightforward and intuitive measure that reflects how unevenly features contribute to the final outcome, relative to a benchmark model in which all features contribute equally, i.e. to a uniform feature importance distribution.
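As a concrete illustration, the sketch below collapses an N × p matrix of per-instance importances (for example, absolute SHAP values) into a single score by comparing each row's Shannon entropy with the entropy of the uniform benchmark. The entropy-based aggregation and the function name `local_feature_importance_spread` are assumptions made for illustration only; the exact formula is the one given on the reference website.

```python
import numpy as np

def local_feature_importance_spread(importances: np.ndarray) -> float:
    """Collapse an (N x p) matrix of per-instance feature importances into
    a single score, following the entropy-based idea described above.

    ``importances[d, j]`` is the non-negative importance of feature j for
    instance d, e.g. absolute SHAP values. This is a sketch under the
    assumption that the metric compares each instance's importance
    distribution against a uniform benchmark via Shannon entropy; see the
    reference website for the exact definition.
    """
    imp = np.abs(importances).astype(float)
    # Normalize each row so it forms a probability distribution over features.
    row_sums = imp.sum(axis=1, keepdims=True)
    probs = np.divide(imp, row_sums,
                      out=np.zeros_like(imp), where=row_sums > 0)
    # Shannon entropy of each row, treating 0 * log(0) as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(probs > 0, probs * np.log(probs), 0.0)
    entropies = -terms.sum(axis=1)
    # A uniform distribution over p features has entropy log(p), so the
    # ratio is 1 when every feature contributes equally for an instance and
    # approaches 0 when a single feature dominates. Averaging over all
    # instances collapses the matrix into one intuitive score.
    p = imp.shape[1]
    return float(np.mean(entropies / np.log(p)))
```

Under this reading, a score close to 1 means importance is spread evenly across features for most instances, while a score near 0 means predictions are typically dominated by a handful of features.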
Please refer to the reference website to access the full formula.