The Normalized Scanpath Saliency (NSS) was introduced to the saliency community as a simple correspondence measure between saliency maps and ground-truth fixations, computed as the average normalized saliency at fixated locations. Unlike AUC, NSS takes the absolute saliency values into account: the predicted map is normalized to zero mean and unit standard deviation before being sampled at fixation points. As a result, NSS is sensitive to false positives, to relative differences in saliency across the image, and to general monotonic transformations.
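For concreteness, here is a minimal NumPy sketch of the computation described above. The function name `nss` and the binary fixation-map representation are illustrative choices, not a reference implementation:

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels.

    saliency_map: 2D array of predicted saliency values.
    fixation_map: 2D binary array, nonzero at human fixation locations.
    Returns a score where 0 is chance level and higher is better.
    """
    sal = np.asarray(saliency_map, dtype=np.float64)
    fix = np.asarray(fixation_map, dtype=bool)
    if not fix.any():
        raise ValueError("fixation_map contains no fixations")
    std = sal.std()
    if std == 0:
        return 0.0  # a constant map carries no information
    z = (sal - sal.mean()) / std  # normalize to zero mean, unit standard deviation
    return float(z[fix].mean())   # average normalized saliency at fixated locations
```

Because the map is z-scored, adding a constant offset or rescaling the saliency values leaves the score unchanged, while nonlinear monotonic transformations and spurious high-saliency regions do affect it.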
Trustworthy AI Relevance
NSS quantifies the fidelity of saliency/attention maps to human gaze data, which directly supports Explainability: it gives an objective measure of how well model-generated explanations (saliency maps) align with human attention, and hence how trustworthy those explanations are. Because saliency maps are often used to disclose where a model 'looks' when making decisions, NSS also supports Transparency by providing a reproducible score for communicating and auditing model behavior.
About the metric
Github stars:
- 7100
Github forks:
- 720