Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Mean Average Precision (MAP) is a metric used to evaluate object detection models such as Fast R-CNN, YOLO, and Mask R-CNN. It is the mean of the average precision (AP) values computed for each object class, where AP is the precision averaged over recall values from 0 to 1, i.e. the area under the precision-recall curve.
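To make the computation concrete, below is a minimal Python sketch of how AP and MAP are typically computed. It assumes the precision and recall values have already been derived by matching detections to ground truth at a fixed IoU threshold; the function names are illustrative rather than taken from any particular library.

```python
import numpy as np

def average_precision(recalls, precisions):
    """Area under the precision-recall curve (all-point interpolation).

    recalls, precisions: arrays for one class, ordered by descending
    detection confidence, so recall is non-decreasing.
    """
    # Pad the curve so it spans recall 0 to 1.
    r = np.concatenate(([0.0], recalls, [1.0]))
    p = np.concatenate(([0.0], precisions, [0.0]))
    # Make precision monotonically non-increasing (standard interpolation).
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum the areas of the rectangles where recall changes.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

def mean_average_precision(ap_per_class):
    """MAP is the unweighted mean of the per-class AP values."""
    return float(np.mean(ap_per_class))

# Example with hypothetical precision-recall points for one class:
recalls = np.array([0.2, 0.4, 0.4, 0.6, 0.8])
precisions = np.array([1.0, 1.0, 0.67, 0.75, 0.8])
ap = average_precision(recalls, precisions)
```

Note that benchmarks differ in the details: some average AP over several IoU thresholds (e.g., COCO) while others use a single threshold (e.g., PASCAL VOC), so reported MAP values are only comparable under the same protocol.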

Trustworthy AI Relevance

This metric addresses Robustness and Safety by quantifying relevant system properties. MAP supports the Robustness objective because it quantifies how well an AI model maintains accurate detection or ranking performance, reflecting reliability under typical conditions. As a clear performance benchmark, it also contributes indirectly to Safety by helping identify models that perform reliably, thereby reducing the risk of harmful or erroneous outputs.

Related use cases:

Uploaded on Mar 15, 2024
Language models (LMs) have proven to be powerful tools for psycholinguistic research, but most prior work has focused on purely behavioural measures (e.g., surprisal comparisons). ...


About the metric

Target sector(s):

Lifecycle stage(s):

Risk management stage(s):


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.