Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Context Entities Recall measures what fraction of the entities present in the reference context also appear in the retrieved contexts. It is particularly valuable for fact-based applications such as tourism help desks, historical question answering, and other scenarios where capturing entity details accurately is crucial. By comparing entities in the reference to those in retrieved contexts, Context Entities Recall helps assess the effectiveness of retrieval mechanisms for entity-based information.

Formula:

Context Entity Recall = |GE ∩ CE| / |GE|

 

where:

GE (Ground Truth Entities): the set of entities present in the reference context.

CE (Context Entities): the set of entities present in the retrieved contexts.

This formula captures the proportion of reference entities recalled by the retrieved contexts.
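The formula above can be sketched as a small Python function. Note this is a minimal illustration, not the catalogue's reference implementation: it assumes entity extraction has already been performed upstream (for example by an NER model or an LLM), so entities arrive as plain strings.

```python
def context_entity_recall(reference_entities, retrieved_entities):
    """Fraction of reference (ground-truth) entities also found in the retrieved contexts."""
    ge = {e.lower() for e in reference_entities}  # GE: ground-truth entities
    ce = {e.lower() for e in retrieved_entities}  # CE: context entities
    if not ge:
        return 0.0  # no reference entities to recall
    return len(ge & ce) / len(ge)  # |GE ∩ CE| / |GE|

# Hypothetical example: 3 of the 4 reference entities are recalled.
ge = ["Eiffel Tower", "Paris", "France", "1889"]
ce = ["Paris", "Eiffel Tower", "France", "Seine"]
print(context_entity_recall(ge, ce))  # 0.75
```

Lower-casing before comparison is a simplifying assumption; a production implementation would typically normalise entities more carefully (aliases, abbreviations, tokenisation).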

 

Interpreting Context Entities Recall scores:

1. High entity recall: the retrieved contexts cover most entities from the ground truth, indicating effective and relevant retrieval.

2. Low entity recall: the retrieved contexts contain few of the ground-truth entities, indicating less effective retrieval.

This metric is useful in retrieval systems where the presence of specific entities greatly affects response quality, supporting applications that demand high entity accuracy.


About the metric

Github stars: 7100
Github forks: 720



Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.