Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.


Agent Goal Accuracy is a metric used to evaluate how effectively a language-model agent identifies and achieves a user's intended goal during an interaction. It is a binary metric: an interaction scores 1 if the AI successfully accomplishes the user's goal and 0 if it does not. It is particularly valuable for assessing AI agents in task-oriented dialogues, where the objective is to fulfill specific user requests.

Formula:

Agent Goal Accuracy = (Number of Successfully Achieved Goals) / (Total Number of Goals)
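As an illustration, here is a minimal Python sketch of this computation. It assumes each interaction has already been judged with a binary goal score; the function name and inputs are hypothetical, not part of any particular library.

    # Minimal sketch of Agent Goal Accuracy, assuming each evaluated interaction
    # has already been scored as 1 (user goal achieved) or 0 (not achieved).
    from typing import Sequence

    def agent_goal_accuracy(outcomes: Sequence[int]) -> float:
        """Return the fraction of interactions whose user goal was achieved."""
        if not outcomes:
            raise ValueError("at least one evaluated interaction is required")
        return sum(outcomes) / len(outcomes)

    # Example: 3 of 4 task-oriented dialogues achieved the user's goal -> 0.75
    print(agent_goal_accuracy([1, 0, 1, 1]))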

Trustworthy AI Relevance

This metric addresses Robustness and Human Agency & Control by quantifying relevant system properties. Robustness: as a consistency and reliability metric, Agent Goal Accuracy quantifies an agent's ability to deliver correct outcomes across tasks and conditions; it helps detect failures under distribution shift, ambiguous inputs, or noisy environments and supports the monitoring and improvement of resilience (the preferred mapping for general performance metrics). Human Agency & Control: because the metric scores whether the system actually accomplishes the user's stated goal, it indicates how reliably the agent acts on the user's intent.
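As a brief, hypothetical sketch of the monitoring use described above (the condition labels and function name are illustrative only), per-condition goal accuracy can be compared to surface robustness gaps such as degraded performance on noisy or ambiguous inputs.

    # Hypothetical sketch: group binary goal outcomes by evaluation condition
    # and compare per-condition Agent Goal Accuracy to spot robustness gaps.
    from collections import defaultdict
    from typing import Dict, Iterable, Tuple

    def accuracy_by_condition(records: Iterable[Tuple[str, int]]) -> Dict[str, float]:
        """records: (condition label, binary goal outcome) pairs."""
        totals = defaultdict(lambda: [0, 0])  # condition -> [achieved, total]
        for condition, outcome in records:
            totals[condition][0] += outcome
            totals[condition][1] += 1
        return {c: achieved / total for c, (achieved, total) in totals.items()}

    # Example: accuracy drops on noisy and ambiguous inputs relative to clean ones.
    evals = [("clean", 1), ("clean", 1), ("noisy", 0), ("noisy", 1), ("ambiguous", 0)]
    print(accuracy_by_condition(evals))
    # {'clean': 1.0, 'noisy': 0.5, 'ambiguous': 0.0}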

References

Partnership on AI

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.