Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

In statistics, mean absolute error (MAE) is a measure of the error between paired observations expressing the same phenomenon. Typical pairings of Y versus X include predicted versus observed values, a measurement at a subsequent time versus an initial time, and one measurement technique versus an alternative technique. MAE is calculated as the sum of the absolute errors divided by the sample size.
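
Written as a formula, with \(y_i\) the predicted value, \(x_i\) the corresponding observed value, and \(n\) the number of pairs:

\[
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \lvert y_i - x_i \rvert
\]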

MAE connects to 'Safety' and 'Robustness' because lower prediction errors can reduce the risk of harmful or unreliable outputs, supporting safer and more dependable AI system behavior. By quantifying how much predictions deviate from actual values, MAE helps monitor and improve the system's reliability under normal conditions (robustness) and can be used to set thresholds that prevent unsafe outputs (safety). However, MAE does not address the broader aspects of these objectives, such as handling adversarial conditions or providing explanations for errors.
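
As an illustration of how MAE might feed into such a safety threshold, here is a minimal Python sketch. The function, the example arrays, and the threshold value of 0.5 are all hypothetical, chosen only to make the snippet self-contained; in practice the threshold would be set per application.

```python
import numpy as np

def mean_absolute_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error between paired observations."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Hypothetical monitoring use: flag the model if MAE on a validation
# batch exceeds a task-specific threshold (0.5 here is illustrative).
y_true = np.array([3.0, 2.5, 4.0, 5.1])
y_pred = np.array([2.8, 2.7, 3.6, 5.0])

mae = mean_absolute_error(y_true, y_pred)
MAE_THRESHOLD = 0.5  # illustrative; set per application

if mae > MAE_THRESHOLD:
    print(f"MAE {mae:.3f} exceeds threshold; review model outputs")
else:
    print(f"MAE {mae:.3f} within acceptable range")
```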

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.