Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

The Absolute Relative Error (ARE) is a metric used to evaluate the accuracy of a measurement or calculation. It measures the absolute difference between the true value and the estimated value, normalized by the true value. The formula for the absolute relative error is:

ARE = |(true value - estimated value) / true value|

The absolute relative error is commonly used because it provides a measure of accuracy that is independent of the scale of the true value. This makes it useful for comparing the accuracy of measurements or calculations made on data with widely different scales.

In many applications, the absolute relative error is expressed as a percentage by multiplying the ARE by 100. This gives a convenient way to state the accuracy of a measurement or calculation as a percentage of the true value.
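The definition above can be sketched in a few lines of Python. This is a minimal illustration, not code from the catalogue; the function name and the guard against a zero true value are our own choices (the ARE is undefined when the true value is zero, since the formula divides by it):

```python
def absolute_relative_error(true_value: float, estimated_value: float) -> float:
    """Return |(true value - estimated value) / true value|."""
    if true_value == 0:
        # The normalization divides by the true value, so ARE is
        # undefined at zero; raising is one reasonable convention.
        raise ValueError("ARE is undefined when the true value is zero")
    return abs((true_value - estimated_value) / true_value)


# Scale independence: the same 2% relative error at very different scales.
large_scale = absolute_relative_error(100.0, 98.0)        # 0.02
small_scale = absolute_relative_error(0.001, 0.00098)     # 0.02

# Expressed as a percentage by multiplying by 100.
percent_error = large_scale * 100                          # 2.0
```

Note how the two calls return the same value even though the true values differ by five orders of magnitude; that is the scale independence described above.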

Overall, the absolute relative error is a widely used metric in many fields, including engineering, science, and mathematics, to quantify the accuracy of measurements and calculations and to compare the performance of different methods or algorithms.

About the metric

  • GitHub stars: 7100
  • GitHub forks: 720



Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.