The Absolute Relative Error (ARE) is a metric used to evaluate the accuracy of a measurement or calculation. It measures the absolute difference between the true value and the estimated value, normalized by the true value. The formula for the absolute relative error is:
ARE = |(true value - estimated value) / true value|
The absolute relative error is commonly used because it provides a measure of accuracy that is independent of the scale of the true value. This makes it useful for comparing the accuracy of measurements or calculations made on data with widely different scales.
In many applications, the absolute relative error is expressed as a percentage by multiplying the ARE by 100. This provides a convenient way to express the accuracy of a measurement or calculation as a percentage of the true value.
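As a rough illustration, the sketch below computes the ARE for a single measurement and element-wise for arrays of true and estimated values. The function name and the sample numbers are assumptions made for this example, not a reference implementation of the metric.

```python
# Minimal sketch of the Absolute Relative Error (illustrative, not a reference implementation).
import numpy as np

def absolute_relative_error(true_value, estimated_value):
    """Return |(true - estimated) / true|; undefined when true_value is 0."""
    return abs((true_value - estimated_value) / true_value)

# Single measurement: true value 50, estimated value 47.
are = absolute_relative_error(50.0, 47.0)
print(f"ARE: {are:.3f} ({are * 100:.1f}%)")   # ARE: 0.060 (6.0%)

# Element-wise ARE for arrays of true and estimated values.
y_true = np.array([10.0, 200.0, 0.5])
y_pred = np.array([9.5, 210.0, 0.45])
are_per_sample = np.abs((y_true - y_pred) / y_true)
print(are_per_sample)  # [0.05 0.05 0.1 ]
```

Because each error is divided by its own true value, the three samples are comparable even though their scales differ by orders of magnitude; note that the metric is undefined when the true value is zero.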
Overall, the absolute relative error is a widely used metric in many fields, including engineering, science, and mathematics, to quantify the accuracy of measurements and calculations and to compare the performance of different methods or algorithms.
About the metric
GitHub stars: 7100
GitHub forks: 720
