In statistics, mean absolute error (MAE) is a measure of the error between paired observations expressing the same phenomenon. Typical pairings include predicted versus observed values, a subsequent time versus an initial time, and one measurement technique versus an alternative technique. MAE is calculated as the sum of the absolute errors divided by the sample size.
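In symbols, with y_i the observed value and ŷ_i the corresponding prediction over n paired observations, the definition above reads:

```latex
\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|
```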
MAE connects to 'Safety' and 'Robustness' because lower prediction errors can reduce the risk of harmful or unreliable outputs, supporting safer and more dependable AI system behavior. By quantifying how much predictions deviate from actual values, MAE helps monitor and improve the system's reliability under normal conditions (robustness) and can be used to set thresholds that prevent unsafe outputs (safety). However, MAE does not address the broader aspects of these objectives, such as handling adversarial conditions or providing explanations for errors.
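As a concrete illustration of that thresholding pattern, here is a minimal Python sketch. The function names and the threshold value are illustrative assumptions chosen for this example, not part of any standard API; a real deployment would pick the limit from domain-specific safety requirements.

```python
import numpy as np

def mean_absolute_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Sum of absolute errors divided by the sample size."""
    return float(np.mean(np.abs(y_true - y_pred)))

def is_within_safety_threshold(y_true, y_pred, threshold: float) -> bool:
    """Flag whether a model's average error stays under an application limit.

    The threshold is a hypothetical, domain-chosen value; MAE alone does not
    define what level of error counts as 'safe'.
    """
    return mean_absolute_error(np.asarray(y_true), np.asarray(y_pred)) <= threshold

# Example: monitoring a regression model on a small validation batch.
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
print(mean_absolute_error(y_true, y_pred))                         # 0.5
print(is_within_safety_threshold(y_true, y_pred, threshold=1.0))   # True
```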
Trustworthy AI Relevance
This metric addresses Robustness and Transparency by quantifying relevant system properties. Robustness: MAE quantifies a model's average prediction error and therefore directly supports assessment of reliability and performance consistency. Tracking MAE over time and across environments, noise levels, or held-out distributions reveals performance degradation under adverse conditions and helps evaluate resilience to distribution shift or noisy inputs. Transparency: because MAE is expressed in the same units as the predicted quantity, reporting it gives stakeholders a concrete, interpretable account of how far the model's outputs typically deviate from ground truth.
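A sketch of that tracking idea, using a toy linear model and synthetic Gaussian input noise; the model, data, and noise levels here are all assumptions made for illustration, and in practice the same loop would run over real held-out or time-sliced evaluation sets.

```python
import numpy as np

rng = np.random.default_rng(0)

def mae(y_true, y_pred):
    # Sum of absolute errors divided by the sample size.
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Toy stand-in for a trained model: a fixed linear rule (assumption).
def predict(x):
    return 2.0 * x + 1.0

# Probe robustness by re-evaluating the same model at rising noise levels.
x = rng.uniform(0.0, 10.0, size=1000)
y_true = 2.0 * x + 1.0  # ground truth for the clean inputs
for noise_std in (0.0, 0.5, 1.0, 2.0):
    x_noisy = x + rng.normal(0.0, noise_std, size=x.shape)
    print(f"noise_std={noise_std:.1f}  MAE={mae(y_true, predict(x_noisy)):.3f}")
```

A rising MAE as noise_std grows is exactly the degradation signal described above: the metric itself is unchanged, only the evaluation condition varies.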