Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Multi-object tracking accuracy (MOTA) summarises how many errors a tracking system makes in terms of misses (missed objects), false positives and identity mismatches. It is computed as one minus the sum of three error ratios: the ratio of misses, the ratio of false positives and the ratio of mismatches, each taken over the total number of ground-truth objects across all frames.
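The computation described above can be sketched as follows. This is a minimal illustration of the standard CLEAR MOT formulation, assuming per-frame error counts have already been obtained from a matching step between predictions and ground truth; the function name and parameters are chosen for illustration only.

```python
def mota(misses, false_positives, mismatches, ground_truth_objects):
    """Compute MOTA from per-frame error counts.

    MOTA = 1 - (total misses + total false positives + total mismatches)
               / (total ground-truth objects over all frames)

    Each argument is a sequence with one count per frame.
    """
    errors = sum(misses) + sum(false_positives) + sum(mismatches)
    total_gt = sum(ground_truth_objects)
    if total_gt == 0:
        raise ValueError("MOTA is undefined when there are no ground-truth objects")
    return 1.0 - errors / total_gt


# Example: 2 frames with 5 ground-truth objects each,
# 1 miss in the first frame and 1 false positive in the second.
score = mota(misses=[1, 0],
             false_positives=[0, 1],
             mismatches=[0, 0],
             ground_truth_objects=[5, 5])
print(score)  # 1 - 2/10 = 0.8
```

Note that MOTA can be negative when the number of errors exceeds the number of ground-truth objects, so its range is (-inf, 1].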

MOTA supports 'Safety' by quantifying the accuracy of object tracking, which is essential for preventing harmful outcomes in safety-critical applications (e.g., avoiding collisions in autonomous driving). It also supports 'Robustness' by providing a measurable indicator of how well the system maintains tracking performance under various conditions, helping to identify and mitigate potential failures.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.