These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Multi-object tracking accuracy (MOTA) summarizes how many errors a tracking system makes in terms of misses, false positives and mismatches (identity switches). It is derived from three error ratios computed over all frames: the ratio of misses, the ratio of false positives, and the ratio of mismatches, each taken relative to the total number of ground-truth objects.
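The combination of the three error ratios can be sketched in a few lines. The function below is a minimal illustration of the standard CLEAR MOT formulation (MOTA = 1 − (misses + false positives + mismatches) / ground-truth objects); the function name and the example counts are hypothetical.

```python
def mota(misses: int, false_positives: int, id_switches: int,
         num_ground_truth: int) -> float:
    """Multi-Object Tracking Accuracy (CLEAR MOT definition):
    MOTA = 1 - (FN + FP + IDSW) / GT, with counts summed over all frames."""
    if num_ground_truth <= 0:
        raise ValueError("number of ground-truth objects must be positive")
    return 1.0 - (misses + false_positives + id_switches) / num_ground_truth

# Illustrative counts: 1000 ground-truth objects across the sequence,
# with 80 misses, 50 false positives and 10 identity switches.
score = mota(80, 50, 10, 1000)
print(round(score, 2))  # → 0.86
```

Note that MOTA can be negative when the combined error count exceeds the number of ground-truth objects, so it is bounded above by 1 but not below by 0.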
MOTA supports 'Safety' by quantifying the accuracy of object tracking, which is essential for preventing harmful outcomes in safety-critical applications (e.g., avoiding collisions in autonomous driving). It also supports 'Robustness' by providing a measurable indicator of how well the system maintains tracking performance under various conditions, helping to identify and mitigate potential failures.
Trustworthy AI Relevance
Robustness: MOTA quantifies a tracker's ability to maintain correct detections and identities under challenging conditions such as occlusions, crowded scenes, sensor noise and distribution shifts. As a composite accuracy measure, it is useful for assessing the consistency and reliability of tracking systems across scenarios and for detecting performance degradation, making it directly relevant to robustness.
Data Governance & Traceability: MOTA penalizes identity switches alongside missed and false tracks, so it also reflects a system's ability to maintain persistent object identities over time.