Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

False rejection rate (FRR) is a security metric used to measure the performance of biometric systems such as voice, fingerprint, face, or iris recognition. It is the likelihood that the system mistakenly rejects an authorized user as an impostor; in other words, it measures how often the system fails to match a sample from an enrolled user to their stored template. For example, if a biometric system has an FRR of 1 percent, then out of every 100 access attempts by legitimate users, on average one will be wrongly rejected. A lower FRR indicates a more user-friendly biometric system, since legitimate users are denied access less often.

FRR should be balanced against the false acceptance rate (FAR), which measures how often the system mistakenly accepts an impostor as a legitimate user. The two metrics trade off against each other: a system that accepts every attempt achieves an FRR of zero but the worst possible FAR. The appropriate balance between the two depends on the specific use case and its security requirements. Note also that reported FRRs assume the system is tested with samples representative of the enrolled user; if, for example, a person is sick, their voice may differ from the enrolled template and their effective FRR will typically be elevated relative to their normal speaking voice.
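The definitions above can be sketched in a few lines of code. This is a minimal illustration, not an implementation from any particular biometric system: the match scores are invented, and the decision rule (accept when the score meets a threshold) is a common but assumed model of how such systems decide.

```python
def false_rejection_rate(false_rejections: int, genuine_attempts: int) -> float:
    """Fraction of genuine (enrolled-user) attempts that are wrongly rejected."""
    if genuine_attempts <= 0:
        raise ValueError("genuine_attempts must be positive")
    return false_rejections / genuine_attempts


def false_acceptance_rate(false_acceptances: int, impostor_attempts: int) -> float:
    """Fraction of impostor attempts that are wrongly accepted."""
    if impostor_attempts <= 0:
        raise ValueError("impostor_attempts must be positive")
    return false_acceptances / impostor_attempts


# The example from the text: 1 wrongful rejection in 100 genuine attempts.
frr = false_rejection_rate(1, 100)  # 0.01, i.e. 1 percent

# Illustrating the FRR/FAR trade-off with invented match scores
# (higher score = stronger claimed match to the enrolled template).
genuine_scores = [0.92, 0.85, 0.77, 0.95, 0.60]
impostor_scores = [0.30, 0.55, 0.42, 0.68, 0.25]


def rates_at_threshold(threshold: float) -> tuple[float, float]:
    """FRR and FAR when the system accepts any score >= threshold."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far
```

Sweeping the threshold makes the trade-off concrete: at threshold 0.0 every attempt is accepted (FRR = 0, FAR = 1), and raising the threshold lowers FAR at the cost of a higher FRR.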


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.