False rejection rate (FRR) is a security metric used to measure the performance of biometric systems such as voice, fingerprint, face, or iris recognition. It is the probability that the system mistakenly rejects an authorized user as an impostor, that is, the fraction of genuine attempts in which the system fails to match a sample from an enrolled user to that user's template. For example, an FRR of 1 percent means that, on average, one out of every 100 access attempts by legitimate users is rejected. A lower FRR indicates a more user-friendly system, since legitimate users are denied access less often.

FRR must be balanced against the false acceptance rate (FAR), which measures how often the system mistakenly accepts an impostor as a legitimate user. The two metrics trade off against each other: a system that accepted every attempt would achieve an FRR of zero at the cost of a maximal FAR. The appropriate operating point between them depends on the specific use case and its security requirements.

Reported FRRs also assume that the system is tested with samples representative of the enrollment conditions. If, for example, a person is sick, their voice may differ from their enrolled samples, and their effective FRR will typically be elevated relative to their normal speaking voice.
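As a minimal sketch of how these rates are computed, the snippet below evaluates FRR and FAR from matcher similarity scores at a given decision threshold. The `frr_far` helper and the score distributions are hypothetical, for illustration only; sweeping the threshold makes the trade-off described above visible, since raising it lowers FAR while raising FRR.

```python
import numpy as np

def frr_far(genuine_scores, impostor_scores, threshold):
    """Compute FRR and FAR for a score-based biometric matcher.

    genuine_scores: similarity scores from genuine (enrolled-user) attempts.
    impostor_scores: similarity scores from impostor attempts.
    threshold: attempts scoring at or above it are accepted.
    """
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    frr = np.mean(genuine < threshold)    # fraction of genuine attempts rejected
    far = np.mean(impostor >= threshold)  # fraction of impostor attempts accepted
    return frr, far

# Illustrative (synthetic) scores: genuine attempts tend to score
# higher than impostor attempts, but the distributions overlap.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.4, 0.1, 1000)

# Raising the threshold lowers FAR but raises FRR, and vice versa.
for t in (0.5, 0.6, 0.7):
    frr, far = frr_far(genuine, impostor, t)
    print(f"threshold={t:.1f}  FRR={frr:.3f}  FAR={far:.3f}")
```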