False acceptance rate (FAR) is a security metric used to measure the performance of biometric systems such as voice, fingerprint, face, or iris recognition. It is the probability that the system mistakenly accepts an unauthorized user as legitimate: the proportion of impostor attempts that the system incorrectly matches to an enrolled user's template, i.e. FAR = (number of false accepts) / (number of impostor attempts). For example, a FAR of 0.1 percent means that, on average, one out of every 1,000 impostor attempts is accepted as a legitimate user. A lower FAR indicates a more secure biometric system, as it reduces the risk of unauthorized access.

FAR should be balanced against the false rejection rate (FRR), the proportion of attempts by legitimate users that the system fails to recognize. The two metrics trade off against each other: FAR alone can be driven to zero simply by rejecting every attempt, but only at the cost of rejecting all legitimate users, so the appropriate value of each metric depends on the specific use case and its security requirements.

FAR also presumes the system is not being tested adversarially: most biometric systems can be presented with tailored false inputs (spoofing or presentation attacks) that are accepted nearly 100 percent of the time, regardless of the nominal FAR.
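As a minimal sketch of how these two rates interact, the snippet below computes FAR and FRR from synthetic matcher similarity scores at several acceptance thresholds. The scores, function name, and threshold values are illustrative assumptions, not part of any particular biometric system.

```python
# Illustrative sketch (synthetic scores, hypothetical names): how FAR and FRR
# trade off as the acceptance threshold of a score-based matcher is varied.

def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR = fraction of impostor scores accepted; FRR = fraction of genuine scores rejected."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Synthetic similarity scores: genuine comparisons tend to score higher than impostors.
genuine_scores = [0.62, 0.70, 0.74, 0.81, 0.85, 0.88, 0.91, 0.93, 0.95, 0.97]
impostor_scores = [0.05, 0.12, 0.18, 0.25, 0.31, 0.38, 0.44, 0.52, 0.63, 0.71]

for threshold in (0.3, 0.5, 0.7, 0.9):
    far, frr = far_frr(genuine_scores, impostor_scores, threshold)
    print(f"threshold={threshold:.1f}  FAR={far:.0%}  FRR={frr:.0%}")

# Raising the threshold lowers FAR (fewer impostors accepted) but raises FRR
# (more legitimate users rejected); rejecting everyone drives FAR to 0 at the
# cost of FRR = 100%, which is why the two must be balanced per use case.
```

Running the sweep prints FAR falling from 60% to 0% while FRR climbs from 0% to 60%, making the trade-off described above concrete: the operating threshold fixes a point on this curve, and the security requirements of the deployment determine where that point should sit.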