
Credo AI Transparency Reports: Facial Recognition application

Jun 5, 2024


A facial recognition service provider needed trustworthy Responsible AI fairness reporting to meet customer demand for transparency. It used Credo AI’s platform to share fairness and performance evaluations of its identity verification service with its customers.

The service provider was able to take actionable steps to ensure that the development and performance of the service were aligned with requirements from regulators, standard-setting bodies, and industry best practices. The Credo AI platform enabled the provider to use the False Non-Match Rate (FNMR) and False Match Rate (FMR) to measure performance and disparities at varying confidence thresholds, following the guidelines set out in the National Institute of Standards and Technology’s (NIST) Face Recognition Vendor Test.
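To make these metrics concrete, here is a minimal sketch in Python of how FMR and FNMR can be computed from pairwise comparison scores as the confidence threshold varies. The scores, threshold values, and function name are illustrative assumptions, not data or code from the case study or the Credo AI platform.

```python
import numpy as np

def fmr_fnmr(scores: np.ndarray, same_identity: np.ndarray, threshold: float):
    """Compute FMR and FNMR at a given similarity threshold.

    scores        -- similarity score for each comparison pair
    same_identity -- True where the pair is a genuine (same-person) pair
    threshold     -- pairs scoring at or above this are declared a match
    """
    declared_match = scores >= threshold
    impostor = ~same_identity
    # FMR: fraction of impostor pairs wrongly accepted as matches
    fmr = declared_match[impostor].mean()
    # FNMR: fraction of genuine pairs wrongly rejected
    fnmr = (~declared_match[same_identity]).mean()
    return fmr, fnmr

# Synthetic example: genuine pairs tend to score higher than impostor pairs
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.8, 0.1, 1000),   # genuine pairs
                         rng.normal(0.4, 0.1, 1000)])  # impostor pairs
labels = np.array([True] * 1000 + [False] * 1000)

for t in (0.5, 0.6, 0.7):  # sweep confidence thresholds
    fmr, fnmr = fmr_fnmr(scores, labels, t)
    print(f"threshold={t:.2f}  FMR={fmr:.4f}  FNMR={fnmr:.4f}")
```

Sweeping the threshold this way makes the trade-off visible: raising it lowers the FMR but raises the FNMR, which is why both metrics are reported at multiple operating points.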

Benefits of using the tool in this use case

Credo AI helped the service provider curate a representative image dataset of real subjects. Its diversity of ages, genders, apparent skin types, and ambient lighting conditions, the reliability of its annotations, and the fact that it had never been used by the customer made it an effective dataset for the assessment. More than 100 million pairwise identity verification comparisons were performed to ensure the results were statistically significant.

A Responsible AI fairness report was generated to provide actionable findings and insights into the performance and fairness of the service. The transparency report communicated disaggregated performance metrics across intersectional demographic groups, undesirable biases that the service may exhibit, and the groups for which mitigation is most needed.
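Below is a minimal sketch of what such disaggregation can look like in practice: genuine-pair outcomes are grouped by intersectional demographic attributes and a per-group FNMR is computed. All column names, group labels, and rates are invented for illustration and are not drawn from the actual report.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 600

# Hypothetical genuine (same-person) comparison outcomes, each tagged with
# the subject's demographic attributes
df = pd.DataFrame({
    "age_band":  rng.choice(["18-30", "31-50", "51+"], n),
    "skin_type": rng.choice(["I-II", "III-IV", "V-VI"], n),
})
# Assume one age band fails to match more often, so a disparity is visible
base_fnmr = np.where(df["age_band"] == "51+", 0.06, 0.02)
df["matched"] = rng.random(n) >= base_fnmr

# Disaggregated FNMR: share of genuine pairs not matched, per
# intersectional (age_band, skin_type) group
fnmr_by_group = (
    df.groupby(["age_band", "skin_type"])["matched"]
      .apply(lambda m: 1.0 - m.mean())
      .rename("FNMR")
)
print(fnmr_by_group)

# Largest absolute FNMR gap between any two intersectional groups
print(f"Largest FNMR gap: {fnmr_by_group.max() - fnmr_by_group.min():.4f}")
```

Reporting the per-group table rather than a single aggregate rate is what lets a reader see which groups the service underserves and where mitigation effort should go first.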

This process helped illustrate how transparency and disclosure reporting can encourage responsible practices to be cultivated, engineered, and managed throughout the AI development life cycle. 

Shortcomings of using the tool in this use case

The approach underscored how strongly fairness assessment depends on data availability and diversity. The dataset did not account for age-related facial changes, which limits the assessment’s applicability to practical facial recognition scenarios, and it showed insufficient variability in image quality, failing to capture the breadth of conditions under which the technology is used in the real world.


This case study was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.
