Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Advai: Advanced Evaluation of AI-Powered Identity Verification Systems

Jun 5, 2024

The project introduces a method for evaluating the AI systems of identity verification vendors that goes beyond traditional testing on sample image datasets. As verification tools, and the methods used to deceive them, grow more sophisticated, it becomes vital to assess vendors' machine learning claims accurately. Advai offers a service that cross-evaluates providers against the vulnerabilities most critical to an organisation, increasing resilience to online threats, adversarial activity, and fraud.

We take this adversarial-driven approach to combat an increasingly sophisticated fraud landscape and to give organisations confidence that their chosen ID verification system is robust, unbiased, and optimised for real-world challenges.

There are many different components to identity verification, and we do not claim to address the full scope of vulnerabilities that these systems may have. However, Advai holds a market-leading library of adversarial techniques and tools, and we believe we can successfully tackle a meaningful subset of these vulnerabilities.
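To make the idea of adversarial robustness testing concrete, the sketch below shows one simple form it can take: perturbing an input image with increasing noise budgets and checking whether the verification decision flips. This is purely illustrative and is not Advai's actual tooling; the `verify` model, its weights, and the noise-based perturbation are hypothetical stand-ins for a vendor's real system and a real adversarial technique.

```python
import numpy as np

def verify(image, weights, threshold=0.5):
    """Toy stand-in for a vendor's ID-verification model: computes a
    match score via a logistic function and thresholds it."""
    score = 1.0 / (1.0 + np.exp(-np.dot(weights, image.ravel())))
    return score >= threshold, score

def robustness_sweep(image, weights, epsilons):
    """Apply uniform noise at increasing budgets (epsilons) and record
    the verification decision and score at each budget."""
    rng = np.random.default_rng(0)  # fixed seed for reproducibility
    baseline_decision, _ = verify(image, weights)
    results = []
    for eps in epsilons:
        noise = rng.uniform(-eps, eps, size=image.shape)
        perturbed = np.clip(image + noise, 0.0, 1.0)  # keep valid pixel range
        decision, score = verify(perturbed, weights)
        results.append((eps, decision, round(float(score), 3)))
    return baseline_decision, results
```

In practice, a sweep like this would be run per vendor with a shared set of adversarial techniques, so that the budget at which each system's decisions degrade can be compared on a like-for-like basis.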

Benefits of using the tool in this use case

  • Increased resilience and robustness of ID verification systems against fraud and adversarial attacks.

  • Enhanced understanding and control over potential biases and ethical concerns in ID verification.
  • Superior assessment capabilities leading to better-informed vendor selection, based on comprehensive, comparative benchmarks.
  • Real-world testing for vulnerabilities ensures the system's effectiveness across various conditions and user demographics.
  • Streamlined vendor evaluation process through user-friendly tools and comprehensive reporting. 

Shortcomings of using the tool in this use case

  • May require sophisticated understanding and collaboration from vendors to implement suggested improvements.

  • Continuous evolution of adversarial techniques means that robustness assessments may need to be frequently updated.
  • The approach could potentially lengthen the vendor selection process due to the depth and breadth of testing.
  • There might be a trade-off between enhanced security and user convenience or system performance.

This case study was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques.
