Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Advai: Robustness Assessment of a Facial Verification System Against Adversarial Attacks

Jun 5, 2024

Advai evaluated the resilience of a facial verification system used for authentication, particularly in the context of preventing image manipulation and ensuring robustness against adversarial attacks. The focus was on determining the system’s ability to detect fraudulent attempts to bypass facial verification or ‘liveness’ detection, and to resist manipulation through fake imagery and feature-space attacks.

A multifaceted attack approach was employed to uncover weaknesses that could be exploited by bad actors. Rigorous, empirical methods that exploit a model’s algorithmic traits allow for a more objective analysis than the standard industry practice of testing such systems on datasets of normal, unaltered images and assigning only an accuracy score. Furthermore, the approach identifies the specific components where the model is vulnerable, providing clear next steps for improving the facial verification system’s robustness.
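
To make the contrast with accuracy-only testing concrete, the sketch below illustrates the kind of gradient-based probe such an assessment can use. It is not Advai’s actual tooling: `embed_model` (a generic face-embedding network), the [0, 1] pixel range, the perturbation budget `epsilon` and the decision `threshold` are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not Advai's tooling): probing a face-verification
# model with a single FGSM-style perturbation. Assumes a PyTorch model
# `embed_model` that maps a face image tensor (C, H, W) in [0, 1] to an embedding,
# and that verification passes when the cosine similarity between the probe and
# the enrolled embedding exceeds `threshold`.
import torch
import torch.nn.functional as F

def fgsm_impersonation_probe(embed_model, probe_img, enrolled_emb,
                             epsilon=4 / 255, threshold=0.5):
    """Return (clean_score, adversarial_score, passes_after_attack).

    A large jump in similarity at a small epsilon indicates that the
    verification decision can be flipped by an imperceptible perturbation.
    """
    probe = probe_img.clone().detach().requires_grad_(True)

    # Clean similarity between the probe and the enrolled identity.
    emb = F.normalize(embed_model(probe.unsqueeze(0)), dim=-1)
    score = F.cosine_similarity(emb, enrolled_emb.unsqueeze(0)).squeeze()

    # Gradient of the similarity w.r.t. the input pixels: the algorithmic
    # trait being exploited is the model's local sensitivity in feature space.
    score.backward()

    # One signed-gradient step that pushes the similarity upward, simulating
    # an impersonation attempt within an epsilon pixel budget.
    adv = (probe + epsilon * probe.grad.sign()).clamp(0.0, 1.0).detach()

    with torch.no_grad():
        adv_emb = F.normalize(embed_model(adv.unsqueeze(0)), dim=-1)
        adv_score = F.cosine_similarity(adv_emb, enrolled_emb.unsqueeze(0)).squeeze()

    return score.item(), adv_score.item(), adv_score.item() > threshold
```

Running a probe like this over a set of impostor pairs and recording how often the perturbed score crosses the verification threshold yields a robustness measure that accuracy testing on unaltered images cannot provide.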

Benefits of using the tool in this use case

  • Enhanced security through the discovery of system vulnerabilities and the implementation of training-data and system-level mitigations.
  • Increased trust in the facial verification system from users and clients, owing to its measurable resistance to sophisticated attacks.
  • Insights into the model’s biases, allowing operational boundaries to be established to contain them and informing a better approach to procuring representative data.
  • Assurance for stakeholders through a demonstrated comparative advantage against industry benchmarks.

Shortcomings of using the tool in this use case

  • The adversarial attacks may not encompass all potential real-world scenarios, especially as attack methodologies evolve.
  • Findings may necessitate continuous re-evaluation and updating of the system’s security measures.
  • Recommendations may increase the complexity and cost of operating and maintaining the system.

This case study was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.
