Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Advai Versus

Advai Versus is a tool for developers to test and evaluate a company's AI systems. Integrated within the MLOps architecture, Advai Versus can be used to test for biases, security, and other critical aspects, ensuring that the AI models are robust and fit for purpose.

Applicable to any AI model and at every stage of the data pipeline, the tool works to improve a system's models. How it is applied depends on the AI system's operational boundaries, taking the intended field of use into account when defining robustness. The tool tests for data quality and reliability, checking for gaps in the training data. In addition, cognitive probing tests and detection of attacks on AI models help identify when an AI system is being duped and how it perceives its inputs.
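Advai does not publish a public API for Versus, so the snippet below is only a generic, illustrative sketch of the kind of adversarial probing described above: it applies a fast gradient sign method (FGSM) perturbation to a PyTorch image classifier and compares accuracy before and after. The model, batch, and epsilon value are placeholder assumptions.

```python
# Generic FGSM robustness probe; illustrative only, not the Advai Versus API.
import torch
import torch.nn.functional as F

def fgsm_robustness_check(model, images, labels, epsilon=0.01):
    """Return (clean accuracy, adversarial accuracy) for one labelled batch."""
    model.eval()
    images = images.clone().detach().requires_grad_(True)

    # Loss on the clean batch; gradients flow back to the input pixels.
    loss = F.cross_entropy(model(images), labels)
    loss.backward()

    # One FGSM step: nudge every pixel in the direction that increases the loss.
    # Assumes inputs are scaled to [0, 1].
    adv_images = (images + epsilon * images.grad.sign()).clamp(0.0, 1.0)

    with torch.no_grad():
        clean_acc = (model(images).argmax(dim=1) == labels).float().mean().item()
        adv_acc = (model(adv_images).argmax(dim=1) == labels).float().mean().item()
    return clean_acc, adv_acc

# Example (hypothetical model and batch):
# clean_acc, adv_acc = fgsm_robustness_check(my_model, batch_images, batch_labels)
```

A large drop from the clean accuracy to the adversarial accuracy is the kind of robustness gap such testing is designed to surface.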

Key features include:

  • Automated integration: integrates into the MLOps pipeline to enhance functionality.
  • AI model assurance: evaluation of AI models to ensure they meet Advai standards.
  • Comprehensive testing: a range of services to test various aspects, including bias and security (see the bias-check sketch after this list).
  • Red teaming: challenges and assures AI models in order to fortify them.
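As with the robustness sketch above, the following is a hedged, generic illustration of a bias test of the kind listed under comprehensive testing, not Advai's implementation: it computes the demographic parity difference, i.e. the gap in positive-prediction rates between groups, using placeholder predictions and group labels.

```python
# Generic demographic parity check; illustrative only, not the Advai Versus API.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the groups in `group`."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Example with made-up data: 1 = positive decision, 0 = negative decision.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5 -> large disparity
```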

This tool is one of three offered by Advai; it is the second step in the process, preceded by Advai Advance and followed by Advai Insight.


Use Cases

Advai: Robustness Assessment of a Facial Verification System Against Adversarial Attacks

Advai were involved in evaluating the resilience of a facial verification system used for authentication, specifically in the context of preventing image manipulation and ensuring robustness against adversarial attacks. The focus was on determining the sys...
Jun 5, 2024

Advai: Advanced Evaluation of AI-Powered Identity Verification Systems

The project introduces an innovative method to evaluate identity verification vendors' AI systems, crucial to online identity verification, which goes beyond traditional sample image dataset testing. As verification tools and the methods to deceive t...
Jun 5, 2024

Advai: Operational Boundaries Calibration for AI Systems via Adversarial Robustness Techniques

To enable AI systems to be deployed safely and effectively in enterprise environments, there must be a solid understanding of their fault tolerances in response to adversarial stress-testing methods. Our stress-testing tools identify vulnerabi...
Jun 5, 2024

Advai: Assurance of Computer Vision AI in the Security Industry

Advai’s toolkit can be applied to assess the performance, security and robustness of an AI model used for object detection. Systems require validation to ensure they can reliably detect various objects within challenging visual environments. Our tech...
Jun 5, 2024

Advai: Implementing a Risk-Driven AI Regulatory Compliance Framework

As AI becomes central to organisational operations, it is crucial to align AI systems and models with emerging regulatory requirements globally. This use case focuses on integrating a risk-driven approach, based on and aligning with ISO 31000 princip...
Jun 5, 2024

Advai: Regulatory Aligned AI Deployment Framework

The adoption of AI systems in high-risk domains demands adherence to stringent regulatory frameworks to ensure safety, transparency, and accountability. This use case focuses on deploying AI in a manner that not only meets performance metrics but als...
Jun 5, 2024


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.