Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Dioptra

Dioptra is an open source software test platform for assessing the trustworthy characteristics of artificial intelligence (AI). It helps developers determine which types of attacks may negatively impact their model's performance. Dioptra supports the Measure function of the NIST AI Risk Management Framework by providing functionality to assess, analyse, and track identified AI risks.

Motivation

Many of our systems' essential functions rely on machine learning algorithms. These models, however, have been shown to be vulnerable to a variety of adversarial attacks, which may cause them to misbehave in ways that benefit the adversary. Because these models must function reliably, there is a pressing need for defenses that prevent and mitigate such attacks. Developing and evaluating robustness metrics that account for the diversity of potential attacks remains an open challenge.
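To make the idea of an adversarial attack concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), a canonical evasion attack, applied to a toy logistic model. This is purely illustrative and not part of Dioptra; all names, weights, and parameter values are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Shift each feature of x by +/- epsilon to increase the model's loss."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(score)                      # predicted probability of class 1
    grad = [(p - y) * wi for wi in w]       # gradient of cross-entropy loss w.r.t. x
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad)]

# A toy model that classifies the clean input correctly...
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.6)
clean_score = sum(wi * xi for wi, xi in zip(w, x)) + b      # positive: class 1, correct
adv_score = sum(wi * xi for wi, xi in zip(w, x_adv)) + b    # negative: prediction flipped
```

Even this small perturbation, bounded per feature by epsilon, is enough to flip the model's prediction; measuring how accuracy degrades under such perturbations is exactly the kind of evaluation the testbed is built for.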

The National Institute of Standards and Technology (NIST) National Cybersecurity Center of Excellence (NCCoE) has built an experimentation testbed to begin to address the broader challenge of evaluation for attacks and defenses. The testbed aims to facilitate security evaluations of ML algorithms under a diverse set of conditions. To that end, the testbed has a modular design enabling researchers to easily swap in alternative datasets, models, attacks, and defenses. The result is the ability to advance the metrology needed to ultimately help secure ML-enabled systems.
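The kind of modular design described above, where datasets, models, attacks, and defenses are interchangeable components, can be sketched as follows. This is a minimal hypothetical harness for illustration only; none of these names come from Dioptra's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Experiment:
    dataset: Callable[[], Sequence]       # returns (input, label) pairs
    model: Callable[[object], int]        # maps an input to a predicted label
    attack: Callable[[object], object]    # perturbs an input
    defense: Callable[[object], object]   # preprocesses an input before the model

    def run(self) -> dict:
        clean = adv = total = 0
        for x, y in self.dataset():
            total += 1
            clean += self.model(self.defense(x)) == y
            adv += self.model(self.defense(self.attack(x))) == y
        return {"clean_accuracy": clean / total, "robust_accuracy": adv / total}

# Toy components: a threshold "model", a shift "attack", an identity "defense".
data = lambda: [(0.2, 0), (0.9, 1), (0.7, 1), (0.1, 0)]
model = lambda x: int(x > 0.5)
attack = lambda x: x - 0.3       # push inputs toward the decision boundary
identity = lambda x: x

result = Experiment(data, model, attack, identity).run()
```

Because each component is just a swappable callable, a researcher can substitute a different dataset, attack, or defense and rerun the same experiment, which is the design property the testbed's modularity is meant to provide.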

Key properties

  • Reproducible: creates snapshots so experiments can be reproduced
  • Traceable: tracks the full history of experiments and their inputs
  • Extensible: supports expanding functionality and importing existing Python packages via a plugin system
  • Interoperable: the plugin system promotes interoperability between plugins
  • Modular: new experiments can be easily composed from modular components defined in a YAML file
  • Secure: provides user authentication, with access controls coming soon
  • Interactive: includes an intuitive web interface through which users can interact with Dioptra
  • Shareable and reusable: deployable in a multi-tenant environment where users can share and reuse components


About the tool


Tags:

  • evaluation
  • research
  • machine learning testing
  • model monitoring
  • robustness
  • red-teaming


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.