Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!


Educational · Uploaded on Jul 11, 2024
The Digital Value Management Overlay System (DVMS) provides organizations of any size, scale, or complexity with an affordable way to mitigate cybersecurity risk and assure digital business performance, resilience, and trust.

Technical · Educational · Procedural · France · Uploaded on Sep 26, 2023
Based on abstract interpretation, Saimple leverages state-of-the-art techniques described in the ISO/IEC 24029 standards for the assessment and validation of AI model robustness.

Technical · Educational · Procedural · Uploaded on May 4, 2022

The IEEE CertifAIEd™ Program offers a risk-based framework supported by a suite of AI ethics criteria that can be contextualized to fit organizations' needs, helping them deliver a more trustworthy experience for their users. The IEEE CertifAIEd Ontological Specifications for Ethical Privacy, Algorithmic Bias, Transparency, and Accountability serve as an introduction to these AI ethics criteria. We […]


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.