Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Submit a tool to the Catalogue of Tools & Metrics for Trustworthy AI

The Catalogue of Tools & Metrics for Trustworthy AI is an interactive collection of the latest tools and resources that help AI actors be accountable and implement trustworthy AI systems. A tool is defined as an instrument, method or resource that can be leveraged to facilitate the implementation of the OECD AI Principles (e.g. toolkits for checking bias or robustness, risk-management guidelines, or educational material related to AI systems). Tools can be used in different contexts and for different use cases. Your submission will be vetted by the OECD before being published in the catalogue. Thank you for your submission!

A short description that provides an overview of the tool (maximum 80 words).

Describe the background and objectives of this tool.


Tool resources and links

At least one of the following fields is required: Link to the tool's website, GitHub repository of the tool, GitLab repository, Hugging Face repository, or Slack workspace.

If available, please provide a link to the tool's package page.

If available, please provide a link to the tool's related post on OECD.AI's "AI Wonk" blog.


Tool categorization

Type

Usage rights & objectives

Origin

Scope

Adoptability

Implementation incentives

Keywords

Email*

Your relation to this tool*

Disclosures for contributors

After you click on 'Submit', the OECD Secretariat will review your submission.