Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

trail

trail automates the manual and time-intensive tasks of ML development to free up time for more projects. It generates automated documentation of code, models and data to increase knowledge sharing, reproducibility and compliance, and it tracks experiments and stores all artifacts in a central, accessible place.

 

trail integrates with a few lines of code into your favorite development environment and works for any model and data type.
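
As a rough sketch of what such a lightweight integration could look like in practice, the Python example below wraps a small scikit-learn model in a tracked run. The trail package name and every call on it (init, log_model, log_metrics, finish) are illustrative assumptions, not the tool's documented API.

    # Illustrative sketch only: the trail client shown here (init, log_model,
    # log_metrics, finish) is a hypothetical placeholder, not the vendor's real API.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    import trail  # hypothetical package name

    # Train a small example model.
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=200).fit(X, y)

    # A few lines of tracking code: start a run, then log the model artifact and
    # its metrics so documentation and an audit trail can be generated centrally.
    run = trail.init(project="iris-demo")
    run.log_model(model, name="logreg-v1")
    run.log_metrics({"accuracy": accuracy_score(y, model.predict(X))})
    run.finish()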

 

Ship to production with confidence, knowing your model performs the way you intend, thanks to integrated tests and quality checks. trail identifies compliance gaps in your development process, recommends suitable actions and turns those recommendations into code.
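
As a generic illustration of the kind of pre-deployment quality check such a workflow automates (the metric and threshold below are assumptions, not trail's built-in checks), a release could be gated on held-out performance:

    # Generic pre-deployment quality gate, independent of any specific tool:
    # block the release if held-out accuracy drops below an agreed threshold.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)

    MIN_ACCURACY = 0.90  # assumed threshold agreed with stakeholders
    accuracy = accuracy_score(y_test, model.predict(X_test))
    assert accuracy >= MIN_ACCURACY, f"Quality gate failed: accuracy={accuracy:.2f}"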

 

trail enables your data science team to develop more, better and more trustworthy AI, and makes you ready for AI audits and upcoming regulation.

Use Cases

There are no use cases for this tool yet.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.