Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

ICO Explaining decisions made with AI

Guidance for organisations on how to implement explainable AI solutions in compliance with a range of legislation, including data protection law. It advises on how to build and operate systems that can provide explanations to the individuals affected by the decisions those systems make.
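The ICO guidance itself is organisational and legal rather than code-level, but a rough illustration of the kind of per-decision explanation it discusses may help. The sketch below (not taken from the guidance) trains a plain logistic-regression model and reports each feature's contribution to a single decision; the dataset, feature names, and coefficient-based attribution are illustrative assumptions only.

```python
# Minimal sketch: explaining one model decision to an affected individual.
# All data, feature names, and the attribution method are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "debt_ratio", "years_employed"]  # assumed features
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_decision(x_raw: np.ndarray):
    """Return the decision and each feature's contribution to the model score."""
    x = scaler.transform(x_raw.reshape(1, -1))[0]
    contributions = model.coef_[0] * x            # per-feature contribution to the logit
    score = contributions.sum() + model.intercept_[0]
    decision = "approved" if score > 0 else "declined"
    ranked = sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1]))
    return decision, ranked

decision, reasons = explain_decision(X[0])
print(f"Decision: {decision}")
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")
```

A production system following the guidance would also record the explanation given, who received it, and the legal basis for the decision, but that bookkeeping is outside this sketch.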

About the tool

Tool type(s):
Objective(s):
Impacted stakeholders:
Type of approach:
Stakeholder group:
Geographical scope:
Required skills:

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.