Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

IBM Everyday Ethics for Artificial Intelligence

This document represents the beginning of a conversation defining Everyday Ethics for AI. Ethics must be embedded in the design and development process from the very beginning of AI creation. Rather than striving for perfection first, we're releasing this to allow everyone who reads and uses it to comment, critique, and participate in future iterations.

So please experiment, play, use, and break what you find here and send us your feedback. Designers and developers of AI systems are encouraged to be aware of these concepts and seize opportunities to intentionally put these ideas into practice. As you work with your team and others, please share this guide with them.

About the tool

Developing organisation(s): IBM

Tool type(s):

Impacted stakeholders:

Country of origin:

Lifecycle stage(s):

Type of approach:

Target groups:

Target users:

Stakeholder group:

Geographical scope:

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.