Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

IEEE CertifAIEd

The IEEE CertifAIEd™ Program offers a risk-based framework supported by a suite of AI ethical criteria that can be contextualized to fit an organization's needs, helping it deliver a more trustworthy experience for its users.

The IEEE CertifAIEd Ontological Specifications for Ethical Privacy, Algorithmic Bias, Transparency, and Accountability serve as an introduction to the program's AI ethics criteria; IEEE invites interested parties to fill out a request form to receive these specifications.


Use Cases

Auditing a City of Vienna AI solution with a trustworthy AI framework

IEEE SA has been developing a certification process for AI systems called IEEE CertifAIEd, which is based on the work of the IEEE Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS). IEEE CertifAIEd defines a risk and conform...
Mar 25, 2023


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.