Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

AI Algorithmic Transparency Tool

To maximize the positive impact of AI innovation, it is essential to design and operate technical, organizational, and social systems that enable stakeholders to recognize the risks of AI and to adjust their interests flexibly and appropriately; in other words, an agile governance framework for AI. This paper attempts a comprehensive, systematic organization of the most contested governance approach, transparency of AI algorithms, and proposes a practically applicable toolkit. Misunderstandings caused by a lack of communication should not be allowed to hamper the social implementation of innovative AI, and transparency has a substantial role to play here. However, if regulations and norms are formed piecemeal each time a problem arises, without systematic organization, rules will accumulate without a blueprint, predictability will be lost, and "transparency" will become an end in itself, a mere formality divorced from its original purpose. The result would be stifled innovation. Such regulations must therefore be systematic and based on a unified concept, while pursuing generality, clarity, and flexibility, with room for discretion so that business entities and government agencies can actually apply them.
In this paper, we construct the toolkit as a systematic collection of disclosure items and examples, drawing not only on regulations proposed by national authorities but also on a wide range of new risk events arising from various AI systems, as well as prior research on AI algorithms. We also incorporate the view that the degree of transparency a business entity or government agency achieves can positively affect not only its social credibility but also other indicators such as user satisfaction. In addition, the toolkit is designed to be convenient for business entities and government agencies engaged in self- and co-regulation: the disclosure items are presented in list form and can be selected at the user's discretion according to the AI algorithm's providers, users, risks, and other factors. We would be delighted if business entities and government agencies used this toolkit for communication with users, society, authorities, and experts, or for internal risk management. We believe it represents one of the best agile governance practices for maximizing the impact of AI innovation.
This toolkit is version 1.0, and we will continue to update it based on points raised in future discussions. We would be grateful if readers could point out any excesses or deficiencies.

About the tool

  • collaborative governance
  • data governance
  • open access
  • ai governance
  • transparency
  • accountability

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at