Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Framework for Ethical AI Governance



ETHICAL AI GOVERNANCE FRAMEWORK (A self-regulation model for companies to prevent societal harms based on the EU AI Act)

Balancing AI innovation and development with ethical and societal considerations raises a classic chicken-and-egg dilemma: do we need legal enforcement, in the form of obligations, requirements, and metrics, to achieve transparent, explainable, and accountable AI-based solutions? Or can explainable AI emerge from companies' own self-regulation initiatives?

The framework proposes a self-regulated approach that incorporates specific tools (or instruments) to facilitate ethical and responsible AI development, drawing on insights from the EU Regulation on Artificial Intelligence (AI Act).

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.