Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Procedural · Uploaded on 19 Oct 2023 · >1 year
The Algorithmic Transparency Certification for Artificial Intelligence Systems, by Adigital, is a robust framework for ensuring AI systems operate with paramount transparency, explainability and accountability. Grounded in universal ethical principles, it assesses AI systems on various critical factors, preparing organizations for evolving regulations like the EU AI Act, enhancing societal trust, and fostering competitive market advantage. This certification embodies a dynamic tool for continuous improvement, driving AI innovation with a solid foundation of responsibility and ethical consideration.

Procedural · Educational · France · Uploaded on 26 Sept 2023
Our software equips data scientists and AI engineers with powerful tools to help them create robust and explainable AI systems in conformity with ISO standards. Trust Saimple to help you build the foundation for reliable AI.

Procedural · Netherlands · United Kingdom · Uploaded on 20 June 2023 · >1 year
A software-based EU AI Act compliance solution for your clients and AI value chain actors.

Procedural · Uploaded on 27 Apr 2023 · <1 day
A psychometric scale to measure how users perceive and trust technology, which can be applied to a variety of domains such as business, healthcare, and the public sector, among many others.

Procedural · Educational · Uploaded on 4 Apr 2023
An enterprise platform for governance, risk, and compliance in AI, supporting regulatory compliance with global laws intersecting with AI.

Procedural · Uploaded on 30 Mar 2023 · <1 day
GRACE is an AI governance platform that offers a central registry for all AI models, tools, and workflows for risk mitigation, compliance, and collaboration across AI teams and projects. It provides real-time compliance, transparency, and trust while promoting AI innovation. The platform works across all major cloud vendors and offers out-of-the-box frameworks for complying with EU AI Act, AI standards, data regulations, and AI frameworks.

Procedural · Uploaded on 27 Mar 2023 · <1 day
A scalable and systematic solution empowering enterprises to adopt and scale AI with confidence and enhance business performance.

Procedural · Uploaded on 27 Mar 2023 · <1 day
A fully independent and impartial audit to ensure compliance with the bias audit requirements for New York City Local Law 144 and upcoming regulations.

Procedural · Uploaded on 27 Mar 2023 · <1 day
A bespoke AI risk audit solution tailor-made to identify your enterprise project’s AI risks, comprising deep technical, quantitative analysis.

Procedural · Uploaded on 27 Mar 2023
A set of guides to help enterprises mitigate some of the most common AI risks, presenting step-by-step solutions to protect against technical risks.

Procedural · United Kingdom · Uploaded on 27 Mar 2023
The Holistic AI library is an open-source tool to assess and improve the trustworthiness of AI systems. The library offers a set of techniques to easily measure and mitigate bias across a variety of tasks.

Procedural · Uploaded on 2 Mar 2023
Zupervise is a unified risk transparency platform for AI governance.

Procedural · Educational · Uploaded on 2 Mar 2023
Naaia, the first AI management system (AIMS) on the market in Europe, is a governance and management solution for AI systems that does not compromise on ethics or compliance (EU AI Act).

Procedural · Uploaded on 16 Sept 2022
A standardised label / short datasheet that can be attached to AI products to show their characteristics with regard to principles such as transparency, accountability, privacy, fairness, and reliability.

Procedural · Educational · Uploaded on 15 Sept 2022

Model cards are the tool for transparent AI documentation. Model cards are essential for discoverability, reproducibility, and sharing. You can find a model card as the README.md file in any model repo. Under the hood, model cards are simple Markdown files with additional metadata.


Procedural · Uploaded on 10 June 2022

The use of Artificial Intelligence (AI) is one of the most significant technological contributions that will permeate the life of western societies in the coming years, providing significant benefits, but also highlighting risks that need to be assessed and minimized. A reality as disruptive as AI requires that its technology and that the products and […]


Procedural · Switzerland · Uploaded on 10 June 2022

We believe that trust, transparency and technology belong together. But as digitalization accelerates, it is getting more and more difficult to understand what’s happening with your data. Algorithms and other digital tools operate in the background and can leave you feeling insecure when using digital services. With the Digital Trust Label, we’re putting trust and […]


Procedural · Educational · Uploaded on 4 May 2022

The IEEE CertifAIEd™ Program offers a risk-based framework supported by a suite of AI ethical criteria that can be contextualized to fit organizations' needs, helping them to deliver a more trustworthy experience for their users. IEEE CertifAIEd Ontological Specifications for Ethical Privacy, Algorithmic Bias, Transparency, and Accountability are an introduction to our AI Ethics criteria. We […]


Procedural · United States · Canada · Uploaded on 28 Apr 2022

In a world increasingly dominated by AI applications, an understudied aspect is the carbon and social footprint of these power-hungry algorithms that require copious computation and a trove of data for training and prediction. While profitable in the short-term, these practices are unsustainable and socially extractive from both a data-use and energy-use perspective. This work […]


Procedural · Uploaded on 27 Apr 2022

Denmark’s new labelling program for IT security and responsible use of data. The D-seal will create digital trust for customers & consumers and drive digital accountability in companies. The D‑seal is relevant to all types of business and is adapted to the individual company. The number of criteria that a company has to meet depends […]


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.

Sign up to receive alerts from our blog, the AI Wonk: