Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!
Tool type: Certification scheme
Approach: Procedural

Procedural · Saudi Arabia · Uploaded on Mar 26, 2024
An LLM survey for responsible, transparent, and safe AI, covering international compliance regulations and data/model evaluations.

Procedural · Brazil · Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).

Procedural · Uploaded on Oct 19, 2023 · >1 year
The Algorithmic Transparency Certification for Artificial Intelligence Systems, by Adigital, is a robust framework for ensuring AI systems operate with paramount transparency, explainability and accountability. Grounded in universal ethical principles, it assesses AI systems on various critical factors, preparing organizations for evolving regulations like the EU AI Act, enhancing societal trust, and fostering competitive market advantage. This certification embodies a dynamic tool for continuous improvement, driving AI innovation with a solid foundation of responsibility and ethical consideration.

Technical · Educational · Procedural · France · Uploaded on Sep 26, 2023
Our software equips data scientists and AI engineers with powerful tools to help them create robust and explainable AI systems in conformity with ISO standards. Trust Saimple to help you build the foundation for reliable AI.

Procedural · Netherlands · United Kingdom · Uploaded on Jun 20, 2023 · >1 year
A software-based EU AI Act compliance solution for your clients and AI value chain actors.

Technical · Procedural · Uploaded on Mar 30, 2023 · <1 day
GRACE is an AI governance platform that offers a central registry for all AI models, tools, and workflows, supporting risk mitigation, compliance, and collaboration across AI teams and projects. It provides real-time compliance, transparency, and trust while promoting AI innovation. The platform works across all major cloud vendors and offers out-of-the-box frameworks for complying with the EU AI Act, AI standards, data regulations, and AI frameworks.

Technical · Procedural · Uploaded on Mar 2, 2023
Zupervise is a unified risk transparency platform for AI governance.

Procedural · Uploaded on Jun 10, 2022

The use of Artificial Intelligence (AI) is one of the most significant technological contributions that will permeate the life of Western societies in the coming years, providing significant benefits but also highlighting risks that need to be assessed and minimized. A reality as disruptive as AI requires that its technology and that the products and […]


Technical · Educational · Procedural · Uploaded on May 4, 2022

The IEEE CertifAIEd™ Program offers a risk-based framework supported by a suite of AI ethical criteria that can be contextualized to fit organizations' needs, helping them to deliver a more trustworthy experience for their users. The IEEE CertifAIEd Ontological Specifications for Ethical Privacy, Algorithmic Bias, Transparency, and Accountability are an introduction to our AI Ethics criteria. We […]


Procedural · United States · Canada · Uploaded on Apr 28, 2022

In a world increasingly dominated by AI applications, an understudied aspect is the carbon and social footprint of these power-hungry algorithms that require copious computation and a trove of data for training and prediction. While profitable in the short-term, these practices are unsustainable and socially extractive from both a data-use and energy-use perspective. This work […]


Technical · Procedural · Uploaded on Apr 27, 2022

Denmark's new labelling program for IT security and responsible use of data. The D-seal will create digital trust for customers and consumers and drive digital accountability in companies. The D-seal is relevant to all types of business and is adapted to the individual company. The number of criteria that a company has to meet depends […]



Procedural · Uploaded on Feb 23, 2022

A joint project to certify systems based on artificial intelligence (AI) used in autonomous driving and to develop a 'roadworthiness test' for algorithms. To do so, the experts explore the learning behaviours of AI systems with the aim of being able to control the systems' reactions.



Procedural · Germany · Uploaded on Jul 2, 2019

The purpose of the certification is to help establish quality standards for AI made in Europe, to ensure a responsible approach to this technology and to promote fair competition between the various players.



Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.