Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Procedural · France · Uploaded on Mar 31, 2025
PolicyPilot is designed to assist users in creating and managing AI policies, streamlining AI governance with automated compliance monitoring and risk management.

Educational · Uploaded on Jul 11, 2024
The Digital Value Management Overlay System (DVMS) provides organizations of any size, scale, or complexity an affordable way to mitigate cybersecurity risk and assure digital business performance, resilience, and trust.

Technical · Procedural · Israel · Uploaded on Apr 11, 2024
Citrusx offers a multifaceted solution to connect all stakeholders in the company through an SDK, user-friendly UI, and automated reporting system.

Procedural · Saudi Arabia · Uploaded on Mar 26, 2024
An AI risk assessment tool for responsible, transparent, and safe AI, covering international compliance regulations and data/model evaluations.

Procedural · Brazil · Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).

Procedural · Mexico · Spain · Chile · Colombia · Uruguay · Uploaded on Oct 19, 2023
The Algorithmic Transparency Certification for Artificial Intelligence Systems, by Adigital, is a robust framework for ensuring AI systems operate with paramount transparency, explainability and accountability. Grounded in universal ethical principles, it assesses AI systems on various critical factors, preparing organizations for evolving regulations like the EU AI Act, enhancing societal trust, and fostering competitive market advantage. This certification embodies a dynamic tool for continuous improvement, driving AI innovation with a solid foundation of responsibility and ethical consideration.

Technical · Educational · Procedural · France · Uploaded on Sep 26, 2023
Based on abstract interpretation, Saimple leverages state-of-the-art techniques described in the ISO/IEC 24029 series of standards for the assessment and validation of AI model robustness.

Procedural · Netherlands · United Kingdom · Uploaded on Jun 20, 2023
A software-based EU AI Act compliance solution for your clients and AI value chain actors.


Procedural · Uploaded on Apr 27, 2023
Trustworthy AI (measuring trust)


Technical · Educational · Procedural · Uploaded on Apr 4, 2023
An enterprise platform for governance, risk, and compliance in AI, supporting regulatory compliance with global laws that intersect with AI.

Technical · Procedural · Uploaded on Mar 30, 2023
GRACE is an AI governance platform that offers a central registry for all AI models, tools, and workflows, supporting risk mitigation, compliance, and collaboration across AI teams and projects. It provides real-time compliance, transparency, and trust while promoting AI innovation. The platform works across all major cloud vendors and offers out-of-the-box frameworks for complying with the EU AI Act, AI standards, data regulations, and AI frameworks.

Technical · Procedural · Uploaded on Mar 27, 2023
A scalable and systematic solution empowering enterprises to adopt and scale AI with confidence and enhance business performance.

Technical · Procedural · Uploaded on Mar 27, 2023
A fully independent and impartial audit to ensure compliance with the bias audit requirements for New York City Local Law 144 and upcoming regulations.

Technical · Procedural · Uploaded on Mar 27, 2023
A bespoke AI risk audit solution tailor-made to identify your enterprise project’s AI risks, comprising deep technical, quantitative analysis.

Technical · Procedural · Uploaded on Mar 27, 2023
A set of guides to help enterprises mitigate some of the most common AI risks, presenting step-by-step solutions to protect against technical risks.

Technical · Procedural · United Kingdom · Uploaded on Mar 27, 2023
The Holistic AI library is an open-source tool to assess and improve the trustworthiness of AI systems. The library offers a set of techniques to easily measure and mitigate bias across a variety of tasks.

Technical · Procedural · Uploaded on Mar 2, 2023
Zupervise is a unified risk transparency platform for AI governance.

Educational · Procedural · Uploaded on Mar 2, 2023
Naaia, the first AIMS on the European market, is a governance and management solution for AI systems that does not compromise on ethics or compliance (EU AI Act).

Procedural · Uploaded on Sep 16, 2022
A standardised label/short datasheet that can be attached to AI products to show their characteristics with regard to principles like transparency, accountability, privacy, fairness, and reliability.

Educational · Procedural · Uploaded on Sep 15, 2022

Model cards are a tool for transparent AI documentation, essential for discoverability, reproducibility, and sharing. A model card appears as the README.md file in any model repo; under the hood, model cards are simple Markdown files with additional metadata.



Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.