Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Procedural | Uploaded on Nov 20, 2025
MISSION KI is developing a voluntary quality standard for artificial intelligence (AI) that strengthens the reliability and trustworthiness of AI applications and systems. It provides an evidence-based self-assessment framework for AI providers below the EU AI Act’s high-risk threshold. It defines six quality dimensions (data governance, non-discrimination, transparency, human oversight, reliability, AI-specific cybersecurity) and a stepwise procedure: describe the use case, analyse protection needs, rate requirements via a VCIO catalogue, document tests and evidence, validate findings, issue a report, and monitor validity.
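The stepwise procedure above could be modelled in code roughly as follows. This is purely an illustrative sketch: MISSION KI defines a documentation process, not a software API, and every class, method, and rating level here is a hypothetical assumption.

```python
from dataclasses import dataclass, field
from enum import Enum

# The six MISSION KI quality dimensions named in the description.
class Dimension(Enum):
    DATA_GOVERNANCE = "data governance"
    NON_DISCRIMINATION = "non-discrimination"
    TRANSPARENCY = "transparency"
    HUMAN_OVERSIGHT = "human oversight"
    RELIABILITY = "reliability"
    CYBERSECURITY = "AI-specific cybersecurity"

@dataclass
class Assessment:
    """Hypothetical record tracking one self-assessment through the steps."""
    use_case: str                                   # step 1: describe the use case
    protection_needs: dict = field(default_factory=dict)   # steps 2-3: Dimension -> level
    evidence: dict = field(default_factory=dict)            # step 4: Dimension -> documents

    def rate(self, dimension: Dimension, level: str) -> None:
        """Steps 2-3: analyse protection needs and rate requirements per dimension."""
        self.protection_needs[dimension] = level

    def document(self, dimension: Dimension, item: str) -> None:
        """Step 4: attach test results or other evidence for a dimension."""
        self.evidence.setdefault(dimension, []).append(item)

    def report(self) -> dict:
        """Steps 5-6: summarise findings per dimension for validation and reporting."""
        return {
            d.value: {
                "need": self.protection_needs.get(d, "unrated"),
                "evidence": len(self.evidence.get(d, [])),
            }
            for d in Dimension
        }
```

A provider would fill in one `Assessment` per use case, with the report then validated and monitored for continued validity over time.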

Related lifecycle stage(s)

Operate & monitor, Verify & validate

Procedural | Uploaded on Nov 20, 2025
MISSION KI's Compliance Monitor is a tool for monitoring compliance with legal frameworks to facilitate interoperability, data flows and benefit sharing.

Procedural | Italy | Uploaded on Jun 19, 2025
ADMIT is a research tool within a broader methodological framework combining quantitative and qualitative strategies to identify, analyse, and mitigate social implications associated with automated decision-making systems while enhancing their potential benefits. It supports comprehensive assessments of sociotechnical impacts to inform responsible design, deployment, and governance of automation technologies.

Procedural | France | Uploaded on Mar 31, 2025
PolicyPilot is designed to assist users in creating and managing AI policies, streamlining AI governance with automated compliance monitoring and risk management.

Procedural | Canada | Uploaded on Mar 31, 2025
This program provides organisations with a comprehensive, independent review of their AI approaches, ensuring alignment with consensus standards and enhancing trust among stakeholders and the public in their AI practices.

Related lifecycle stage(s)

Verify & validate

Procedural | New Zealand | Uploaded on Jul 11, 2024
An artificial intelligence governance, risk, and assurance platform for implementation guidance and assurance.

Technical, Procedural | Uploaded on Jun 19, 2024
This is an interactive bot that offers advice on complying with existing and intended AI regulations in various jurisdictions and offers a general risk analysis of the selected technology.

Related lifecycle stage(s)

Verify & validate

Procedural | Uploaded on May 27, 2024
PRIDAR (Prioritization, Research, Innovation, Development, Analysis, and Review) is a risk management framework.

Procedural | United Kingdom | Uploaded on May 21, 2024
The Algorithmic Transparency Recording Standard (ATRS) is a framework for capturing information about algorithmic tools, including AI systems. It is designed to help public sector bodies openly publish information about the algorithmic tools they use in decision-making processes that affect members of the public.

Technical, Procedural | United States, Japan | Uploaded on Apr 19, 2024
Diagnoses bias in large language models (LLMs) from various points of view, allowing users to choose the most appropriate LLM.

Related lifecycle stage(s)

Plan & design

Procedural | Saudi Arabia | Uploaded on Mar 26, 2024
An AI risk assessment tool for responsible, transparent and safe AI, covering international compliance regulations and data and model evaluations.

Procedural | Brazil | Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).

Procedural | United Kingdom | Uploaded on Feb 20, 2024
Anekanta® AI’s AI Risk Intelligence System™ solves the problem of biometrics and AI system risk classification by challenging the entire AI system lifecycle and providing risk reports which inform compliance requirements in line with the EU AI Act and international regulatory frameworks.

Educational, Procedural | Switzerland, Germany | Uploaded on Oct 31, 2023
How can we ensure a trustworthy use of automated decision-making systems (ADMS) in the public administration? AlgorithmWatch has developed a concrete and practicable impact assessment tool for ADMS in the public sector. This publication provides a framework ready to be implemented for the evaluation of specific ADMS by public authorities at different levels.

Educational, Procedural | Uploaded on Sep 29, 2023
Human rights impact assessments of AI systems require meaningful engagement of the individuals and groups most affected. This is particularly relevant for products and services using AI because there is still not enough practical guidance on how to involve stakeholders in the design, development and deployment of AI systems.

Technical, Educational, Procedural | France | Uploaded on Sep 26, 2023
Based on abstract interpretation, Saimple leverages state-of-the-art techniques described in the ISO/IEC 24029 series of standards for the assessment and validation of AI model robustness.

Procedural | United Kingdom, Japan, European Union | Uploaded on Aug 31, 2023
Fujitsu AI Ethics Impact Assessment assesses the potential risks and unwanted consequences of an AI system throughout its lifecycle and produces evidence that can be used to engage with auditors, approvers, and stakeholders. It is a process-driven technology that allows users to: 1) map all interactions among the stakeholders and the components of the AI system; 2) assess the ethical risks emerging from those interactions; and 3) understand the mechanisms whereby incidental events could occur, based on previous AI ethics incidents.

Technical, Educational, Procedural | Uploaded on Jul 4, 2023
The Data Carbon Scorecard provides a rapid assessment of the environmental impact of a proposed data project, enabling you to quickly gauge its CO2 implications and determine the ‘best’ course of action for the environment. By answering just nine questions, you receive a traffic-light display of data CO2 hotspots and an overall Data CO2 Score for the whole project within minutes. The scorecard can be used by any individual, team, or organization.

Technical, Procedural | Germany | Uploaded on Jun 22, 2023
QuantPi puts AI systems to the test. It is a cockpit in which AI-first organizations collaborate to efficiently and responsibly understand, enhance and steer their individual AI models as well as their complete AI landscape.

Procedural | Netherlands, United Kingdom | Uploaded on Jun 20, 2023
A software-based EU AI Act compliance solution for your clients and AI value chain actors.

Related lifecycle stage(s)

Verify & validate

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.