Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.


SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of Tools & Metrics for Trustworthy AI, we would love to hear from you!

Procedural · Uploaded on Nov 20, 2025
MISSION KI's Compliance Monitor is a tool for monitoring compliance with legal frameworks to facilitate interoperability, data flows and benefit sharing.

Educational · United Kingdom · Uploaded on Dec 9, 2024
Newton’s Tree’s Federated AI Monitoring Service (FAMOS) is a dashboard for real-time monitoring of healthcare AI products. The dashboard is designed to enable users to observe and monitor the quality of data that goes into the AI, changes to the outputs of the AI, and developments in how healthcare staff use the product.

Related lifecycle stage(s): Operate & monitor, Deploy

Procedural · Singapore · Uploaded on Oct 2, 2024
Resaro offers independent, third-party assurance of mission-critical AI systems. It promotes responsible, safe and robust AI adoption for enterprises, through technical advisory and evaluation of AI systems against emerging regulatory requirements.

Procedural · Uploaded on Jul 3, 2024
This document defines quality measures for quantitatively evaluating system and software product quality in terms of the characteristics and subcharacteristics defined in ISO/IEC 25010, and is intended to be used together with ISO/IEC 25010.

Procedural · Uploaded on Jul 2, 2024
The document highlights quality objectives for organizations responsible for datasets. The document describes control of records during the lifecycle of datasets, including but not limited to data collection, annotation, transfer, utilization, storage, maintenance, updates, retirement, and other activities.

Procedural · Uploaded on Jul 2, 2024
This document lists examples of and defines categories of use cases for machine learning in medicine for clinical practice.

Procedural · Uploaded on Jul 2, 2024
This document specifies the requirements under which image recognition problems in medicine can be addressed using a deep learning image recognition system.

Procedural · Uploaded on Jul 1, 2024
This standard identifies the core requirements and baseline for AI solutions in health care to be deemed trustworthy.

Procedural · Uploaded on Jul 1, 2024
This standard establishes concepts and terminology for the performance and safety evaluation of artificial intelligence medical devices, covering basic technology, datasets, quality characteristics, quality evaluation and application scenarios.

Procedural · Uploaded on May 27, 2024
PRIDAR (Prioritization, Research, Innovation, Development, Analysis, and Review): a risk management framework.

Technical · Educational · Switzerland · Uploaded on Apr 22, 2024 · <1 day
A global community of 500+ researchers from 57+ countries, with a foundational free course on a human rights-based approach to AI development that explores concrete builds of systems centering human rights values. The community is further enriched by reading and discussion groups and by written community outputs.

Technical · Procedural · Israel · Uploaded on Apr 11, 2024
Citrusx offers a multifaceted solution to connect all stakeholders in the company through an SDK, user-friendly UI, and automated reporting system.

Procedural · Uploaded on Jan 24, 2024
This guidance addresses one type of generative AI, large multi-modal models (LMMs), which can accept one or more types of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm.

Procedural · Uploaded on Nov 14, 2023
Advice for policy-makers on how to plan, develop and implement federated learning systems to safely advance machine learning for health systems improvement.

Technical · Educational · Procedural · Sweden, Slovakia, Canada, Finland · Uploaded on Nov 13, 2023
Saidot empowers AI teams to carry out high-quality AI governance efficiently.

Technical · Procedural · United States · Uploaded on Oct 2, 2023
Automate, simplify, and streamline your end-to-end AI risk management process.

Technical · Educational · Procedural · France · Uploaded on Sep 26, 2023
Based on abstract interpretation, Saimple leverages state-of-the-art techniques described in the ISO/IEC 24029 series of standards for the assessment and validation of AI model robustness.

Technical · Educational · Procedural · Uploaded on Jul 4, 2023 · <1 hour
The Data Carbon Scorecard provides a rapid assessment of the environmental impact of a proposed data project, enabling you to quickly gauge its CO2 implications and determine the best course of action for the environment. By answering just nine questions, you receive a traffic-light display of data CO2 hotspots and an overall Data CO2 Score for the whole project within minutes. The scorecard can be used by any individual, team, or organization.

Technical · Procedural · Germany · Uploaded on Jun 22, 2023 · <1 day
QuantPi puts AI systems to the test. It is the cockpit that AI-first organizations collaborate on to efficiently and responsibly understand, enhance and steer their individual AI models as well as their complete AI landscape.

Technical · Procedural · United Kingdom · Uploaded on Jun 16, 2023
Aival provides independent quality assurance systems to evaluate and monitor AI in healthcare.

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.