Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Filters applied: Target sector(s): Health · Approach: Procedural

Procedural · Uploaded on Jul 3, 2024
Defines quality measures for quantitatively evaluating system and software product quality in terms of the characteristics and subcharacteristics defined in ISO/IEC 25010; intended to be used together with ISO/IEC 25010.

Procedural · Uploaded on Jul 2, 2024
This document sets out quality objectives for organizations responsible for datasets and describes the control of records throughout the dataset lifecycle, including but not limited to data collection, annotation, transfer, utilization, storage, maintenance, updates, and retirement.

Procedural · Uploaded on Jul 2, 2024
This document lists examples of and defines categories of use cases for machine learning in medicine for clinical practice.

Procedural · Uploaded on Jul 2, 2024
This document gives the requirements under which image recognition problems in medicine can be addressed using a deep learning image recognition system.

Procedural · Uploaded on Jul 1, 2024
This standard identifies the core requirements and baseline for AI solutions in health care to be deemed trustworthy.

Procedural · Uploaded on Jul 1, 2024
This standard establishes concepts and terminology for the performance and safety evaluation of artificial intelligence medical devices, covering basic technology, datasets, quality characteristics, quality evaluation, and application scenarios.

Procedural · Uploaded on May 27, 2024
PRIDAR (Prioritization, Research, Innovation, Development, Analysis, and Review): a risk management framework.

Technical · Procedural · Israel · Uploaded on Apr 11, 2024
Citrusx offers a multifaceted solution to connect all stakeholders in the company through an SDK, user-friendly UI, and automated reporting system.

Procedural · Uploaded on Jan 24, 2024
This guidance addresses one type of generative AI, large multi-modal models (LMMs), which can accept one or more types of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm.

Related lifecycle stage(s): Build & interpret model · Plan & design

Procedural · Uploaded on Nov 14, 2023
Advice for policy-makers on how to plan, develop and implement federated learning systems to safely advance machine learning for health systems improvement.

Technical · Educational · Procedural · Finland · Uploaded on Nov 13, 2023
Saidot is a platform that helps you create and manage AI applications that are safe, ethical and transparent. With Saidot, you can: 1) Adopt AI policies from a library of templates, 2) Identify, mitigate and monitor AI risks, 3) Share AI reports with your stakeholders and 4) Learn how to use generative AI responsibly.

Technical · Procedural · United States · Uploaded on Oct 2, 2023
Automate, simplify, and streamline your end-to-end AI risk management process.

Technical · Educational · Procedural · France · Uploaded on Sep 26, 2023
Our software equips data scientists and AI engineers with powerful tools to help them create robust and explainable AI systems in conformity with ISO standards. Trust Saimple to help you build the foundation for reliable AI.

Technical · Educational · Procedural · Uploaded on Jul 4, 2023
The Data Carbon Scorecard provides a rapid assessment of the environmental impact of a proposed data project, enabling you to quickly gauge its CO2 implications and determine the ‘best’ course of action for the environment. By answering just nine questions, you receive a traffic light display of data CO2 hotspots and an overall Data CO2 Score for the whole project within minutes. The scorecard can be used by any individual, team, or organization.
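
The questionnaire-to-traffic-light mechanic described above can be pictured as a simple checklist score. The sketch below is purely illustrative: the scorecard's actual nine questions, weighting, and thresholds are not given in this listing, so every question and cutoff here is a hypothetical placeholder.

```python
# Hypothetical sketch of a nine-question, traffic-light data CO2 scorecard.
# Questions and thresholds are placeholders, not the real Data Carbon Scorecard.

QUESTIONS = [
    "Is only the data actually needed being collected?",
    "Is redundant or duplicate data avoided?",
    "Is cold data moved to low-energy storage?",
    "Is data retired when no longer needed?",
    "Are transfers between regions minimized?",
    "Is processing scheduled on low-carbon energy?",
    "Are models/queries sized to the task?",
    "Is the data centre's energy mix known?",
    "Is the project's CO2 footprint monitored?",
]

def data_co2_score(answers):
    """Map nine yes/no answers (True = CO2-friendly choice) to a traffic light."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("expected an answer to each of the nine questions")
    score = sum(answers)  # count of CO2-friendly answers, 0..9
    if score >= 7:
        return "green"   # few hotspots
    if score >= 4:
        return "amber"   # some hotspots worth addressing
    return "red"         # significant CO2 hotspots
```

A project answering "yes" to all nine placeholder questions would score green; mostly "no" answers would flag red, mirroring the hotspot display the scorecard describes.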

Technical · Procedural · Germany · Uploaded on Jun 22, 2023
QuantPi’s platform unites AI testing with AI governance: a cockpit for AI-first organizations to collaborate on as they efficiently and responsibly understand, enhance, and steer their individual AI models as well as their complete AI landscape.

Technical · Procedural · United Kingdom · Uploaded on Jun 16, 2023
Aival software allows non-technical users to evaluate commercial AI products in terms of performance, fairness, robustness and explainability (healthcare/imaging).

Technical · Procedural · Uploaded on May 20, 2023
GuArdIan: an essential safeguard for businesses utilizing ChatGPT-like LLM technology.

Procedural · Uploaded on Mar 31, 2023
BetterBeliefs is an evidence-based and inclusive stakeholder engagement platform for creating contextual, relevant and justified AI policy.

Related lifecycle stage(s): Operate & monitor

Technical · Procedural · Uploaded on Mar 30, 2023
GRACE is an AI governance platform that offers a central registry for all AI models, tools, and workflows for risk mitigation, compliance, and collaboration across AI teams and projects. It provides real-time compliance, transparency, and trust while promoting AI innovation. The platform works across all major cloud vendors and offers out-of-the-box frameworks for complying with the EU AI Act, AI standards, data regulations, and AI frameworks.

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.