Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Data Governance & Traceability

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!

Technical · Uploaded on Nov 20, 2025
openIMIS is a versatile open-source software package that supports the administration of health financing and social protection schemes.

Technical · Uploaded on Nov 20, 2025
EuroDaT is a European data trustee with a unique data transaction principle that ensures the secure and legally compliant exchange of data between any parties.

Related lifecycle stage(s)

Collect & process data

Procedural · Uploaded on Nov 20, 2025
Mission KI's Compliance Monitor is a tool for monitoring compliance with legal frameworks to facilitate interoperability, data flows and benefit sharing.

Educational · Uploaded on Nov 7, 2025
As part of the Mission KI project, the initiative has developed Daseen, an innovative data set search engine that for the first time enables cross-source searches for data sets.

Technical · Procedural · Uploaded on Aug 14, 2025
A legally enforceable AI-user interaction framework that verifies informed consent through multimodal methods, protects user intellectual property via blockchain-based tracking, and ensures lifetime authorship rights with legal safeguards against unauthorized use or reuse in AI training (a minimal sketch of such a consent ledger follows this entry).

Related lifecycle stage(s)

Plan & design
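
The blockchain-based consent tracking described in the entry above can be illustrated with a hash-chained ledger: each consent record commits to the hash of the previous one, so altering any earlier entry breaks the chain. This is a generic sketch, not the tool's actual implementation; the record fields (user_id, work_id, purpose) and the chaining scheme are assumptions chosen for illustration.

```python
# Minimal sketch of a hash-chained consent ledger (illustrative only; not the
# framework's actual implementation). Tampering with any earlier consent
# record invalidates every later hash in the chain.
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str          # hypothetical field: who granted consent
    work_id: str          # hypothetical field: the protected work or dataset
    purpose: str          # e.g. "model training", "inference only"
    granted: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ConsentLedger:
    def __init__(self) -> None:
        self._entries: list[dict] = []

    def append(self, record: ConsentRecord) -> str:
        prev_hash = self._entries[-1]["hash"] if self._entries else "0" * 64
        payload = {"record": asdict(record), "prev_hash": prev_hash}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append({**payload, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute every hash and check that the chain links are intact."""
        prev_hash = "0" * 64
        for entry in self._entries:
            payload = {"record": entry["record"], "prev_hash": entry["prev_hash"]}
            digest = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
                return False
            prev_hash = entry["hash"]
        return True

# Usage: record a grant, then prove later that the log has not been altered.
ledger = ConsentLedger()
ledger.append(ConsentRecord("user-42", "essay-001", "model training", granted=True))
assert ledger.verify()
```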

Technical · Educational · Uploaded on Aug 27, 2025
A robust multi-agent reinforcement learning framework that evaluates and recommends intervention practices for elementary school children.

Educational · Uploaded on Aug 6, 2025
Software that helps educational providers identify handwriting issues and helps students (aged 5-12) improve their handwriting.

Technical · Europe · Uploaded on May 19, 2025
The AIFS is the first fully operational open machine learning model for weather forecasting.

Procedural · Canada · Uploaded on Apr 1, 2025
An artificial intelligence (AI) impact assessment tool that provides organisations with a method to assess AI systems for compliance with Canadian human rights law. The purpose of this human rights AI impact assessment is to help developers and administrators of AI systems identify, assess, and minimise or avoid discrimination and uphold human rights obligations throughout the lifecycle of an AI system.

Technical · India · Uploaded on Apr 3, 2025
The Infosys Responsible AI toolkit provides a set of APIs to integrate safety, security, privacy, explainability, fairness, and hallucination detection into AI solutions, ensuring trustworthiness and transparency.
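
As an illustration of how checks of the kinds listed above (privacy, fairness, hallucination detection) can be composed around a model call, here is a generic guardrail sketch. The function names below are invented for illustration and are not the Infosys toolkit's actual API; consult the toolkit's documentation for its real endpoints.

```python
# Generic sketch of a guardrail pipeline around a model call. The check
# functions are placeholders invented for illustration; they are NOT the
# Infosys Responsible AI toolkit's actual API.
from typing import Callable

Check = Callable[[str, str], list[str]]  # (prompt, response) -> findings

def check_privacy(prompt: str, response: str) -> list[str]:
    # Placeholder: a real check might scan for personal data such as emails.
    return ["possible email address in output"] if "@" in response else []

def check_hallucination(prompt: str, response: str) -> list[str]:
    # Placeholder: a real check might compare the response against sources.
    return []

def guarded_generate(prompt: str, generate: Callable[[str], str],
                     checks: list[Check]) -> dict:
    """Run the model, then run every check and attach the findings."""
    response = generate(prompt)
    findings = {c.__name__: c(prompt, response) for c in checks}
    blocked = any(findings.values())
    return {"response": None if blocked else response,
            "blocked": blocked, "findings": findings}

# Usage with a stub model in place of a real LLM call:
result = guarded_generate(
    "Summarise this record",
    generate=lambda p: "Summary: contact alice@example.com",
    checks=[check_privacy, check_hallucination],
)
print(result["blocked"], result["findings"])
```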

Procedural · France · Uploaded on Mar 31, 2025
PolicyPilot is designed to assist users in creating and managing AI policies, streamlining AI governance with automated compliance monitoring and risk management.

Technical · United States · Uploaded on May 19, 2025
HiddenLayer’s AISec Platform is a GenAI Protection Suite purpose-built to ensure the integrity of AI models throughout the MLOps pipeline. The platform provides detection and response for GenAI and traditional AI models, flagging prompt injections, adversarial AI attacks, and digital supply chain vulnerabilities.

Procedural · Canada · Uploaded on Mar 31, 2025
This program provides organisations with a comprehensive, independent review of their AI approaches, ensuring alignment with consensus standards and enhancing trust among stakeholders and the public in their AI practices.

Related lifecycle stage(s)

Verify & validate

Procedural · France · Uploaded on Apr 2, 2025
This tool provides a comprehensive risk management framework for frontier AI development, integrating established risk management principles with AI-specific practices. It combines four key components: risk identification through systematic methods, quantitative risk analysis, targeted risk treatment measures, and clear governance structures (a minimal risk-register sketch follows this entry).

Related lifecycle stage(s)

Build & interpret model
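
To make the "quantitative risk analysis" component above concrete, here is a minimal sketch of a risk register that scores each identified risk by likelihood and severity and ranks treatment priorities. The 1-5 scales, the field names, and the example risks are assumptions chosen for illustration, not the framework's prescribed methodology.

```python
# Minimal risk-register sketch: scores each risk as likelihood x severity and
# ranks treatment priority. Scales, fields, and entries are illustrative
# assumptions, not the framework's prescribed method.
from dataclasses import dataclass

@dataclass
class Risk:
    identifier: str
    description: str
    likelihood: int   # assumed scale: 1 (rare) .. 5 (almost certain)
    severity: int     # assumed scale: 1 (negligible) .. 5 (critical)
    treatment: str    # planned mitigation measure
    owner: str        # accountable role in the governance structure

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

register = [
    Risk("R-001", "Model leaks memorised training data", 2, 5,
         "Output filtering and red-team evaluation before release", "Safety lead"),
    Risk("R-002", "Evaluation suite misses a capability jump", 3, 4,
         "Independent third-party evaluation at each training milestone",
         "Governance board"),
]

# Rank risks so the highest-scoring ones are treated first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.identifier} score={risk.score:2d} "
          f"owner={risk.owner}: {risk.treatment}")
```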

Educational · Ireland · Uploaded on Jan 29, 2025
The AI Risk Ontology (AIRO) is an open-source formal ontology that provides a minimal set of concepts and relations for modelling AI use cases and their associated risks. AIRO has been developed according to the requirements of the EU AI Act and international standards, including ISO/IEC 23894 on AI risk management and the ISO 31000 family of standards (a minimal modelling sketch follows this entry).
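
A minimal sketch of how an AI use case and one associated risk might be modelled with AIRO using rdflib. The namespace IRI and the class and property names (AISystem, Risk, hasRisk) are assumptions based on the ontology's documentation; check the published release for the exact terms before use.

```python
# Sketch of modelling a use case and a risk with AIRO via rdflib. The
# namespace IRI and term names below are assumptions; verify them against
# the published ontology.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

AIRO = Namespace("https://w3id.org/airo#")      # assumed namespace IRI
EX = Namespace("https://example.org/usecase#")  # hypothetical use-case namespace

g = Graph()
g.bind("airo", AIRO)
g.bind("ex", EX)

# An AI system used for CV screening and one risk associated with it.
g.add((EX.cv_screener, RDF.type, AIRO.AISystem))   # assumed class
g.add((EX.cv_screener, RDFS.label, Literal("CV screening system")))
g.add((EX.bias_risk, RDF.type, AIRO.Risk))         # assumed class
g.add((EX.bias_risk, RDFS.label, Literal("Discriminatory ranking of applicants")))
g.add((EX.cv_screener, AIRO.hasRisk, EX.bias_risk))  # assumed property

print(g.serialize(format="turtle"))
```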

Technical · Switzerland · European Union · Uploaded on Jan 24, 2025
COMPL-AI is an open-source, compliance-centered evaluation framework for generative AI models.

United Kingdom · Uploaded on Jan 9, 2025
This document provides a structured framework for gaining informed consent from individuals before using their copyright works (including posts, articles, or comments), Name, Image, Likeness (NIL), or other personal data in an engineered system. It pulls together current best practice from many sources, including the GDPR, the Article 29 Working Party, multiple ISO standards, and the NIST RMF, and presents it in one place.

Educational · United Kingdom · Uploaded on Dec 9, 2024
Newton’s Tree’s Federated AI Monitoring Service (FAMOS) is a dashboard for real-time monitoring of healthcare AI products. The dashboard enables users to observe and monitor the quality of data going into the AI, changes in the AI’s outputs, and developments in how healthcare staff use the product (a minimal drift-check sketch follows this entry).

Related lifecycle stage(s)

Operate & monitor · Deploy
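
The kind of data-quality and output-drift monitoring described above can be illustrated with a population stability index (PSI) check between a reference window and a live window of a model input or output. This is a generic sketch, not FAMOS itself; the bin count and the 0.2 alert threshold are common rule-of-thumb choices, not the product's settings.

```python
# Generic drift check (population stability index) between a reference sample
# and a live sample. Illustrative only; not Newton's Tree's FAMOS.
import numpy as np

def population_stability_index(reference: np.ndarray, live: np.ndarray,
                               bins: int = 10) -> float:
    """PSI over quantile bins of the reference distribution."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range live values
    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    ref_pct = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    live_pct = np.clip(live_counts / live_counts.sum(), 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)   # e.g. model scores at deployment
live = rng.normal(0.4, 1.2, 1_000)        # e.g. scores from the latest week

psi = population_stability_index(reference, live)
# Common rule of thumb: PSI > 0.2 suggests a shift worth human review.
print(f"PSI = {psi:.3f}", "ALERT" if psi > 0.2 else "ok")
```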

Technical · Procedural · United States · Uploaded on Dec 6, 2024
Vectice is a regulatory MLOps platform for AI/ML developers and validators that streamlines documentation, governance, and collaborative review of AI/ML models. Designed to enhance audit readiness and ensure regulatory compliance, Vectice automates model documentation from development to validation. With features such as automated lineage tracking and a documentation co-pilot, Vectice lets AI/ML developers and validators work in their preferred environment while focusing on impactful work, accelerating productivity, and reducing risk.

Procedural · Uploaded on Nov 7, 2024
The Trustworthy AI Procurement Card™ is a non-exhaustive list of information that can accompany acquisition decisions. The Card is similar to Datasheets or Model Cards in that its objective is to promote transparency and better due diligence during the AI procurement process.

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.