Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!


Procedural | Uploaded on Nov 20, 2025
Mission KI is developing a voluntary quality standard guideline for artificial intelligence (AI) that strengthens the reliability and trustworthiness of AI applications and systems.

Procedural | Uploaded on Nov 20, 2025
Mission KI's Compliance Monitor is a tool for monitoring compliance with legal frameworks to facilitate interoperability, data flows and benefit sharing.

Educational | Uploaded on Nov 7, 2025
As part of the Mission KI project, the initiative has developed the innovative data set search engine (Daseen), which for the first time enables cross-source searches for data sets.

Technical | Uploaded on Oct 9, 2025
An open-source framework for large language model evaluations. Inspect can be used for a broad range of evaluations that measure coding, agentic tasks, reasoning, knowledge, behavior, and multi-modal understanding.

Related lifecycle stage(s)

Operate & monitor, Verify & validate

Educational | Uploaded on Aug 27, 2025
Elements of AI is a free online course, offered in Slovakia by AIslovakIA and Comenius University. Created by the University of Helsinki and Reaktor with EU support, it introduces the basics of artificial intelligence through six interactive modules.

Related lifecycle stage(s)

Operate & monitor

Technical, Procedural | Poland | Uploaded on Jul 23, 2025
CAST is an open framework for responsible AI design and engineering. It offers design heuristics and patterns, along with responsible AI (RAI) recommendations delivered through generative features and online content.

Procedural | Italy | Uploaded on Jun 19, 2025
ADMIT is a research tool within a broader methodological framework combining quantitative and qualitative strategies to identify, analyse, and mitigate social implications associated with automated decision-making systems while enhancing their potential benefits. It supports comprehensive assessments of sociotechnical impacts to inform responsible design, deployment, and governance of automation technologies.

Technical | Ireland | Uploaded on May 2, 2025
Risk Atlas Nexus provides tooling to connect fragmented AI governance resources through a community-driven approach to curation of linkages between risks, datasets, benchmarks, and mitigations. It transforms abstract risk definitions into actionable AI governance workflows.

Procedural | France | Uploaded on Mar 31, 2025
PolicyPilot is designed to assist users in creating and managing AI policies, streamlining AI governance with automated compliance monitoring and risk management.

Technical | United States | Uploaded on Mar 24, 2025
An open-source Python library designed for developers to calculate fairness metrics and assess bias in machine learning models. This library provides a comprehensive set of tools to ensure transparency, accountability, and ethical AI development.
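The library above is not named in this entry, so its API cannot be shown here; as a minimal, library-agnostic sketch, two group-fairness metrics that such tools commonly compute (demographic parity difference and the disparate impact ratio) can be calculated from binary predictions and group labels like this. All function names and example data below are illustrative assumptions.

```python
def _positive_rates(y_pred, groups):
    """Positive-prediction rate per group, from parallel lists."""
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    return rates

def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest group positive rates (0 = parity)."""
    values = sorted(_positive_rates(y_pred, groups).values())
    return values[-1] - values[0]

def disparate_impact_ratio(y_pred, groups):
    """Ratio of the lowest to the highest group positive rate (1 = parity)."""
    values = sorted(_positive_rates(y_pred, groups).values())
    return values[0] / values[-1]

# Example: binary predictions for two demographic groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
print(disparate_impact_ratio(preds, groups))         # 0.25 / 0.75 ≈ 0.333
```

A common rule of thumb is to flag models whose disparate impact ratio falls below 0.8, though the appropriate threshold depends on context.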

Educational | Ireland | Uploaded on Jan 29, 2025
The AI Risk Ontology (AIRO) is an open-source formal ontology that provides a minimal set of concepts and relations for modelling AI use cases and their associated risks. AIRO has been developed according to the requirements of the EU AI Act and international standards, including ISO/IEC 23894 on AI risk management and the ISO 31000 family of standards.

Technical | United States | Uploaded on Jan 8, 2025
MLPerf Client is a benchmark for Windows and macOS that focuses on client form factors in ML inference scenarios such as AI chatbots and image classification. The benchmark evaluates performance across different hardware and software configurations and provides a command-line interface.

Technical, Procedural | United States | Uploaded on Dec 6, 2024
Vectice is a regulatory MLOps platform for AI/ML developers and validators that streamlines documentation, governance, and collaborative review of AI/ML models. Designed to enhance audit readiness and ensure regulatory compliance, Vectice automates model documentation from development to validation. With features like automated lineage tracking and a documentation co-pilot, Vectice lets AI/ML developers and validators work in their preferred environment while focusing on impactful tasks, accelerating productivity, and reducing risk.

Technical | International | Uploaded on Dec 6, 2024
A benchmark for evaluating machine learning agents on machine learning engineering tasks.

Technical | United States | Uploaded on Nov 8, 2024
The Python Risk Identification Tool for generative AI (PyRIT) is an open-access automation framework that empowers security professionals and machine learning engineers to proactively find risks in their generative AI systems.

Related lifecycle stage(s)

Operate & monitor, Verify & validate

Procedural | Uploaded on Nov 7, 2024
The Trustworthy AI Procurement Card™ is a non-exhaustive list of information that can accompany acquisition decisions. The Card is similar to Datasheets or Model Cards, in the sense that its objective is to promote transparency and better due diligence during the AI procurement process.

Educational | United States | Uploaded on Nov 6, 2024
The deck of 50 Trustworthy AI Cards™ corresponds to the 50 most relevant concepts under five categories: Data, AI, Generative AI, Governance, and Society. The Cards are used to build awareness and literacy about the opportunities and risks of AI, and about how to govern these technologies.

Procedural | France | Uploaded on Oct 25, 2024
Online tool for estimating the carbon emissions generated by AI model usage.

Related lifecycle stage(s)

Plan & design
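Estimators of this kind typically multiply hardware power draw by runtime and data-centre overhead to get energy, then apply the carbon intensity of the local electricity grid. A back-of-the-envelope sketch of that calculation follows; the function name and all default figures (power draw, PUE, grid intensity) are illustrative placeholders, not values from the tool itself.

```python
def estimate_co2_kg(power_watts, hours, pue=1.5, grid_kg_per_kwh=0.4):
    """Rough CO2-equivalent emissions, in kilograms, for an AI workload.

    power_watts     -- average draw of the hardware (e.g. one GPU)
    hours           -- total runtime
    pue             -- power usage effectiveness (data-centre overhead)
    grid_kg_per_kwh -- carbon intensity of the electricity grid
    """
    energy_kwh = (power_watts / 1000) * hours * pue
    return energy_kwh * grid_kg_per_kwh

# Example: a 300 W GPU running for 100 hours.
print(round(estimate_co2_kg(300, 100), 2))  # 18.0 kg CO2e
```

Real estimators refine this with measured rather than nominal power draw and with region- and time-specific grid intensity, which can vary by an order of magnitude between grids.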

Educational | United States | Uploaded on Sep 9, 2024
Judgment Call is an award-winning responsible innovation game and team-based activity that puts Microsoft's AI principles of fairness, privacy and security, reliability and safety, transparency, inclusion, and accountability into action. The game provides an easy-to-use method for cultivating stakeholder empathy through scenario-imagining.

Procedural | Japan | Uploaded on Sep 9, 2024
The General Understanding on AI and Copyright, released by the Japan Copyright Office, aims to clarify how Japan's current Copyright Act applies to AI technologies. It covers three main topics: the training stage of AI development, the generation and utilisation stage, and the copyrightability of AI-generated material.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.