Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.


Educational · Ireland · Uploaded on Jan 29, 2025
The AI Risk Ontology (AIRO) is an open-source formal ontology that provides a minimal set of concepts and relations for modelling AI use cases and their associated risks. AIRO has been developed according to the requirements of the EU AI Act and international standards, including ISO/IEC 23894 on AI risk management and the ISO 31000 family of standards.

Technical · Procedural · United States · Uploaded on Dec 6, 2024
Vectice is a regulatory MLOps platform for AI/ML developers and validators that streamlines documentation, governance, and collaborative reviewing of AI/ML models. Designed to enhance audit readiness and ensure regulatory compliance, Vectice automates model documentation, from development to validation. With features like automated lineage tracking and documentation co-pilot, Vectice empowers AI/ML developers and validators to work in their favorite environment while focusing on impactful work, accelerating productivity, and reducing risk.

Technical · Procedural · United States · United Kingdom · European Union · Uploaded on Jul 11, 2024
Enzai’s EU AI Act Compliance Framework makes achieving compliance with the world’s first comprehensive AI governance legislation as easy as possible. The framework breaks hundreds of pages of regulation down into easy-to-follow steps, enabling organisations to seamlessly and confidently assess the compliance of their AI systems with the Act and complete the requisite conformity assessments.

Procedural · United Kingdom · Uploaded on May 21, 2024
The Algorithmic Transparency Recording Standard (ATRS) is a framework for capturing information about algorithmic tools, including AI systems. It is designed to help public sector bodies openly publish information about the algorithmic tools they use in decision-making processes that affect members of the public.

Technical · Educational · Procedural · Sweden · Slovakia · Canada · Finland · Uploaded on Nov 13, 2023
Saidot empowers AI teams to carry out high-quality AI governance efficiently.

Technical · Educational · Procedural · Uploaded on Apr 17, 2023
AI assurance as smart as your AI systems

Technical · Procedural · Uploaded on Apr 13, 2023
This paper addresses the complex and contentious issues surrounding AI governance, focusing on organizing overarching and systematic approaches for the transparency of AI algorithms, and proposes a practical toolkit for implementation. Designed to be highly user-friendly for both businesses and government agencies engaging in self- and co-regulation, the toolkit presents disclosure items in a list format, allowing for discretion and flexibility when considering AI algorithm providers, users, and risks.

Educational · Procedural · Uploaded on Apr 13, 2023
This app matches information in model cards to proposed regulatory compliance descriptions in the EU AI Act. This is a prototype to explore the feasibility of automatic checks for compliance, and is limited to specific provisions of Article 13 of the Act, “Transparency and provision of information to users”.

Related lifecycle stage(s): Deploy

Technical · Educational · Procedural · Uploaded on Apr 4, 2023
An enterprise platform for governance, risk, and compliance in AI, supporting regulatory compliance with global laws that intersect with AI.

Procedural · France · Uploaded on Mar 20, 2023
Fairness metric selection tool for AI applications.

Technical · Procedural · Uploaded on Mar 20, 2023
Automated AI testing, monitoring, and governance

Technical · Procedural · Uploaded on Mar 20, 2023
An accessible framework (a single A3 sheet) for governing the ethical development and use of AI, covering the what (ethics), the how (realisation principles), and the submission of evidence for approval. Trustworthiness and bias are also covered.

Technical · Procedural · Uploaded on Mar 2, 2023
Zupervise is a unified risk transparency platform for AI governance.

Technical · Procedural · Uploaded on Feb 14, 2023
An AI governance, risk, and compliance software platform that helps organizations understand and manage the risks associated with AI while meeting their evolving regulatory obligations.

Educational · Procedural · Uploaded on Sep 15, 2022

This playbook is a legal research resource, available as a public resource, covering activities related to data gathering, data governance, and the disposition of an AI model. It aims to benefit academic and government researchers, including those in New York State, who wish to understand how best to use AI models to provide natural language processing ("NLP") as public infrastructure but who do not have legal resources.


Educational · Procedural · Uploaded on Sep 15, 2022

Model cards are the tool for transparent AI documentation. Model cards are essential for discoverability, reproducibility, and sharing. You can find a model card as the README.md file in any model repo. Under the hood, model cards are simple Markdown files with additional metadata.
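As a hedged illustration (the field names and headings below are examples, not a required schema), such a README.md model card with its metadata header might look like:

```markdown
---
# Metadata block read by the model hub (illustrative fields)
license: apache-2.0
language: en
tags:
  - text-classification
---

# Model Card: example-sentiment-model

## Intended use
Binary sentiment classification of English product reviews.

## Limitations
Not evaluated on non-English text or on domains outside product reviews.
```

The metadata at the top powers discoverability (filtering and search on the hub), while the Markdown body carries the human-readable documentation.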

Related lifecycle stage(s): Deploy

Technical · Educational · Procedural · Uploaded on Sep 15, 2022

A documentation-first, human-centered data collection project as part of the BigScience initiative. A geographically diverse set of target language groups were identified (Arabic, Basque, Chinese, Catalan, English, French, Indic languages, Indonesian, Niger-Congo languages, Portuguese, Spanish, and Vietnamese, as well as programming languages) for which to collect metadata on potential data sources. To structure this effort, an online catalogue was developed as a supporting tool for gathering metadata through organized public hackathons.



Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.