Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!

Filters applied — Approach: Procedural, Educational · Tool type: Documentation process · Objective: Transparency

Technical · Procedural · EU · Uploaded on May 2, 2025
Croissant is an open-source framework developed by MLCommons to standardise dataset descriptions, enhance data discoverability, and facilitate automated use across machine-learning tasks. Croissant ensures datasets are consistently documented by providing structured metadata schemas, improving interoperability, transparency, and ease of integration.
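To make the idea of a structured metadata schema concrete, here is a minimal sketch of a Croissant-style JSON-LD dataset description, built in plain Python. The field names loosely follow the published Croissant format (schema.org vocabulary plus the `cr:` Croissant namespace); the dataset itself is hypothetical, and this is an illustrative approximation, not a spec-conformant record.

```python
import json

# Hypothetical dataset description in the spirit of the Croissant format.
# "sc:" points at schema.org terms; "cr:" at the Croissant vocabulary.
metadata = {
    "@context": {
        "sc": "https://schema.org/",
        "cr": "http://mlcommons.org/croissant/",
    },
    "@type": "sc:Dataset",
    "name": "toy-sentiment",  # hypothetical dataset name
    "description": "An illustrative sentiment-classification dataset.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    # Each distribution entry documents one file the dataset ships with.
    "distribution": [
        {
            "@type": "cr:FileObject",
            "@id": "train.csv",
            "encodingFormat": "text/csv",
        }
    ],
}

# Because the description is ordinary JSON-LD, tools can parse and
# validate it without any dataset-specific code.
print(json.dumps(metadata, indent=2))
```

Consistently machine-readable records like this are what enable the discoverability and automated loading the entry describes: a consumer can inspect `distribution` to find out what files exist and how they are encoded before downloading anything.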

Educational · Ireland · Uploaded on Jan 29, 2025
The AI Risk Ontology (AIRO) is an open-source formal ontology that provides a minimal set of concepts and relations for modelling AI use cases and their associated risks. AIRO has been developed according to the requirements of the EU AI Act and international standards, including ISO/IEC 23894 on AI risk management and the ISO 31000 family of standards.

Related lifecycle stage(s): Operate & monitor · Verify & validate

Technical · Procedural · United States · Uploaded on Dec 6, 2024
Vectice is a regulatory MLOps platform for AI/ML developers and validators that streamlines documentation, governance, and collaborative review of AI/ML models. Designed to enhance audit readiness and ensure regulatory compliance, Vectice automates model documentation from development through validation. With features like automated lineage tracking and a documentation co-pilot, Vectice lets AI/ML developers and validators work in their favorite environment while focusing on impactful work, accelerating productivity, and reducing risk.

Procedural · Technical · United States · United Kingdom · European Union · Uploaded on Jul 11, 2024
Enzai’s EU AI Act Compliance Framework makes achieving compliance with the world’s first comprehensive AI governance legislation as easy as possible. The framework breaks hundreds of pages of regulation down into easy-to-follow steps, enabling organisations to seamlessly and confidently assess the compliance of their AI systems with the Act and complete the requisite conformity assessments.

Related lifecycle stage(s): Plan & design · Operate & monitor · Deploy

Procedural · United Kingdom · Uploaded on May 21, 2024
The Algorithmic Transparency Recording Standard (ATRS) is a framework for capturing information about algorithmic tools, including AI systems. It is designed to help public sector bodies openly publish information about the algorithmic tools they use in decision-making processes that affect members of the public.
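A minimal sketch of what publishing such a record might look like, assuming illustrative field names (this is not the official ATRS schema; the tool and organisation named are hypothetical):

```python
# Hypothetical transparency record in the spirit of the ATRS: a public
# body documents an algorithmic tool in a structured, publishable form.
# All field names and values here are illustrative assumptions.
record = {
    "tool_name": "Benefit-claim triage model",
    "organisation": "Example Department",
    "description": "Ranks incoming claims for manual review.",
    "decision_role": "Decision support; a human makes the final decision.",
    "data_sources": ["historical claim outcomes"],
    "oversight": "Monthly accuracy review by case workers.",
}

def missing_fields(rec: dict) -> list[str]:
    """Return the names of any fields left empty, so incomplete
    records can be flagged before publication."""
    return [field for field, value in rec.items() if not value]

print(missing_fields(record))  # an empty list means the record is complete
```

The value of a recording standard is exactly this kind of mechanical completeness check: once the information is structured, gaps in what a public body discloses become visible at a glance.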

Procedural · United Kingdom · Uploaded on Feb 20, 2024
Anekanta® AI’s AI Risk Intelligence System™ solves the problem of risk classification for biometric and AI systems by challenging the entire AI system lifecycle and providing risk reports that inform compliance requirements in line with the EU AI Act and international regulatory frameworks.

Technical · Procedural · United States · Uploaded on Nov 29, 2023
AI/ML applications have unique security threats. Project GuardRail is a set of security and privacy requirements that AI/ML applications should meet during their design phase, serving as guardrails against these threats. These requirements help scope the threats such applications must be protected against.

Technical · Educational · Procedural · Sweden · Slovakia · Canada · Finland · Uploaded on Nov 13, 2023
Saidot empowers AI teams to carry out high-quality AI governance efficiently.

Technical · Procedural · United States · Uploaded on Oct 3, 2023
A comprehensive AI governance and compliance platform focused on giving organizations the risk-management capabilities they need to adopt AI swiftly and responsibly.


Technical · Procedural · Uploaded on Apr 13, 2023
This paper addresses the complex and contentious issues surrounding AI governance, focusing on organizing overarching and systematic approaches for the transparency of AI algorithms, and proposes a practical toolkit for implementation. Designed to be highly user-friendly for both businesses and government agencies engaging in self- and co-regulation, the toolkit presents disclosure items in a list format, allowing for discretion and flexibility when considering AI algorithm providers, users, and risks.

Educational · Procedural · Uploaded on Apr 13, 2023
This app matches information in model cards to proposed regulatory compliance descriptions in the EU AI Act. This is a prototype to explore the feasibility of automatic checks for compliance, and is limited to specific provisions of Article 13 of the Act, “Transparency and provision of information to users”.

Related lifecycle stage(s): Deploy
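The prototype above matches model-card content against compliance descriptions; a minimal sketch of that kind of keyword-based check, assuming simplified placeholder topics (these are not the actual Article 13 text, and the matching logic is an assumption, not the app's real implementation):

```python
# Hypothetical, simplified topic-to-keyword map standing in for the
# Article 13 information requirements. Real compliance checking would
# need the legal text and far more robust matching than substrings.
ARTICLE_13_TOPICS = {
    "intended purpose": ["intended use", "intended purpose"],
    "accuracy": ["accuracy", "performance metrics"],
    "limitations": ["limitations", "out-of-scope"],
    "human oversight": ["human oversight", "human in the loop"],
}

def coverage(model_card_text: str) -> dict[str, bool]:
    """Report, per topic, whether the model card mentions any of
    that topic's keywords (case-insensitive substring match)."""
    text = model_card_text.lower()
    return {
        topic: any(kw in text for kw in keywords)
        for topic, keywords in ARTICLE_13_TOPICS.items()
    }

card = (
    "This model performs sentiment analysis (intended use). "
    "Accuracy: 0.91 on the held-out split. "
    "Known limitations: short English texts only."
)
report = coverage(card)
print(report)  # "human oversight" maps to False: the card never mentions it
```

Even this crude check illustrates the prototype's point: structured model cards make it feasible to flag, automatically, which transparency obligations a card does and does not address.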

Procedural · Uploaded on Apr 5, 2023 · <1 day
FRAIA (the Fundamental Rights and Algorithms Impact Assessment) assesses the risks that algorithms pose to human rights and promotes measures to address them. It fosters dialogue among the professionals involved in developing or deploying an algorithm. The client is accountable for implementing FRAIA to prevent uncertain outcomes of algorithm use. FRAIA mitigates the risks of carelessness, ineffectiveness, and violations of citizens' rights.

Technical · Procedural · Slovenia · Uploaded on Apr 4, 2023
QLECTOR LEAP is a cloud-based application that guides production processes much as GPS supports a road trip. It uses historical data and AI to forecast production and suggest optimal measures when unplanned events occur. It combines machine-learning methods specific to manufacturing with its own AI technology to build and maintain a data-driven digital twin of a factory.

Related lifecycle stage(s): Verify & validate

Technical · Educational · Procedural · Uploaded on Apr 4, 2023
An enterprise platform for governance, risk, and compliance in AI, supporting regulatory compliance with global laws that intersect with AI.

Technical · Procedural · Uploaded on Mar 30, 2023 · <1 day
GRACE is an AI governance platform that offers a central registry for all AI models, tools, and workflows, supporting risk mitigation, compliance, and collaboration across AI teams and projects. It provides real-time compliance, transparency, and trust while promoting AI innovation. The platform works across all major cloud vendors and offers out-of-the-box frameworks for complying with the EU AI Act, AI standards, data regulations, and AI frameworks.

Technical · Procedural · Uploaded on Mar 20, 2023
Automated AI testing, monitoring, and governance.

Technical · Procedural · Uploaded on Mar 20, 2023
An accessible framework (a single A3 sheet) for governing the ethical development and use of AI, covering the what (ethics), the how (realisation principles), and the submission of evidence for approval. Trustworthiness and bias are also covered.

Technical · Procedural · Uploaded on Mar 2, 2023
Zupervise is a unified risk transparency platform for AI governance.

Technical · Procedural · Uploaded on Feb 14, 2023
An AI governance, risk, and compliance software platform that helps organizations understand and manage the risks associated with AI while meeting their evolving regulatory obligations.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.