Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Type

Transparency

Clear all

Origin

Scope

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!

Submit

TechnicalProceduralUploaded on Aug 14, 2025>1 year
A legally enforceable AI-user interaction framework that verifies informed consent through multimodal methods, protects user intellectual property via blockchain-based tracking, and ensures lifetime authorship rights with legal safeguards against unauthorized use or AI training reuse.

ProceduralUploaded on Aug 1, 2025
BeSpecial is an AI-driven platform designed to support university students with dyslexia by providing personalized digital tools and tailored learning strategies. Developed within the European VRAILEXIA project, BeSpecial combines clinical data, self-assessments, and psychometric tests to recommend customized resources like audiobooks and concept maps, as well as inclusive academic practices. The platform also raises awareness and trains educators to foster inclusive higher education environments.

Related lifecycle stage(s)

Operate & monitorDeploy

TechnicalEducationalUploaded on Aug 1, 2025
Dytective by Change Dyslexia is an innovative AI-powered tool designed to detect the risk of dyslexia in children quickly and reliably. Developed in collaboration with researchers, Dytective combines language exercises with machine learning to screen for dyslexia in just 15 minutes. Backed by scientific validation and used by schools and families worldwide, it empowers early intervention and promotes equal opportunities in education.

Related lifecycle stage(s)

Operate & monitorDeploy

TechnicalProceduralPolandUploaded on Jul 23, 2025
CAST is an open framework for responsible AI design and engineering. It offers design heuristics and patterns, and RAI recommendations through generative features and online content.

Related lifecycle stage(s)

Build & interpret modelPlan & design

TechnicalIrelandUploaded on May 2, 2025
Risk Atlas Nexus provides tooling to connect fragmented AI governance resources through a community-driven approach to curation of linkages between risks, datasets, benchmarks, and mitigations. It transforms abstract risk definitions into actionable AI governance workflows.

TechnicalProceduralEUUploaded on May 2, 2025
Croissant is an open-source framework developed by MLCommons to standardise dataset descriptions, enhance data discoverability, and facilitate automated use across machine-learning tasks. Croissant ensures datasets are consistently documented by providing structured metadata schemas, improving interoperability, transparency, and ease of integration.

ProceduralFranceUploaded on Mar 31, 2025
PolicyPilot is designed to assist users in creating and managing AI policies, streamlining AI governance with automated compliance monitoring and risk management.

Objective(s)


ProceduralCanadaUploaded on Mar 31, 2025
This program provides organisations with a comprehensive, independent review of their AI approaches, ensuring alignment with consensus standards and enhancing trust among stakeholders and the public in their AI practices.

Related lifecycle stage(s)

Verify & validate

EducationalIrelandUploaded on Jan 29, 2025
The AI Risk Ontology (AIRO) is an open-source formal ontology that provides a minimal set of concepts and relations for modelling AI use cases and their associated risks. AIRO has been developed according to the requirements of the EU AI Act and international standards, including ISO/IEC 23894 on AI risk management and ISO 31000 family of standards.

Objective(s)

Related lifecycle stage(s)

Operate & monitorVerify & validate

TechnicalSwitzerlandEuropean UnionUploaded on Jan 24, 2025
COMPL-AI is an open-source compliance-centered evaluation framework for Generative AI models

TechnicalProceduralUnited StatesUploaded on Dec 6, 2024
Vectice is a regulatory MLOps platform for AI/ML developers and validators that streamlines documentation, governance, and collaborative reviewing of AI/ML models. Designed to enhance audit readiness and ensure regulatory compliance, Vectice automates model documentation, from development to validation. With features like automated lineage tracking and documentation co-pilot, Vectice empowers AI/ML developers and validators to work in their favorite environment while focusing on impactful work, accelerating productivity, and reducing risk.

ProceduralUploaded on Nov 7, 2024
Trustworthy AI Procurement CardTM is a non-exhaustive list of information which can accompany acquisition decisions. The Card is similar to the DataSheets or Model Cards, in the sense that the objective is to promote transparency and better due diligence during AI procurement process.

EducationalUnited StatesUploaded on Nov 6, 2024<1 day
The deck of 50 Trustworthy AI Cards™ correspond to 50 most relevant concepts under the 5 categories of: Data, AI, Generative AI, Governance, and Society. The Cards are used to create awareness and literacy on opportunities and risks about AI, and how to govern these technologies.

ProceduralSingaporeUploaded on Oct 2, 2024
Resaro offers independent, third-party assurance of mission-critical AI systems. It promotes responsible, safe and robust AI adoption for enterprises, through technical advisory and evaluation of AI systems against emerging regulatory requirements.

ProceduralUnited KingdomUploaded on Oct 2, 2024
Warden AI provides independent, tech-led AI bias auditing, designed for both HR Tech platforms and enterprises deploying AI solutions in HR. As the adoption of AI in recruitment and HR processes grows, concerns around fairness have intensified. With the advent of regulations such as NYC Local Law 144 and the EU AI Act, organisations are under increasing pressure to demonstrate compliance and fairness.

Objective(s)

Related lifecycle stage(s)

Operate & monitorVerify & validate

EducationalUnited StatesUploaded on Nov 5, 2024
Community jury is concept where multiple stakeholders impacted by a same technology are given the possibility to learn about a project, discuss with one another and provide feedback.

Objective(s)


TechnicalFranceUploaded on Aug 2, 2024
Evaluate input-output safeguards for LLM systems such as jailbreak and hallucination detectors, to understand how good they are and on which type of inputs they fail.

TechnicalUploaded on Aug 2, 2024
Responsible AI (RAI) Repairing Assistant

ProceduralNew ZealandUploaded on Jul 11, 2024
The Algorithm Charter for Aotearoa New Zealand is a set of voluntary commitments developed by Stats NZ in 2020 to increase public confidence and visibility around the use of algorithms within Aotearoa New Zealand’s public sector. In 2023, Stats NZ commissioned Simply Privacy to develop the Algorithm Impact Assessment Toolkit (AIA Toolkit) to help government agencies meet the Charter commitments. The AIA Toolkit is designed to facilitate informed decision-making about the benefits and risks of government use of algorithms.

ProceduralUploaded on Jul 2, 2024
BSI Flex 1890 defines terms, abbreviations, and acronyms for the connected and automated vehicles (CAVs) sector, focused on those relating to vehicles and associated technologies.

Objective(s)


catalogue Logos

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.