Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.


ADMIT — Procedural | Italy | Uploaded on Jun 19, 2025
ADMIT is a research tool within a broader methodological framework combining quantitative and qualitative strategies to identify, analyse, and mitigate the social implications of automated decision-making systems while enhancing their potential benefits. It supports comprehensive assessments of sociotechnical impacts to inform responsible design, deployment, and governance of automation technologies.

HiddenLayer AISec Platform — Technical | United States | Uploaded on May 19, 2025
HiddenLayer's AISec Platform is a GenAI protection suite purpose-built to ensure the integrity of AI models throughout the MLOps pipeline. The platform provides detection and response for generative and traditional AI models, identifying prompt injections, adversarial AI attacks, and digital supply-chain vulnerabilities.

Technical | Canada | Uploaded on Apr 29, 2024
An on-premises AI security platform that automates AI red teaming and firewall monitoring.

Technical | Uploaded on Mar 27, 2023
All-in-one Trustworthy AI Platform


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for issues resulting from the posting of links to third-party tools and metrics in this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.