Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Type

Origin

Scope

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!

Submit

TechnicalProceduralUploaded on Aug 14, 2025>1 year
A legally enforceable AI-user interaction framework that verifies informed consent through multimodal methods, protects user intellectual property via blockchain-based tracking, and ensures lifetime authorship rights with legal safeguards against unauthorized use or AI training reuse.

EducationalUnited KingdomUploaded on Jun 19, 2025
The Code sets out a process to identify, evaluate, and mitigate the known risks of AI to children and prepare for the known unknowns. It requires those who build and deploy AI systems to consider the foreseeable risks to children by design and default.

TechnicalEducationalMexicoUnited StatesIsraelUploaded on May 19, 2025
SeismicAI is a provider of innovative Earthquake Early Warning Systems (EEW) ensuring earthquake preparedness. SeismicAI's algorithms utilise local sensors to issue high-precision alerts for earthquake preparedness. The system covers the full early warning cycle - from monitoring and reporting, through alerts, to optionally triggering automated preventive actions.

Related lifecycle stage(s)

Operate & monitor

TechnicalEuropeUploaded on May 19, 2025
The AIFS is the first fully operational weather prediction open model using machine learning technology for weather forecasting.

TechnicalIndiaUploaded on Apr 3, 2025
The Infosys Responsible AI toolkit provides a set of APIs to integrate safety, security, privacy, explainability, fairness, and hallucination detection into AI solutions, ensuring trustworthiness and transparency.

Related lifecycle stage(s)

DeployPlan & design

TechnicalUnited StatesUploaded on May 19, 2025
HiddenLayer’s AISec Platform is a GenAI Protection Suite purpose-built to ensure the integrity of AI models throughout the MLOps pipeline. The platform provides detection and response for GenAI and traditional AI models to detect prompt injections, adversarial AI attacks, and digital supply chain vulnerabilities.

ProceduralCanadaUploaded on Mar 31, 2025
This program provides organisations with a comprehensive, independent review of their AI approaches, ensuring alignment with consensus standards and enhancing trust among stakeholders and the public in their AI practices.

Related lifecycle stage(s)

Verify & validate

ProceduralFranceUploaded on Apr 2, 2025
This tool provides a comprehensive risk management framework for frontier AI development, integrating established risk management principles with AI-specific practices. It combines four key components: risk identification through systematic methods, quantitative risk analysis, targeted risk treatment measures, and clear governance structures.

Objective(s)

Related lifecycle stage(s)

Build & interpret model

EducationalIrelandUploaded on Jan 29, 2025
The AI Risk Ontology (AIRO) is an open-source formal ontology that provides a minimal set of concepts and relations for modelling AI use cases and their associated risks. AIRO has been developed according to the requirements of the EU AI Act and international standards, including ISO/IEC 23894 on AI risk management and ISO 31000 family of standards.

Objective(s)

Related lifecycle stage(s)

Operate & monitorVerify & validate

TechnicalSwitzerlandEuropean UnionUploaded on Jan 24, 2025
COMPL-AI is an open-source compliance-centered evaluation framework for Generative AI models

EducationalUnited KingdomUploaded on Dec 9, 2024
Newton’s Tree’s Federated AI Monitoring Service (FAMOS) is a dashboard for real-time monitoring of healthcare AI products. The dashboard is designed to enable users to observe and monitor the quality of data that goes into the AI, changes to the outputs of the AI, and developments in how healthcare staff use the product.

Related lifecycle stage(s)

Operate & monitorDeploy

EducationalUnited KingdomUploaded on Dec 9, 2024
PLIM is designed to make benchmarking and continuous monitoring of LLMs safer and more fit for purpose. This is particularly important in high-risk environments, e.g. healthcare, finance, insurance and defence. Having community-based prompts to validate models as fit for purpose is safer in a world where LLMs are not static. 

Objective(s)


TechnicalFranceUploaded on Dec 6, 2024
AIxploit is a tool designed to evaluate and enhance the robustness of Large Language Models (LLMs) through adversarial testing. This tool simulates various attack scenarios to identify vulnerabilities and weaknesses in LLMs, ensuring they are more resilient and reliable in real-world applications.

Objective(s)

Related lifecycle stage(s)

Operate & monitorVerify & validate

TechnicalUploaded on Dec 6, 2024
Continuous proactive AI red teaming platform for AI and GenAI models, applications and agents.

ProceduralUploaded on Nov 7, 2024
Trustworthy AI Procurement CardTM is a non-exhaustive list of information which can accompany acquisition decisions. The Card is similar to the DataSheets or Model Cards, in the sense that the objective is to promote transparency and better due diligence during AI procurement process.

EducationalUnited StatesUploaded on Nov 6, 2024<1 day
The deck of 50 Trustworthy AI Cards™ correspond to 50 most relevant concepts under the 5 categories of: Data, AI, Generative AI, Governance, and Society. The Cards are used to create awareness and literacy on opportunities and risks about AI, and how to govern these technologies.

ProceduralFranceUploaded on Oct 25, 2024
Online tool for estimating the carbon emissions generated by AI model usage.

Related lifecycle stage(s)

Plan & design

ProceduralUnited KingdomUploaded on Oct 2, 2024
Warden AI provides independent, tech-led AI bias auditing, designed for both HR Tech platforms and enterprises deploying AI solutions in HR. As the adoption of AI in recruitment and HR processes grows, concerns around fairness have intensified. With the advent of regulations such as NYC Local Law 144 and the EU AI Act, organisations are under increasing pressure to demonstrate compliance and fairness.

Objective(s)

Related lifecycle stage(s)

Operate & monitorVerify & validate

ProceduralUploaded on Oct 2, 2024
FairNow is an AI governance software tool that simplifies and centralises AI risk management at scale. To build and maintain trust with customers, organisations must conduct thorough risk assessments on their AI models, ensuring compliance, fairness, and security. Risk assessments also ensure organisations know where to prioritise their AI governance efforts, beginning with high-risk models and use cases.

Objective(s)


ProceduralJapanUploaded on Sep 9, 2024
The General Understanding on AI and Copyright, released by Japan Copyright Office aims at providing clarity on how Japan's current Copy Right Act should be applied in relation to AI technologies. Three main topics presented include: the training stage of AI developments, the generation, utilisation stage, and copyrightability of AI-generated material.

Objective(s)


catalogue Logos

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.