Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.


Educational · Malta · Uploaded on Sep 1, 2025 · <1 day
This is a complete workshop package for teaching practical governance tools for using AI in collaborative teams. It covers the theoretical background, facilitator notes for each phase, and the student workbook.

Procedural · Italy · Uploaded on Jun 19, 2025
ADMIT is a research tool within a broader methodological framework combining quantitative and qualitative strategies to identify, analyse, and mitigate social implications associated with automated decision-making systems while enhancing their potential benefits. It supports comprehensive assessments of sociotechnical impacts to inform responsible design, deployment, and governance of automation technologies.

Technical · United States · Uploaded on May 15, 2025
The GDA leverages aerial imagery, satellite data, and machine learning techniques to evaluate the damage in areas impacted by natural disasters. This tool greatly enhances the efficiency and precision of disaster response operations.

Procedural · France · Uploaded on Mar 31, 2025
PolicyPilot is designed to assist users in creating and managing AI policies, streamlining AI governance with automated compliance monitoring and risk management.

Technical · United States · Uploaded on May 19, 2025
HiddenLayer’s AISec Platform is a GenAI protection suite purpose-built to ensure the integrity of AI models throughout the MLOps pipeline. The platform provides detection and response for GenAI and traditional AI models, covering prompt injections, adversarial AI attacks, and digital supply-chain vulnerabilities.

Technical · Switzerland · European Union · Uploaded on Jan 24, 2025
COMPL-AI is an open-source, compliance-centered evaluation framework for generative AI models.

Procedural · Japan · Uploaded on Sep 9, 2024
The General Understanding on AI and Copyright, released by the Japan Copyright Office, aims to clarify how Japan's current Copyright Act applies to AI technologies. It covers three main topics: the training stage of AI development, the generation and utilisation stage, and the copyrightability of AI-generated material.

Procedural · New Zealand · Uploaded on Jul 11, 2024
The Algorithm Charter for Aotearoa New Zealand is a set of voluntary commitments developed by Stats NZ in 2020 to increase public confidence and visibility around the use of algorithms within Aotearoa New Zealand’s public sector. In 2023, Stats NZ commissioned Simply Privacy to develop the Algorithm Impact Assessment Toolkit (AIA Toolkit) to help government agencies meet the Charter commitments. The AIA Toolkit is designed to facilitate informed decision-making about the benefits and risks of government use of algorithms.

Procedural · Uploaded on May 27, 2024
PRIDAR (Prioritization, Research, Innovation, Development, Analysis, and Review) is a risk management framework.

Educational · Uploaded on Apr 2, 2024 · <1 hour
Approaches to disability-centered data, models, and systems oversight.

Procedural · United Kingdom · Uploaded on Feb 20, 2024
This internationally recognised AI governance framework gives boards and senior leaders signposts to high-level areas of accountability through twelve principles that can sit at the heart of any AI governance policy. It is for boards that wish to start their AI journey, or for those that recognise AI governance as a key enabler of AI success.

Procedural · United Kingdom · Uploaded on Feb 20, 2024
Anekanta® AI’s Privacy Impact Risk Assessment System™ identifies the potential privacy impact risks that arise when using remote biometric AI systems, including facial recognition technology. The system creates automated reports that recommend mitigations addressing all identified risks, including AI governance, human rights, union law, employment rights, privacy rights, impact assessments, prohibitions, voluntary and harmonised standards, and good practice.

Procedural · United Kingdom · Uploaded on Feb 20, 2024
Anekanta® AI’s AI Risk Intelligence System™ solves the problem of biometrics and AI system risk classification by challenging the entire AI system lifecycle and providing risk reports which inform compliance requirements in line with the EU AI Act and international regulatory frameworks.

Technical · Educational · Procedural · Sweden · Slovakia · Canada · Finland · Uploaded on Nov 13, 2023
Saidot empowers AI teams to carry out high-quality AI governance efficiently.

Educational · Procedural · Uploaded on Sep 29, 2023
Human rights impact assessments of AI systems require the meaningful engagement of the individuals and groups most affected. This is particularly relevant for products and services using AI, because there is still not enough practical guidance on how to involve stakeholders in the design, development, and deployment of AI systems.

Technical · Educational · Procedural · Uploaded on Jul 4, 2023 · <1 hour
The Data Carbon Scorecard provides a rapid assessment of the environmental impact of a proposed data project, enabling you to quickly gauge its CO2 implications and determine the ‘best’ course of action for the environment. By answering just nine questions, you receive a traffic-light display of data CO2 hotspots and an overall Data CO2 Score for the whole project within minutes. The scorecard can be used by any individual, team, or organization.
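The mechanism described above (nine questions aggregated into an overall score and a traffic-light rating) can be sketched in a few lines. This is an illustrative stand-in only: the actual questions, weights, and thresholds of the Data Carbon Scorecard are not given in this listing, so everything below is a hypothetical example of the general scoring pattern.

```python
# Hypothetical sketch of a 9-question traffic-light scorecard.
# Questions, scoring, and cut-offs are illustrative assumptions,
# not the real Data Carbon Scorecard's methodology.

def data_co2_score(answers):
    """Score nine yes/no answers: one point per CO2-friendly answer."""
    assert len(answers) == 9, "the scorecard asks exactly nine questions"
    return sum(1 for a in answers if a)

def traffic_light(score):
    """Map an overall score to a traffic-light rating (hypothetical cut-offs)."""
    if score >= 7:
        return "green"   # low data-CO2 risk
    if score >= 4:
        return "amber"   # some hotspots worth reviewing
    return "red"         # significant CO2 hotspots

answers = [True, True, False, True, False, True, True, False, True]
score = data_co2_score(answers)
print(score, traffic_light(score))  # prints: 6 amber
```

The traffic-light mapping is just a threshold function over the aggregate score; a real scorecard might weight questions unequally or flag individual hotspots separately.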

Technical · Uploaded on Apr 20, 2023 · <1 day
CounterGen is a framework for auditing and reducing bias in natural language processing (NLP) models, such as generative models (e.g., ChatGPT, GPT-J, GPT-3) or classification models (e.g., BERT). It does so by generating balanced datasets, evaluating the behavior of NLP models, and directly editing the internals of the model to reduce bias.
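The evaluation step described above is an instance of counterfactual bias testing: generate paired inputs that differ only in a demographic attribute, then compare the model's scores across the pairs. The sketch below shows that general technique with a toy scoring function standing in for a real model; the template strings, names, and function names are illustrative assumptions, not CounterGen's actual API.

```python
# Minimal sketch of counterfactual bias evaluation (the general technique,
# not CounterGen's real interface). The toy scorer stands in for an NLP model.

TEMPLATES = ["{} is a great engineer.", "{} should lead the project."]
GROUP_A = ["Alice", "Maria"]   # illustrative demographic group A
GROUP_B = ["Bob", "James"]     # illustrative demographic group B

def toy_sentiment(text):
    # Stand-in for a model score; a biased model could score groups differently.
    return 1.0 if "great" in text else 0.5

def mean_score(names):
    """Average model score over all templates filled with the given names."""
    scores = [toy_sentiment(t.format(n)) for n in names for t in TEMPLATES]
    return sum(scores) / len(scores)

# A nonzero gap between groups would indicate bias in the scorer.
gap = abs(mean_score(GROUP_A) - mean_score(GROUP_B))
print(f"bias gap: {gap:.3f}")  # prints: bias gap: 0.000
```

Because the toy scorer ignores the names entirely, the measured gap is zero; substituting a real sentiment or classification model would reveal any group-dependent difference.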


Technical · Procedural · Uploaded on Apr 13, 2023
This paper addresses the complex and contentious issues surrounding AI governance, focusing on organizing overarching and systematic approaches for the transparency of AI algorithms, and proposes a practical toolkit for implementation. Designed to be highly user-friendly for both businesses and government agencies engaging in self- and co-regulation, the toolkit presents disclosure items in a list format, allowing for discretion and flexibility when considering AI algorithm providers, users, and risks.

Technical · Educational · Uploaded on Apr 12, 2023
An AI tool that allows you to understand complex civic issues by listening to the perceptions and concerns of millions of citizens in Latin America and the Caribbean in real time.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.