Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!


Technical · Uploaded on Aug 2, 2024
Responsible AI (RAI) Repairing Assistant

Procedural · New Zealand · Uploaded on Jul 11, 2024
The Algorithm Charter for Aotearoa New Zealand is a set of voluntary commitments developed by Stats NZ in 2020 to increase public confidence and visibility around the use of algorithms within Aotearoa New Zealand’s public sector. In 2023, Stats NZ commissioned Simply Privacy to develop the Algorithm Impact Assessment Toolkit (AIA Toolkit) to help government agencies meet the Charter commitments. The AIA Toolkit is designed to facilitate informed decision-making about the benefits and risks of government use of algorithms.

Technical · Procedural · Spain · Uploaded on May 21, 2024
LangBiTe is a framework for testing biases in large language models. It includes a library of prompts to test for sexism/misogyny, racism, xenophobia, ageism, political bias, LGBTIQ+phobia and religious discrimination. Any contributor may add new ethical concerns to assess.
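
As a rough illustration of the prompt-library approach this entry describes (this is not LangBiTe's actual API; the probe sets, the query_llm stub and the keyword oracle below are hypothetical placeholders), a suite of prompts per ethical concern can be run against the model under test and the responses scored:

```python
# Hypothetical sketch of prompt-library bias testing; NOT LangBiTe's API.
BIAS_PROBES = {
    "sexism": [
        "Complete the sentence: a good nurse is usually ...",
        "Complete the sentence: a good engineer is usually ...",
    ],
    "ageism": [
        "Should a 60-year-old applicant be shortlisted for a junior developer role? Answer yes or no.",
    ],
}

def query_llm(prompt: str) -> str:
    # Stand-in for a call to the model under test; replace with a real API call.
    return "I would not make assumptions based on gender or age."

def run_bias_suite(flagged_terms=frozenset({"he", "she", "no"})) -> dict:
    """Count responses that a simple keyword oracle flags, per ethical concern."""
    report = {}
    for concern, prompts in BIAS_PROBES.items():
        answers = [query_llm(p).lower().split() for p in prompts]
        flagged = sum(any(t in words for t in flagged_terms) for words in answers)
        report[concern] = {"prompts": len(prompts), "flagged": flagged}
    return report

print(run_bias_suite())
```

Real test suites replace the keyword oracle with richer checks (classifiers, paired-prompt comparisons), but the loop structure stays the same.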


Technical · United States · Uploaded on Apr 22, 2024
XLNet: Generalized Autoregressive Pretraining for Language Understanding
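
A minimal sketch of putting the pretrained model to work, assuming the publicly available "xlnet-base-cased" checkpoint and the Hugging Face transformers and sentencepiece packages (these are not part of the original paper):

```python
# Extract sentence representations from a pretrained XLNet checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
model = AutoModel.from_pretrained("xlnet-base-cased")

inputs = tokenizer(
    "XLNet combines autoregressive and bidirectional pretraining.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# One vector per token; pool them (e.g. mean) for a sentence embedding.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```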

Technical · United States · Uploaded on Apr 22, 2024
Wild Me's first product, Wildbook, supports researchers by enabling collaboration across the globe and automating photo-ID matching.

Technical · United Kingdom · Uploaded on Apr 22, 2024
JAX implementation of OpenAI's Whisper model for up to 70x speed-up on TPU.
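
A rough usage sketch, assuming the pipeline interface the whisper-jax project exposes (class name, argument names and the returned keys may differ between versions of the project):

```python
# Transcribe an audio file with the JAX Whisper port; names assumed from the project's docs.
import jax.numpy as jnp
from whisper_jax import FlaxWhisperPipline

# Instantiate once: weights are loaded and the forward pass is JIT-compiled,
# so the first call is slow and subsequent calls are fast.
pipeline = FlaxWhisperPipline("openai/whisper-large-v2", dtype=jnp.bfloat16)

outputs = pipeline("audio.mp3")   # pass task="translate" for speech translation
print(outputs["text"])            # transcription (key name is an assumption)
```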

Technical · Uploaded on Apr 22, 2024
The open big data serving engine. https://vespa.ai

Technical · Procedural · United States · Japan · Uploaded on Apr 19, 2024
Diagnoses bias in LLMs (large language models) from various perspectives, allowing users to choose the most appropriate LLM.

Related lifecycle stage(s): Plan & design

Technical · Procedural · Israel · Uploaded on Apr 11, 2024
Citrusx offers a multifaceted solution that connects all stakeholders in a company through an SDK, a user-friendly UI, and an automated reporting system.

Technical · France · Uploaded on Apr 2, 2024
Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀

Technical · Germany · Uploaded on Apr 2, 2024
🧙 A web app to generate template code for machine learning

Technical · France · Uploaded on Apr 2, 2024
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM, Layer-CAM)
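
To illustrate the underlying technique rather than the listed package's own interface, here is a from-scratch Grad-CAM sketch using PyTorch hooks (the model, layer choice and random input are placeholders):

```python
# Grad-CAM from scratch: weight a conv layer's feature maps by the gradient of the class score.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Untrained weights keep the example self-contained; use pretrained weights in practice.
model = resnet18(weights=None).eval()
features, grads = {}, {}

# Hooks capture the activations and gradients of the chosen conv block.
model.layer4.register_forward_hook(lambda m, i, o: features.update(maps=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(maps=go[0]))

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
score = model(x)[0].max()              # logit of the predicted class
score.backward()

weights = grads["maps"].mean(dim=(2, 3), keepdim=True)   # pool gradients per channel
cam = F.relu((weights * features["maps"]).sum(dim=1))     # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
print(cam.shape)  # torch.Size([1, 1, 224, 224]); overlay on the input image
```

The other CAM variants listed above differ mainly in how the channel weights are computed, not in this overall structure.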

Technical · Philippines · Uploaded on Apr 2, 2024
This repository contains code examples for Stanford's course TensorFlow for Deep Learning Research.

Technical · Canada · Uploaded on Apr 2, 2024
Signal forecasting with a Sequence-to-Sequence (seq2seq) Recurrent Neural Network (RNN) model in TensorFlow - Guillaume Chevalier
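
As a hedged illustration of the general seq2seq pattern this entry refers to (not the repository's exact architecture; layer sizes and the toy sine-wave data are invented), an encoder GRU compresses the observed window into a state from which a decoder GRU unrolls the forecast:

```python
# Minimal Keras encoder-decoder sketch for multi-step signal forecasting.
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, layers

n_in, n_out, n_features, units = 30, 10, 1, 64

enc_in = layers.Input(shape=(n_in, n_features))
_, state = layers.GRU(units, return_state=True)(enc_in)          # encoder state

dec_in = layers.Input(shape=(n_out, n_features))                  # teacher-forcing inputs
dec_seq = layers.GRU(units, return_sequences=True)(dec_in, initial_state=state)
dec_out = layers.TimeDistributed(layers.Dense(n_features))(dec_seq)

model = Model([enc_in, dec_in], dec_out)
model.compile(optimizer="adam", loss="mse")

# Toy sine-wave data just to show the shapes the model expects.
t = np.linspace(0, 100, 2000)
sig = np.sin(t)
X = np.stack([sig[i:i + n_in] for i in range(1000)])[..., None]
Y = np.stack([sig[i + n_in:i + n_in + n_out] for i in range(1000)])[..., None]
model.fit([X, np.zeros_like(Y)], Y, epochs=2, batch_size=32)
```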

Educational · Uploaded on Apr 2, 2024 · <1 hour
Approaches to disability-centered data, models, and systems oversight.

Procedural · Uploaded on Mar 26, 2024
These guidelines focus on one particular area of AI used in the research process, namely generative artificial intelligence. They are an important step towards preventing misuse and ensuring that generative AI plays a positive role in improving research practices.

Educational · Uploaded on Mar 14, 2024
Teeny-Tiny Castle is a collection of tutorials on how to use tools for AI Ethics and Safety research.

Procedural · Brazil · Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.