Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.


Technical | Uploaded on Dec 11, 2023
This work proposes a two-stage framework whose first stage produces debiased representations using a fairness-constrained adversarial approach.
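
The entry gives no implementation details, so the following is a minimal, illustrative PyTorch sketch of the general technique (fairness-constrained adversarial representation learning), not the authors' code; the model names, `lambda_fair`, and the synthetic data are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical first-stage setup: an encoder learns a representation from which
# an adversary tries to predict the protected attribute; training the encoder
# to fool the adversary pushes the representation toward being debiased.
torch.manual_seed(0)
X = torch.randn(256, 10)                   # synthetic features
y = torch.randint(0, 2, (256, 1)).float()  # task labels
a = torch.randint(0, 2, (256, 1)).float()  # protected attribute

encoder = nn.Sequential(nn.Linear(10, 8), nn.ReLU())
task_head = nn.Linear(8, 1)   # predicts y from the representation
adversary = nn.Linear(8, 1)   # predicts a from the representation

opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lambda_fair = 1.0             # hypothetical weight of the fairness constraint

for step in range(200):
    # (1) Train the adversary to recover the protected attribute.
    z = encoder(X).detach()
    adv_loss = bce(adversary(z), a)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # (2) Train encoder + task head: fit the task while fooling the adversary
    # (subtracting the adversary's loss maximizes it w.r.t. the encoder).
    z = encoder(X)
    main_loss = bce(task_head(z), y) - lambda_fair * bce(adversary(z), a)
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```

The min-max tension between the two optimizers is what strips protected-attribute information from the representation while preserving task signal.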

Technical | United Kingdom, Japan, Europe | Uploaded on Nov 10, 2023 | <1 hour
Intersectional Fairness (ISF) is a bias detection and mitigation technology developed by Fujitsu for intersectional bias, i.e., bias caused by combinations of multiple protected attributes. ISF is hosted as an open-source project by the Linux Foundation.
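
Purely to illustrate the problem ISF addresses (this is not ISF's API; see the Linux Foundation project for that), the following pandas sketch measures selection rates over intersections of two hypothetical protected attributes. A model can look fair on each attribute alone yet be unfair on their intersections.

```python
import itertools
import pandas as pd

# Synthetic predictions with two protected attributes.
df = pd.DataFrame({
    "sex":  ["F", "F", "M", "M", "F", "M", "F", "M"],
    "race": ["A", "B", "A", "B", "B", "A", "A", "B"],
    "pred": [1, 0, 1, 1, 0, 1, 1, 0],
})

overall_rate = df["pred"].mean()

# Selection rate for every combination of the protected attributes.
for sex, race in itertools.product(df["sex"].unique(), df["race"].unique()):
    group = df[(df["sex"] == sex) & (df["race"] == race)]
    if len(group) == 0:
        continue
    rate = group["pred"].mean()
    # Disparate impact of the intersectional subgroup vs. the overall rate.
    print(f"sex={sex}, race={race}: selection rate {rate:.2f}, "
          f"ratio vs. overall {rate / overall_rate:.2f}")
```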

Uploaded on Jun 8, 2023
Comparing fairness-aware machine learning techniques.

Uploaded on Jun 8, 2023
An open-source NLP research library, built on PyTorch.

Related lifecycle stage(s): Build & interpret model

Procedural | Uploaded on May 23, 2023
The items in this checklist are intended as a starting point for teams to customize. Not all items will apply to every AI system, and teams will likely need to add, revise, or remove items to fit their specific circumstances. Undertaking the items will not guarantee fairness; they are intended to prompt discussion and reflection, and most can be undertaken in multiple ways and to varying degrees.

Procedural | France | Uploaded on Mar 20, 2023
Fairness metric selection tool for AI applications.

Technical | United States | Uploaded on Feb 22, 2022

An operational framework to identify and mitigate bias at different stages of an industrial ML project workflow. SIFT enables an industrial ML team to define, document, and maintain its project's bias history, and guides the team via mechanized and human components to monitor fairness issues in all parts of the workflow. Longitudinally, SIFT lowers […]


Technical | United States | Uploaded on Sep 15, 2022

The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows.
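
LiFT itself exposes a Scala/Spark API; the following PySpark sketch is only a rough analogue of the kind of distributed group measurement such a workflow performs, with hypothetical column names, not LiFT's actual interface.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fairness-sketch").getOrCreate()

# Hypothetical scored dataset: one row per example, with a model prediction
# and a protected attribute column.
df = spark.createDataFrame(
    [("F", 1), ("F", 0), ("M", 1), ("M", 1), ("F", 0), ("M", 1)],
    ["gender", "prediction"],
)

# Positive-prediction rate per group, computed distributively by Spark.
rates = df.groupBy("gender").agg(F.avg("prediction").alias("positive_rate"))
rates.show()

# Demographic parity difference between the two groups.
r = {row["gender"]: row["positive_rate"] for row in rates.collect()}
print("demographic parity difference:", abs(r["F"] - r["M"]))
```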

Technical | Uploaded on Feb 22, 2022

This extensible open-source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. Containing over 70 fairness metrics and 10 state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into the actual practice of […]
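
This description matches IBM's AI Fairness 360 (AIF360). Assuming that identification is correct, here is a minimal sketch of a measure-then-mitigate workflow with its published Python API, on a tiny hypothetical dataset; verify details against the toolkit's own documentation.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical data: one binary label and one binary protected attribute.
df = pd.DataFrame({
    "feat":  [0.1, 0.9, 0.4, 0.8, 0.2, 0.7],
    "sex":   [0, 1, 0, 1, 0, 1],
    "label": [0, 1, 0, 1, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"]
)
priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# One of the toolkit's many fairness metrics: statistical parity difference.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=priv, unprivileged_groups=unpriv
)
print("before mitigation:", metric.statistical_parity_difference())

# One of its mitigation algorithms: reweighing the training data.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
transformed = rw.fit_transform(dataset)
print("after mitigation:", BinaryLabelDatasetMetric(
    transformed, privileged_groups=priv, unprivileged_groups=unpriv
).statistical_parity_difference())
```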

Technical | Brazil | Uploaded on Sep 9, 2022

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

Technical | United States | Uploaded on Sep 9, 2022

A curated list of awesome Fairness in AI resources.

Technical, Procedural | Uploaded on Feb 23, 2022

Aequitas is an open-source bias and fairness audit toolkit that fits intuitively and easily into the machine learning workflow, enabling users to test models against several bias and fairness metrics across multiple population sub-groups. Aequitas facilitates informed and equitable decisions around developing and deploying algorithmic decision making […]
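
To illustrate the kind of sub-group audit Aequitas automates, here is a plain-pandas sketch of a false-positive-rate disparity check with hypothetical columns; this is not Aequitas's own API, which is documented in its repository.

```python
import pandas as pd

# Hypothetical audit table: model score and ground-truth label per person,
# plus a protected attribute defining the population sub-groups.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "score": [1, 0, 1, 1, 1, 0, 1, 0],
    "label": [0, 0, 1, 0, 0, 0, 1, 1],
})

# False positive rate per sub-group: P(score = 1 | label = 0).
negatives = df[df["label"] == 0]
fpr = negatives.groupby("group")["score"].mean()

# Disparity of each group's FPR relative to a reference group, the kind of
# ratio an audit report flags when it drifts far from 1.0.
reference = fpr["A"]
print((fpr / reference).rename("fpr_disparity_vs_A"))
```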

Technical | Uploaded on Feb 22, 2022

Fairness Flow is a technical toolkit that enables teams to analyze how some types of AI models and labels perform across different groups. Fairness Flow is a diagnostic tool, so it cannot resolve fairness concerns on its own; that would require input from ethicists and other stakeholders, as well as context-specific research. However, Fairness […]

Technical | Uploaded on Sep 9, 2022
TensorFlow's Fairness Evaluation and Visualization Toolkit.

Technical | Netherlands | Uploaded on Sep 9, 2022

This repository contains the full code for the ‘Towards fairness in machine learning with adversarial networks’ blog post.

Technical | Uploaded on Sep 9, 2022

Modular Python Toolbox for Fairness, Accountability and Transparency Forensics


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.