These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Achieving Fair Speech Emotion Recognition via Perceptual Fairness
Type: Technical | Uploaded: Dec 11, 2023
Intersectional Fairness
Type: Technical | Origin: United Kingdom, Japan, Europe | Uploaded: Nov 10, 2023 | Time required: <1 hour
fairness-comparison
Uploaded: Jun 8, 2023
allennlp.fairness
Uploaded: Jun 8, 2023 | Lifecycle stage(s): Build & interpret model

AI Fairness Checklist
Type: Procedural | Uploaded: May 23, 2023
Fairness Compass
Type: Procedural | Origin: France | Uploaded: Mar 20, 2023
System to Integrate Fairness Transparently (SIFT): An Industry Approach
Type: Technical | Origin: United States | Uploaded: Feb 22, 2022
An operational framework to identify and mitigate bias at different stages of an industry ML project workflow. SIFT enables an industrial ML team to define, document, and maintain its project’s bias history, and guides the team via mechanized and human components to monitor fairness issues in all parts of the workflow. Longitudinally, SIFT lowers […]
Lifecycle stage(s): Operate & monitor, Deploy, Verify & validate, Build & interpret model, Collect & process data, Plan & design

The LinkedIn Fairness Toolkit (LiFT)
Type: Technical | Origin: United States | Uploaded: Sep 15, 2022
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning workflows.
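LiFT itself exposes a Scala/Spark API; as an illustration of the kind of measurement such toolkits automate, the following plain-Python sketch (not LiFT's actual API — the function names here are illustrative) computes the equal opportunity difference, i.e. the gap in true positive rates between two groups.

```python
def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives that the model predicts positive."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(p for _, p in positives) / len(positives)

def equal_opportunity_difference(y_true, y_pred, group):
    """TPR(group A) - TPR(group B) for a binary group attribute; 0 means parity."""
    a = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == "A"]
    b = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == "B"]
    tpr_a = true_positive_rate([t for t, _ in a], [p for _, p in a])
    tpr_b = true_positive_rate([t for t, _ in b], [p for _, p in b])
    return tpr_a - tpr_b

# Toy data: all of group A's positives are caught, only half of group B's.
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0]
group  = ["A", "A", "A", "B", "B", "B"]
print(equal_opportunity_difference(y_true, y_pred, group))  # 0.5
```

At production scale the same per-group aggregation would run as a distributed job over prediction logs rather than over Python lists.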
IBM AI Fairness 360
Type: Technical | Uploaded: Feb 22, 2022
This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. Containing over 70 fairness metrics and 10 state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into the actual practice of […]
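Two of the best-known metrics in this family are statistical parity difference and disparate impact. The sketch below implements them in plain Python from their definitions; it is not AIF360's API, just a minimal illustration of what such metrics compute.

```python
def positive_rate(y_pred, group, g):
    """P(yhat = 1 | group = g): share of group g receiving the favorable label."""
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(preds) / len(preds)

def statistical_parity_difference(y_pred, group, privileged, unprivileged):
    """P(yhat=1 | unprivileged) - P(yhat=1 | privileged); 0 means parity."""
    return (positive_rate(y_pred, group, unprivileged)
            - positive_rate(y_pred, group, privileged))

def disparate_impact(y_pred, group, privileged, unprivileged):
    """Ratio of selection rates; the '80% rule' flags values below 0.8."""
    return (positive_rate(y_pred, group, unprivileged)
            / positive_rate(y_pred, group, privileged))

# Toy data: group "m" is selected at rate 0.75, group "f" at rate 0.25.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = ["m", "m", "m", "m", "f", "f", "f", "f"]
print(statistical_parity_difference(y_pred, group, "m", "f"))  # -0.5
print(round(disparate_impact(y_pred, group, "m", "f"), 3))     # 0.333
```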
AI Fairness 360 (AIF360)
Type: Technical | Origin: Brazil | Uploaded: Sep 9, 2022
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
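One of the classic preprocessing mitigations in this space is reweighing (Kamiran & Calders), which assigns each (group, label) pair the weight P(group) · P(label) / P(group, label) so that, under the weights, group and label become statistically independent. A minimal stdlib-only sketch of that computation (not the toolkit's own interface):

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Kamiran-Calders reweighing: w(g, y) = P(g) * P(y) / P(g, y).
    Pairs over-represented relative to independence get weight < 1."""
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return {
        (g, y): (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for (g, y) in p_joint
    }

# Group "a" sees the favorable label (1) more often than group "b",
# so (a, 1) examples are down-weighted and (b, 1) examples up-weighted.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
w = reweighing_weights(groups, labels)
print(round(w[("a", 1)], 6))  # 0.75
print(round(w[("b", 1)], 6))  # 1.5
```

The weights would then be passed as per-sample weights to any learner that supports them.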
Resources on fairness
Type: Technical | Origin: United States | Uploaded: Sep 9, 2022
A curated list of awesome Fairness in AI resources.
Aequitas: Bias and Fairness Audit Toolkit
Type: Technical, Procedural | Uploaded: Feb 23, 2022
Aequitas is an open source bias and fairness audit toolkit that is an intuitive and easy to use addition to the machine learning workflow, enabling users to seamlessly test models for several bias and fairness metrics in relation to multiple population sub-groups. Aequitas facilitates informed and equitable decisions around developing and deploying algorithmic decision making […]
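The core of such an audit is computing an error metric per sub-group and comparing each group against a reference group. As a hedged illustration (plain Python, not Aequitas's API), the sketch below audits false positive rates and reports each group's disparity ratio versus a chosen reference:

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): how often actual negatives are flagged positive."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

def audit_fpr(y_true, y_pred, groups, reference):
    """Per-group FPR and its disparity ratio versus a reference group."""
    fpr = {}
    for g in set(groups):
        rows = [(t, p) for t, p, grp in zip(y_true, y_pred, groups) if grp == g]
        fpr[g] = false_positive_rate([t for t, _ in rows], [p for _, p in rows])
    ref = fpr[reference]
    return {g: (rate, rate / ref) for g, rate in fpr.items()}

# Toy data: group "y" is falsely flagged twice as often as reference group "x".
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
groups = ["x", "x", "x", "y", "y", "y", "x", "y"]
report = audit_fpr(y_true, y_pred, groups, reference="x")
print(report["y"])
```

A real audit would compute several such metrics (FPR, FDR, recall, selection rate) and flag any group whose disparity ratio falls outside a tolerated band.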
Fairness Flow
Type: Technical | Uploaded: Feb 22, 2022
Fairness Flow is a technical toolkit that enables teams to analyze how some types of AI models and labels perform across different groups. Fairness Flow is a diagnostic tool, so it can’t resolve fairness concerns on its own — that would require input from ethicists and other stakeholders, as well as context-specific research. However, Fairness […]
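The kind of per-group diagnostic such a tool produces can be sketched in a few lines. This is an assumption-laden illustration (Fairness Flow is internal to Meta and its interface is not public); it simply breaks accuracy and recall down by group for human review:

```python
def group_performance(y_true, y_pred, groups):
    """Accuracy and recall per group: the raw breakdown a diagnostic
    fairness tool would surface for stakeholders to interpret."""
    report = {}
    for g in sorted(set(groups)):
        rows = [(t, p) for t, p, grp in zip(y_true, y_pred, groups) if grp == g]
        correct = sum(1 for t, p in rows if t == p)
        positives = [(t, p) for t, p in rows if t == 1]
        recall = (sum(p for _, p in positives) / len(positives)) if positives else None
        report[g] = {"n": len(rows),
                     "accuracy": correct / len(rows),
                     "recall": recall}
    return report

# Toy data: the model is noticeably less accurate on group "g2".
y_true = [1, 0, 1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
groups = ["g1", "g1", "g1", "g1", "g2", "g2", "g2", "g2"]
for g, stats in group_performance(y_true, y_pred, groups).items():
    print(g, stats)
```

As the catalogue entry notes, the numbers alone resolve nothing; deciding whether a gap is acceptable requires context-specific judgment.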
fairness-indicators
Type: Technical | Uploaded: Sep 9, 2022
Fairness in Machine Learning
Type: Technical | Origin: Netherlands | Uploaded: Sep 9, 2022
This repository contains the full code for the ‘Towards fairness in machine learning with adversarial networks’ blog post.
FAT Forensics: Algorithmic Fairness, Accountability and Transparency Toolbox
Type: Technical | Uploaded: Sep 9, 2022
A modular Python toolbox for fairness, accountability and transparency forensics.
Bias Mitigation of predictive models using AI fairness 360 toolkit
Type: Technical | Uploaded: Sep 15, 2022
