These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
SUBMIT A TOOL
If you have a tool that you think should be featured in the Catalogue of Tools & Metrics for Trustworthy AI, we would love to hear from you!

Eticas Bias
Technical | United States | Uploaded on Mar 24, 2025

Unsupervised bias detection tool
Technical | Netherlands | Uploaded on Nov 29, 2023
Lifecycle stage(s): Verify & validate

Biaslyze - The NLP Bias Identification Toolkit
Technical, Procedural | Germany | Uploaded on Sep 7, 2023 | >1 year

PAM by Palqee
Technical | Uploaded on Aug 28, 2023 | >1 year

Black Box Auditing and Certifying and Removing Disparate Impact
Technical | Uploaded on May 23, 2023

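The disparate impact statistic that this class of tool audits and repairs can be sketched in a few lines. This is purely illustrative and is not the tool's own code or API; the hiring outcomes and group labels below are invented.

```python
# Illustrative sketch: the disparate impact ratio (the "four-fifths rule"
# statistic) compares positive-outcome rates between a protected group and
# a reference group. Not taken from any listed tool's implementation.

def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of positive-outcome rates: protected group vs. reference group."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Hypothetical hiring decisions (1 = hired) for two groups.
outcomes = [1, 0, 1, 0, 1, 1, 1, 0, 1, 1]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

# Ratios below 0.8 are commonly flagged under the four-fifths rule.
ratio = disparate_impact(outcomes, groups, protected="a", reference="b")
```

Repair techniques like the one named in this entry then transform the input features so that the ratio moves back toward 1.0 while preserving as much predictive signal as possible.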
CounterGen
Technical | Uploaded on Apr 20, 2023 | <1 day
Lifecycle stage(s): Operate & monitor; Verify & validate; Build & interpret model; Collect & process data

Unsupervised bias scan tool
Technical | Netherlands, Sweden | Uploaded on Mar 27, 2023 | >1 year

Holistic AI Bias Audits
Technical, Procedural | Uploaded on Mar 27, 2023 | <1 day
Lifecycle stage(s): Operate & monitor; Deploy; Verify & validate; Build & interpret model; Collect & process data; Plan & design

Holistic AI Audits
Technical, Procedural | Uploaded on Mar 27, 2023 | <1 day
Lifecycle stage(s): Operate & monitor; Deploy; Verify & validate; Build & interpret model; Collect & process data; Plan & design

GerryFair
Technical | United States | Uploaded on Sep 20, 2022
A library for fair auditing and learning of classifiers with respect to rich subgroup fairness.

Resources on fairness
Technical | United States | Uploaded on Sep 9, 2022
A curated list of awesome Fairness in AI resources.

FairVis
Technical | United States | Uploaded on Mar 17, 2022
FairVis is a visual analytics system that allows users to audit their classification models for intersectional bias. Users can generate subgroups of their data and investigate whether a model is underperforming for certain populations.

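The kind of intersectional audit FairVis visualizes can be approximated numerically: compute accuracy per subgroup formed by crossing protected attributes and flag subgroups that fall well below the overall rate. This is a minimal sketch under invented data, not FairVis itself.

```python
# Minimal sketch (not FairVis): per-subgroup accuracy over intersectional
# groups, flagging any subgroup more than 20 points below overall accuracy.
# All labels, predictions, and attributes below are invented.
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, attrs):
    """Accuracy per subgroup, where a subgroup is a tuple of attribute values."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, a in zip(y_true, y_pred, attrs):
        totals[a] += 1
        hits[a] += int(t == p)
    return {a: hits[a] / totals[a] for a in totals}

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
attrs  = [("f", "young"), ("f", "young"), ("f", "old"), ("m", "old"),
          ("m", "young"), ("f", "old"), ("m", "old"), ("f", "young")]

acc = subgroup_accuracy(y_true, y_pred, attrs)
overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
flagged = [g for g, a in acc.items() if a < overall - 0.2]
```

A visual tool like FairVis adds value on top of such numbers by letting users browse many candidate subgroups at once rather than inspecting one table.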
FairTest
Technical | United States | Uploaded on Mar 17, 2022
FairTest enables developers or auditing entities to discover and test for unwarranted associations between an algorithm’s outputs and certain user subpopulations identified by protected features. FairTest works by learning a special decision tree that splits a user population into smaller subgroups in which the association between protected features and algorithm outputs is maximized. FairTest supports and makes use of a variety of different fairness metrics, each […]

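The idea behind that subgroup search can be sketched without FairTest's actual tree learner: among candidate subgroups, find the one where the association between a protected feature and the output is strongest. This illustrates the concept only; the records and the simple rate-gap measure are assumptions, not FairTest's implementation or API.

```python
# Sketch of the idea behind FairTest's subgroup search (not its code): split
# the population on a non-protected feature and report the subgroup where
# the association between a protected feature ("sex") and the output is
# strongest, measured here as the gap in positive-output rates.

def rate_gap(records, protected_value):
    """|P(out=1 | protected) - P(out=1 | not protected)| within a subgroup."""
    pos = [r["out"] for r in records if r["sex"] == protected_value]
    neg = [r["out"] for r in records if r["sex"] != protected_value]
    if not pos or not neg:
        return 0.0
    return abs(sum(pos) / len(pos) - sum(neg) / len(neg))

# Invented records: output decisions keyed by sex and city.
records = [
    {"sex": "f", "city": "A", "out": 0}, {"sex": "m", "city": "A", "out": 1},
    {"sex": "f", "city": "A", "out": 0}, {"sex": "m", "city": "A", "out": 1},
    {"sex": "f", "city": "B", "out": 1}, {"sex": "m", "city": "B", "out": 1},
    {"sex": "f", "city": "B", "out": 1}, {"sex": "m", "city": "B", "out": 0},
]

# Split on "city" and pick the subgroup with the strongest association.
subgroups = {c: [r for r in records if r["city"] == c] for c in {"A", "B"}}
worst = max(subgroups, key=lambda c: rate_gap(subgroups[c], "f"))
```

FairTest's decision tree generalizes this one-feature split by recursively choosing whichever splits maximize the association, over the metrics the tool supports.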
Audit-AI (Bias Testing for Generalized Machine Learning Applications)
Technical | Uploaded on Feb 23, 2022
Open-sourced bias testing for generalized machine learning applications: audit-AI is a Python library, built on top of pandas and sklearn, that implements fairness-aware machine learning algorithms. audit-AI was developed by the Data Science team at pymetrics.

Aequitas: Bias and Fairness Audit Toolkit
Technical, Procedural | Uploaded on Feb 23, 2022
Aequitas is an open-source bias and fairness audit toolkit that is an intuitive and easy-to-use addition to the machine learning workflow, enabling users to seamlessly test models for several bias and fairness metrics in relation to multiple population subgroups. Aequitas facilitates informed and equitable decisions around developing and deploying algorithmic decision making […]
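The style of group-metric audit Aequitas produces can be sketched as a per-group error-rate table with disparities measured against a reference group. This is an illustrative sketch only, not the Aequitas API; the per-group labels and predictions are invented.

```python
# Illustrative sketch (not the Aequitas API): false positive rate per group,
# plus each group's FPR disparity relative to a chosen reference group.

def false_positive_rate(y_true, y_pred):
    """FP / actual negatives for one group's labels and predictions."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives

def fpr_disparity(data, reference):
    """Map each group to FPR(group) / FPR(reference)."""
    fpr = {g: false_positive_rate(t, p) for g, (t, p) in data.items()}
    return {g: fpr[g] / fpr[reference] for g in fpr}

# Per-group (y_true, y_pred) pairs for a hypothetical classifier.
data = {
    "group_a": ([0, 0, 1, 0], [1, 0, 1, 1]),   # FPR = 2/3
    "group_b": ([0, 0, 1, 1], [1, 0, 1, 0]),   # FPR = 1/2
}
disparity = fpr_disparity(data, reference="group_b")
```

An audit toolkit extends this pattern to many metrics at once (false negative rate, predicted positive rate, and so on) and reports which group-metric pairs fall outside a chosen fairness threshold.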