These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
SUBMIT A TOOL
If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!
Black Box Auditing and Certifying and Removing Disparate Impact
Technical | Uploaded on May 23, 2023
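As a point of reference for what this tool audits: disparate impact is commonly measured as the ratio of selection rates between the least- and most-favoured groups, flagged when it falls below the "80% rule" threshold. The sketch below is a minimal, generic illustration of that metric in pandas, with hypothetical data; it is not the tool's own API.

```python
# Illustrative sketch (not this tool's API): the disparate-impact ratio
# tested against the "80% rule". Data below is hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "outcome": [1,   1,   1,   0,   1,   0,   0,   0],  # 1 = favourable decision
})

# Selection rate per group, then the ratio of the lowest to the highest rate.
rates = df.groupby("group")["outcome"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)
print(f"Disparate impact: {disparate_impact:.2f}")  # flagged if below 0.8
```

Here group A is selected at 0.75 and group B at 0.25, giving a ratio of 0.33, well below the 0.8 threshold.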
Holistic AI Bias Audits
Technical | Uploaded on Mar 27, 2023 | <1 day
Related lifecycle stage(s): Operate & monitor, Deploy, Verify & validate, Build & interpret model, Collect & process data, Plan & design

Holistic AI Audits
Technical | Uploaded on Mar 27, 2023 | <1 day
Related lifecycle stage(s): Operate & monitor, Deploy, Verify & validate, Build & interpret model, Collect & process data, Plan & design

Audit-AI (Bias Testing for Generalized Machine Learning Applications)
Technical | Uploaded on Feb 23, 2022
audit-AI is an open-sourced Python library for bias testing in generalized machine learning applications. Built on top of pandas and sklearn, it implements fairness-aware machine learning algorithms and was developed by the Data Science team at pymetrics.
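Bias-testing libraries of this kind typically check whether pass rates differ between groups by a statistically significant margin. A minimal sketch of one such check, a two-proportion z-test implemented with the standard library only (hypothetical data, not audit-AI's actual interface):

```python
# Hedged sketch of a statistical bias check: a two-sided two-proportion
# z-test on pass rates. Not audit-AI's API; data is hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(pass_a, n_a, pass_b, n_b):
    """Return (z, two-sided p-value) for H0: equal pass rates."""
    p_pool = (pass_a + pass_b) / (n_a + n_b)          # pooled pass rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (pass_a / n_a - pass_b / n_b) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical screening data: group A passes 60/100, group B passes 40/100.
z, p = two_proportion_z_test(60, 100, 40, 100)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests a real disparity
```

With these numbers the test yields z ≈ 2.83 and p < 0.01, so the difference in pass rates would be flagged as statistically significant.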
Aequitas: Bias and Fairness Audit Toolkit
Technical | Uploaded on Feb 23, 2022
Aequitas is an open-source bias and fairness audit toolkit that is an intuitive, easy-to-use addition to the machine learning workflow, enabling users to seamlessly test models for several bias and fairness metrics across multiple population sub-groups. Aequitas facilitates informed and equitable decisions around developing and deploying algorithmic decision-making […]
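Group-level audit toolkits of this kind report error metrics per population sub-group and the disparity of each group relative to a reference group. The sketch below illustrates the idea with plain pandas (false positive rate by group, hypothetical data); it is a generic illustration, not Aequitas's own API.

```python
# Generic sketch (not Aequitas's API) of a per-subgroup audit metric:
# false positive rate by group, plus disparity vs. a reference group.
import pandas as pd

# Hypothetical scored data: label = ground truth, score = model decision.
df = pd.DataFrame({
    "group": ["A"] * 4 + ["B"] * 4,
    "label": [0, 0, 1, 1, 0, 0, 1, 1],
    "score": [1, 0, 1, 1, 1, 1, 1, 0],
})

# Among true negatives, the mean decision is FP / (FP + TN), i.e. the FPR.
negatives = df[df["label"] == 0]
fpr = negatives.groupby("group")["score"].mean()
disparity = fpr / fpr["A"]  # ratio relative to reference group A

print(fpr)
print(disparity)
```

In this toy example group A has an FPR of 0.5 and group B of 1.0, so group B's FPR disparity relative to the reference group is 2.0.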