These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Black Box Auditing and Certifying and Removing Disparate Impact
This repository contains a sample implementation of Gradient Feature Auditing (GFA) meant to be generalizable to most datasets. For more information on the repair process, see our paper on Certifying and Removing Disparate Impact. For information on the full auditing process, see our paper on Auditing Black-box Models for Indirect Influence.
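The disparate impact measure from the Certifying and Removing Disparate Impact paper compares the rate of favourable outcomes received by the protected group with the rate for everyone else; a ratio below 0.8 (the "80% rule") is taken as evidence of disparate impact. The sketch below illustrates that calculation in plain Python with NumPy. It is a minimal, hypothetical example assuming binary outcomes and a binary protected attribute; the function name and toy data are illustrative and are not part of the BlackBoxAuditing repository's API.

```python
import numpy as np

def disparate_impact_ratio(outcomes, protected):
    """Ratio of favourable-outcome rates: protected group vs. everyone else.

    outcomes  : array-like of 0/1 decisions (1 = favourable outcome)
    protected : array-like of 0/1 group membership (1 = protected group)

    Illustrative helper only -- the name and signature are not taken from
    the BlackBoxAuditing package.
    """
    outcomes = np.asarray(outcomes)
    protected = np.asarray(protected)
    rate_protected = outcomes[protected == 1].mean()
    rate_other = outcomes[protected == 0].mean()
    return rate_protected / rate_other

# Toy data: 40% favourable rate in the protected group vs. 80% elsewhere.
outcomes  = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1])
protected = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])

ratio = disparate_impact_ratio(outcomes, protected)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 -- below the 0.8 threshold
```

The repair procedure described in the paper goes further: it adjusts feature values within each protected group toward a common (rank-preserving) distribution so that the protected attribute can no longer be predicted from the repaired data.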
Use Cases
There are no use cases for this tool yet.