The metric is grounded in "Equality of Opportunity in Supervised Learning" (Hardt, Price, and Srebro, 2016); the following is from the paper's abstract.

We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy.
In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
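To make the post-processing step concrete, here is a minimal sketch, assuming a real-valued score, binary labels, and a binary (or small categorical) protected attribute; all function and variable names are ours. The paper derives an optimal, possibly randomized threshold rule per group via a small linear program; this simplification merely grid-searches a deterministic per-group threshold whose true positive rate is closest to a common target, so it equalizes opportunity only approximately.

    import numpy as np

    def tpr_at_threshold(scores, labels, t):
        # True positive rate of the rule "predict 1 if score >= t".
        positives = labels == 1
        return np.mean(scores[positives] >= t)

    def per_group_thresholds(scores, labels, groups, target_tpr=0.8):
        # Grid-search one threshold per group so that each group's TPR is
        # as close as possible to target_tpr. (A deterministic
        # simplification: the paper's method instead mixes thresholds per
        # group to hit the target rate exactly.)
        thresholds = {}
        for g in np.unique(groups):
            m = groups == g
            grid = np.unique(scores[m])
            gaps = [abs(tpr_at_threshold(scores[m], labels[m], t) - target_tpr)
                    for t in grid]
            thresholds[g] = grid[int(np.argmin(gaps))]
        return thresholds

Each individual is then classified with their own group's threshold. Because every group is held to roughly the same TPR, the cost of a noisy or poorly calibrated score shows up as lost overall accuracy for the decision maker rather than as depressed acceptance rates for the disadvantaged group, which is the incentive shift described above.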
Trustworthy AI Relevance
This metric addresses Fairness and Robustness. Fairness: the equal opportunity difference (EOD) quantifies whether different demographic groups receive equal access to positive outcomes by comparing true positive rates (TPR) across groups. Measuring this gap identifies discriminatory performance differences and informs both mitigation (re-weighting, constrained optimization, post-processing) and certification efforts. Robustness: EOD also reflects how consistently and reliably the system performs across subpopulations.
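As an illustration, the gap can be computed directly from hard predictions; a minimal sketch, assuming binary labels, binary predictions, and a binary protected attribute (the function and toy data below are ours, not from any particular library):

    import numpy as np

    def equal_opportunity_difference(y_true, y_pred, group):
        # TPR gap between the two groups: TPR(group 1) - TPR(group 0).
        # Zero means qualified members of both groups are equally likely
        # to receive the positive prediction.
        tprs = []
        for g in (0, 1):
            qualified = (group == g) & (y_true == 1)
            tprs.append(np.mean(y_pred[qualified]))
        return tprs[1] - tprs[0]

    # Toy data: qualified members of group 1 are approved far less often.
    y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
    y_pred = np.array([1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0])
    group  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
    print(equal_opportunity_difference(y_true, y_pred, group))  # -0.5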
Related use cases:
Estimating and Improving Fairness with Adversarial Learning
Uploaded on Oct 25, 2022. Fairness and accountability are two essential pillars for trustworthy Artificial Intelligence (AI) in healthcare. However, the existing AI model may be biased in its decision m...
About the metric
Objective(s): Fairness, Robustness