The demographic disparity metric (DD) determines whether a facet has a larger proportion of the rejected outcomes in the dataset than of the accepted outcomes. In the binary case, where two facets (men and women, for example) constitute the dataset, the disfavored one is labelled facet d and the favored one is labelled facet a. For example, in the case of college admissions, if women applicants comprised 46% of the rejected applicants but only 32% of the accepted applicants, we say that there is demographic disparity because women make up a larger share of the rejected pool than of the accepted pool. Women applicants are labelled facet d in this case. If men applicants comprised 54% of the rejected applicants and 68% of the accepted applicants, then there is no demographic disparity for this facet, because men make up a smaller share of the rejected pool than of the accepted pool. Men applicants are labelled facet a in this case.
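To make the arithmetic concrete, here is a minimal sketch of the DD computation for a facet: its share of the rejected pool minus its share of the accepted pool. The function name and the illustrative pool sizes (100 rejected and 100 accepted applicants) are assumptions added for this example, not part of the metric's definition.

```python
def demographic_disparity(n_d_rejected, n_rejected, n_d_accepted, n_accepted):
    """DD for a facet: its share of the rejected pool minus its share of
    the accepted pool. Positive values indicate disparity against the facet."""
    return n_d_rejected / n_rejected - n_d_accepted / n_accepted

# College-admissions example from the text, with assumed pools of 100 each:
dd_women = demographic_disparity(46, 100, 32, 100)  # 0.46 - 0.32 = 0.14  -> facet d
dd_men = demographic_disparity(54, 100, 68, 100)    # 0.54 - 0.68 = -0.14 -> facet a
print(dd_women, dd_men)
```

A positive DD flags the disfavored facet (facet d), a negative DD the favored one (facet a), and values near zero indicate that the facet appears in the rejected and accepted pools in similar proportions.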
Trustworthy AI Relevance
This metric addresses Fairness and Data Governance & Traceability. DD directly supports Fairness because it quantifies group-level disparities between a facet's share of rejected and accepted outcomes; its conditional extension, conditional demographic disparity (CDD), additionally controls for relevant covariates (e.g., qualifications, risk factors), which helps detect residual discrimination that the unconditional metric may miss. The metric supports Data Governance & Traceability because implementing it requires well-documented facet and outcome definitions, provenance of any conditioning variables, and reproducible audit procedures, making the measurement part of an auditable fairness governance process.
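To illustrate the conditioning step, the sketch below computes CDD as the size-weighted average of per-stratum DD values (each stratum defined by a conditioning attribute such as department). The record layout, the field names ("facet", "accepted"), and the strata_key parameter are assumptions made for illustration, not a prescribed interface.

```python
from collections import defaultdict

def demographic_disparity(n_d_rejected, n_rejected, n_d_accepted, n_accepted):
    # DD for facet d within one group of observations.
    return n_d_rejected / n_rejected - n_d_accepted / n_accepted

def conditional_demographic_disparity(records, facet_d, strata_key):
    """CDD: per-stratum DD values averaged with weights proportional to
    stratum size. `records` is a list of dicts with 'facet', 'accepted'
    (bool) and the conditioning attribute named by `strata_key`."""
    strata = defaultdict(list)
    for r in records:
        strata[r[strata_key]].append(r)
    total = len(records)
    cdd = 0.0
    for group in strata.values():
        rejected = [r for r in group if not r["accepted"]]
        accepted = [r for r in group if r["accepted"]]
        if not rejected or not accepted:
            continue  # DD is undefined when a stratum has no rejections or no acceptances
        dd = demographic_disparity(
            sum(r["facet"] == facet_d for r in rejected), len(rejected),
            sum(r["facet"] == facet_d for r in accepted), len(accepted),
        )
        cdd += (len(group) / total) * dd
    return cdd
```

Because DD is recomputed within each stratum before averaging, an apparent dataset-wide disparity that is explained by how applicants distribute across strata (the classic admissions example of Simpson's paradox) yields a much smaller CDD, while disparity that persists within strata remains visible.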
About the metric
Objective(s):
Purpose(s):
Lifecycle stage(s):
Risk management stage(s):