In the field of health, equal patient outcomes refers to the assurance that protected groups benefit as much as other groups, in terms of patient outcomes, from the deployment of machine-learning models. A weak form, equal benefit, requires that protected and non-protected groups benefit similarly from a model; a stronger form, equalized outcomes, requires that both groups benefit and that any pre-existing outcome disparity is reduced. Ensuring equal outcomes is the most critical aspect of fairness and can be advanced by interventions proactively designed to reduce disparities (34, 35). It may be hard to know in advance, however, whether a well-intentioned general, non-tailored intervention, whether a quality-improvement project or a machine-learning system, will disproportionately harm or benefit a protected group. Besides equal outcomes, other dimensions of health equity can also be analyzed and addressed prospectively.
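The distinction between the weak and strong forms can be made concrete in code. The following is a minimal sketch, not a definitive implementation: it assumes binary outcome labels (1 = patient benefited), and the function names (`group_outcome_rates`, `equal_benefit_gap`) and the simulated pre-/post-deployment data are purely illustrative.

```python
import numpy as np

def group_outcome_rates(outcomes, group):
    """Mean outcome rate (e.g., share of patients who benefited) per group."""
    outcomes, group = np.asarray(outcomes), np.asarray(group)
    return {g: outcomes[group == g].mean() for g in np.unique(group)}

def equal_benefit_gap(outcomes, group):
    """Largest between-group difference in benefit rates (0 = equal benefit)."""
    rates = group_outcome_rates(outcomes, group)
    return max(rates.values()) - min(rates.values())

# Hypothetical outcomes before and after deploying a model, two groups of 50
group  = np.array(["protected"] * 50 + ["non-protected"] * 50)
before = np.random.binomial(1, [0.55] * 50 + [0.70] * 50)
after  = np.random.binomial(1, [0.66] * 50 + [0.74] * 50)

gap_before = equal_benefit_gap(before, group)
gap_after  = equal_benefit_gap(after, group)
print(f"benefit gap before: {gap_before:.2f}, after: {gap_after:.2f}")

# Equalized outcomes (the stronger form): every group's outcomes improve
# AND the disparity between groups shrinks.
rates_before = group_outcome_rates(before, group)
rates_after  = group_outcome_rates(after, group)
both_improve = all(rates_after[g] >= rates_before[g] for g in rates_before)
print("equalized outcomes satisfied:", both_improve and gap_after < gap_before)
```

Under this sketch, equal benefit only compares groups after deployment, while equalized outcomes additionally compares against the pre-deployment baseline, which is why it is the stronger criterion.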
Trustworthy AI Relevance
This metric addresses Fairness and Human Agency & Control by quantifying relevant system properties. Fairness: equal outcomes directly measures whether an AI system produces equivalent results (decisions, benefits, harms, or error rates) across defined demographic or protected groups. It maps onto canonical fairness objectives such as demographic parity and equalized odds, and it is used to detect and mitigate discriminatory outcomes. Human Agency & Control: ensuring equal outcomes protects affected individuals' autonomy and life opportunities by preventing systematic disadvantage; measuring and enforcing outcome parity supports users' ability to rely on AI-enabled decisions and preserves societal trust and oversight.
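To make the mapping to the canonical objectives named above concrete, here is a hedged sketch of the two standard gap measures, implemented directly from their textbook definitions rather than from any particular library: the demographic parity difference (gap in positive-prediction rates between groups) and the equalized odds difference (largest gap in true- or false-positive rates). The random data at the end is illustrative only.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap in positive-prediction rates between groups (0 = parity)."""
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, sensitive):
    """Largest between-group gap in TPR or FPR (0 = equalized odds)."""
    gaps = []
    for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
        rates = [y_pred[(sensitive == g) & (y_true == label)].mean()
                 for g in np.unique(sensitive)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Illustrative binary labels, predictions, and a binary sensitive attribute
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, 200)
y_true = rng.integers(0, 2, 200)
y_pred = rng.integers(0, 2, 200)
print("demographic parity difference:",
      demographic_parity_difference(y_pred, sensitive))
print("equalized odds difference:",
      equalized_odds_difference(y_true, y_pred, sensitive))
```

A value of 0 on either measure indicates parity between groups on that criterion; the equal outcomes metric described in this entry generalizes the same comparison from predictions to realized patient outcomes.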
About the metric
Objective(s): Fairness; Human Agency & Control
Lifecycle stage(s):
Risk management stage(s):