Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Mitigating Bias in Artificial Intelligence

AI bias is an unintentional, underlying prejudice in data and algorithms that results in unfair and unwanted discrimination. At a time when bias in AI has become one of the most hotly debated topics, this playbook by The Center for Equity, Gender and Leadership at the Haas School of Business (University of California, Berkeley) outlines top-line information on bias in AI and the steps that can be taken to address it. The playbook's end objective is to mitigate bias in AI and unlock its value in a responsible and equitable manner.

The playbook opens by stating a striking fact: AI could contribute $15.7 trillion to the global economy by 2030. However, it quickly adds that bias in AI can restrict this potential, as biased systems allocate opportunities unfairly and produce results that are discriminatory and inaccurate. These results can negatively impact a person's well-being, which in turn can cost businesses reputation and trust.

The need to understand and mitigate bias in AI is being driven by a range of stakeholders, including academia, governments, multilateral institutions and NGOs. The first step, however, is understanding the reasons behind bias. AI algorithms are biased mainly because they are created by humans, who are themselves biased, albeit often unwittingly. The creators of such algorithms may fail to integrate fair and ethical values into the system, which ultimately affects the end product. Other causes include inadequate methods of data collection, generation and labelling.

Having given a brief overview of the causes of bias and its impact on society and businesses, the playbook turns to the challenges often faced in mitigating bias. These challenges are categorized at the organizational, industry-wide and societal levels, and include limitations such as lack of domain knowledge and accountability, lack of regulations and actionable guidance, the persistence of black-box algorithms, and outdated education approaches for data scientists.


Use Cases

There are no use cases for this tool yet.

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.