Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Unsupervised bias detection tool

The unsupervised bias detection tool aims to discover complex and hidden forms of bias. It is specifically geared towards detecting unforeseen and higher-dimensional forms of bias. Aside from unfair bias with respect to established protected groups, such as gender, sexual orientation and race, bias can also occur with respect to non-established and unexpected groups of people. These forms of bias are more difficult for humans to detect, especially when the unfairly treated group is defined by a high-dimensional combination of features. Because the tool is based on an unsupervised clustering method, it can detect these complex forms of bias. It thereby tackles the difficult problem of detecting proxy discrimination that stems from unforeseen and higher-dimensional forms of bias, including intersectional forms of discrimination.
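
A minimal sketch of this idea is given below: it clusters the feature space with k-means and flags clusters whose error rate differs markedly from the rest of the data. The choice of k-means, the misclassification-rate metric, and the detect_biased_clusters and min_gap names are illustrative assumptions for this sketch, not the tool's exact implementation.

# Sketch of clustering-based bias detection (assumptions: k-means clustering
# and misclassification rate as the performance metric; the actual tool may
# use a different clustering method and metric).
import numpy as np
from sklearn.cluster import KMeans

def detect_biased_clusters(features, y_true, y_pred, n_clusters=8, min_gap=0.05):
    """Cluster the feature space and flag clusters whose error rate deviates
    from the error rate of the remaining data by more than min_gap."""
    # per-sample error indicator (1.0 where the model was wrong)
    errors = (np.asarray(y_true) != np.asarray(y_pred)).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

    flagged = []
    for c in range(n_clusters):
        in_cluster = labels == c
        # difference between the cluster's error rate and that of the remaining data
        gap = errors[in_cluster].mean() - errors[~in_cluster].mean()
        if abs(gap) >= min_gap:
            flagged.append({"cluster": c, "size": int(in_cluster.sum()), "error_gap": float(gap)})
    # largest absolute gaps first: these clusters are candidates for expert review
    return sorted(flagged, key=lambda d: -abs(d["error_gap"]))

Clusters flagged this way are only statistical candidates for unfair treatment; as described below, they are then handed to subject-matter experts for deliberative assessment.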

Informed by the quantitative results of the bias detection tool, subject-matter experts carry out a deliberative, evaluative assessment. This assessment provides a deeper understanding and evaluation of the detected bias, and is essential for a balanced, normative judgement of the bias and its potential social impact. Factual observation of quantitative discrepancies does not by itself establish discrimination or unfair bias; such discrepancies can only serve as a starting point for a qualitative evaluation of possible unfair treatment, one that takes into account the particular social context and the relevant legal doctrines. Human interpretation is therefore required to establish which statistical disparities do indeed qualify as unfair bias.

The diversity of the commission of experts contributes to a context-sensitive and multi-perspectival evaluation of the particular use case under investigation. This expert-led, deliberative approach is commonly used by NGO Algorithm Audit to provide ethical guidance on issues that arise in concrete algorithmic use cases. The results of deliberative assessments are published transparently, enabling policy makers, journalists, data subjects and other stakeholders to review the normative judgements issued by Algorithm Audit's audit commissions, and thereby contributing to public knowledge building on the responsible use of AI.

Use Cases

Higher-dimensional bias in a BERT-based disinformation classifier

Abstract: A normative advice commission, consisting of 6 AI experts from various disciplines, reviewed the results of the bias detection tool (BDT) applied to a self-trained BERT-based disinformation classifier on the Twitter1516 dataset. The commissio...
Mar 15, 2024

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.