Unsupervised bias detection tool
The unsupervised bias detection tool aims to discover complex and hidden forms of bias. It is specifically geared towards detecting unforeseen and higher-dimensional forms of bias. Aside from unfair bias with respect to established protected groups, such as gender, sexual orientation and race, bias can also occur with respect to non-established and unexpected groups of people. These forms of bias are more difficult for humans to detect, especially when the unfairly treated group is defined by a high-dimensional combination of features. Our bias detection tool is based on an unsupervised clustering method, which makes it capable of detecting these complex forms of bias. It thereby tackles the difficult problem of detecting proxy discrimination that stems from unforeseen and higher-dimensional forms of bias, including intersectional forms of discrimination.
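The tool's actual pipeline is more involved, but the general cluster-then-compare idea can be illustrated in a few lines. The snippet below is a minimal sketch only, not the tool's implementation: it assumes a binary classifier, uses plain k-means clustering and the false-positive rate as the metric that may deviate per cluster, and all function and variable names (e.g. flag_deviating_clusters) are hypothetical.

```python
# Illustrative sketch: cluster samples in feature space and flag clusters whose
# false-positive rate deviates from the overall rate by more than a threshold.
import numpy as np
from sklearn.cluster import KMeans

def flag_deviating_clusters(X, y_true, y_pred, n_clusters=5, threshold=0.10):
    """Cluster samples on their features and flag clusters whose
    false-positive rate deviates from the overall rate by more than `threshold`."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

    def fpr(idx):
        # False-positive rate among the true negatives in the selected samples.
        negatives = (y_true[idx] == 0)
        if negatives.sum() == 0:
            return np.nan
        return (y_pred[idx][negatives] == 1).mean()

    overall = fpr(np.arange(len(y_true)))
    flagged = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        rate = fpr(idx)
        if not np.isnan(rate) and abs(rate - overall) > threshold:
            flagged.append((c, rate, overall))
    return labels, flagged

# Example usage on synthetic data (purely illustrative):
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
_, flagged = flag_deviating_clusters(X, y_true, y_pred)
print(flagged)  # clusters whose false-positive rate deviates from the overall rate
```

In practice, flagged clusters are only a quantitative starting point; as described below, they are handed to subject-matter experts for deliberative assessment.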
Informed by the quantitative results of our bias detection tool, subject-matter experts carry out a deliberative, evaluative assessment. This assessment provides a deeper understanding and evaluation of the detected bias, which is essential for a balanced, normative judgement of the bias and its potential social impact. Factual observation of quantitative discrepancies does not by itself establish discrimination or unfair bias. Instead, quantitative discrepancies can only serve as a starting point for a qualitative evaluation of possible unfair treatment, one that takes into account the particular social context and the relevant legal doctrines. Human interpretation is therefore required to establish which statistical disparities do indeed qualify as unfair bias. The diversity of our commission of experts contributes to a context-sensitive and multi-perspectival evaluation of the particular use case under investigation. This expert-led, deliberative approach is commonly used by NGO Algorithm Audit to provide ethical guidance on issues that arise in concrete algorithmic use cases. The results of deliberative assessments are published transparently. We thereby enable policy makers, journalists, data subjects and other stakeholders to review the normative judgements issued by the audit commissions of Algorithm Audit, contributing to public knowledge-building on the responsible use of AI.
Use Cases
Higher-dimensional bias in a BERT-based disinformation classifier