These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
The Bias Assessment Metrics and Measures Repository
Currently, no repository helps practitioners efficiently determine the appropriate metrics or measures for assessing or evaluating bias in an AI project or AI system. For instance, metrics and measures for audio data used in a recommendation engine are not easily determined for the different components of an AI system or the different stages of the development process.
The Bias Assessment Metrics and Measures project aims to compile a comprehensive repository of metrics, measures, and thresholds that AI practitioners can use to assess bias across the stages of the AI development lifecycle. Bias in AI systems can lead to unjust and discriminatory outcomes, making it imperative to have a repository of relevant metrics and measures to assess and address bias effectively. The project covers a wide range of data types, model types, process stages, and project components to provide a holistic approach to bias assessment.
You can access the database here: https://shorturl.at/lKNV8
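As an illustration of the kind of metrics, measures, and thresholds such a repository catalogues, the sketch below computes two widely used group-fairness metrics for binary classification: the demographic parity difference and the disparate impact ratio (often checked against the 0.8 "four-fifths rule" threshold). This is a minimal sketch for illustration only; the function names, data, and threshold are assumptions of this example, not fields taken from the repository itself.

```python
# Illustrative sketch of two common group-fairness metrics; not taken from the repository.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-prediction rates (lower rate divided by higher rate)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Example: binary predictions for ten individuals split across two groups.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # ~0.2
print(disparate_impact_ratio(y_pred, group))         # ~0.67, below the assumed 0.8 threshold
```

In practice, which metric and threshold are appropriate depends on the data type, model type, and lifecycle stage, which is exactly the mapping the repository is intended to provide.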
About the tool
Developing organisation(s):
Tool type(s):
Objective(s):
Impacted stakeholders:
Target sector(s):
Country of origin:
Lifecycle stage(s):
Type of approach:
Maturity:
Usage rights:
License:
Target groups:
Target users:
Stakeholder group:
Geographical scope:
Required skills:
Tags:
- bias mitigation