These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Eticas Bias
This framework is designed to evaluate AI systems comprehensively across all lifecycle stages. At its core, it compares privileged and underprivileged groups, ensuring a fair evaluation of model behavior.
This framework, with its wide range of metrics, focuses on bias monitoring. It offers a thorough view of fairness, allowing for comprehensive reporting even without relying on true labels. The only restriction on measuring bias in production concerns performance metrics, as they are directly tied to true labels.
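As an illustration of measuring bias without true labels, the sketch below computes a disparate impact ratio from model outputs and a protected attribute alone. This is a minimal sketch, not the Eticas API; the column names, group labels, and the 0.8 rule of thumb in the comment are assumptions made for the example.

```python
# Minimal sketch (not the Eticas API): bias measurement that needs no true labels.
# Disparate impact compares positive-outcome rates between an underprivileged
# group and a privileged group, so model outputs and group membership suffice.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     privileged: str, underprivileged: str) -> float:
    """Ratio of positive-outcome rates: underprivileged / privileged."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[underprivileged] / rates[privileged]

# Hypothetical production data: model decisions plus a protected attribute.
production = pd.DataFrame({
    "sex":      ["F", "F", "M", "M", "F", "M", "M", "F"],
    "approved": [1,   0,   1,   1,   0,   1,   1,   0],
})
di = disparate_impact(production, "sex", "approved",
                      privileged="M", underprivileged="F")
print(f"Disparate impact: {di:.2f}")  # ratios below ~0.8 are commonly flagged
```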
The stages considered are the following; a sketch comparing them follows the list:
- The dataset used to train the model.
- The dataset used in production.
- A dataset containing the system’s final decisions, which may include human intervention or another model.
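These three stages can be compared on the same fairness measure to see where disparities appear or widen across the lifecycle. The sketch below reuses the hypothetical disparate_impact helper from above; the datasets, column names, and group labels are made up for illustration.

```python
# Hypothetical data for each lifecycle stage; "outcome" holds the training label,
# the model's output in production, or the final (possibly human-reviewed) decision.
import pandas as pd

stages = {
    "training":   pd.DataFrame({"sex": ["F", "M", "F", "M"], "outcome": [1, 1, 0, 1]}),
    "production": pd.DataFrame({"sex": ["F", "M", "F", "M"], "outcome": [0, 1, 0, 1]}),
    "impact":     pd.DataFrame({"sex": ["F", "M", "F", "M"], "outcome": [0, 1, 1, 1]}),
}

for name, df in stages.items():
    # disparate_impact is the helper sketched above.
    di = disparate_impact(df, "sex", "outcome", privileged="M", underprivileged="F")
    print(f"{name:>10}: disparate impact = {di:.2f}")
```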
The metric groups are the following:
- Demographic Benchmarking Monitoring: Perform in-depth analysis of population distribution.
- Model Fairness Monitoring: Ensure equality and detect equity issues in decision-making.
- Features Distribution Evaluation: Analyze correlations, causality, and variable importance.
- Performance Analysis: Metrics such as accuracy and recall to assess model performance.
- Model Drift Monitoring: Detect and measure changes in data distributions and model behavior over time (see the sketch after this list).
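As an example of the drift-monitoring idea, the sketch below computes a Population Stability Index (PSI) between a training-time score distribution and a shifted production distribution. This is a generic illustration, not the Eticas implementation; the bin count, epsilon, and the 0.2 threshold mentioned in the comment are conventional assumptions.

```python
# Minimal drift sketch: Population Stability Index (PSI) between a baseline
# (training) sample and a current (production) sample of the same variable.
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI over histogram bins derived from the baseline sample."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    eps = 1e-6  # avoids division by zero and log(0) in empty bins
    base_pct = base_counts / base_counts.sum() + eps
    curr_pct = curr_counts / curr_counts.sum() + eps
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 5_000)  # scores observed at training time
prod_scores = rng.normal(0.3, 1.0, 5_000)   # production scores have drifted upward
psi = population_stability_index(train_scores, prod_scores)
print(f"PSI = {psi:.3f}")  # values above ~0.2 are commonly read as significant drift
```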
About the tool
Tags:
- ai auditing
- fairness
- bias