These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Trubrics
Trubrics enables organisations to bring business teams into the ML lifecycle, with a solution centred on four key pillars:
- Feedback: Collect qualitative and quantitative feedback on your ML datasets and models, directly from your users and business teams.
- Measure: Measure ML adoption rates and identify issues.
- Validate: Build ML validations by combining domain-expert feedback with data science knowledge.
- Resolve: Track, audit and manage feedback and validations throughout your ML projects.
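The four pillars above form a feedback-and-validation loop that can be sketched in a few lines of Python. This is an illustrative sketch only, not the actual Trubrics API; all class and method names here (`Feedback`, `FeedbackStore`, `collect`, `adoption_rate`, `validate`) are hypothetical assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a feedback-and-validation loop.
# NOT the Trubrics API -- every name below is an illustrative assumption.

@dataclass
class Feedback:
    user: str           # who gave the feedback (e.g. a business analyst)
    prediction_id: str  # which model prediction it refers to
    score: int          # quantitative signal: 1 = useful, 0 = not useful
    comment: str = ""   # qualitative signal

@dataclass
class FeedbackStore:
    records: list = field(default_factory=list)

    def collect(self, fb: Feedback) -> None:
        """Pillar 1 (Feedback): record user / business-team feedback."""
        self.records.append(fb)

    def adoption_rate(self) -> float:
        """Pillar 2 (Measure): share of positive feedback collected."""
        if not self.records:
            return 0.0
        return sum(r.score for r in self.records) / len(self.records)

    def validate(self, threshold: float = 0.8) -> bool:
        """Pillar 3 (Validate): combine feedback with a domain-set threshold."""
        return self.adoption_rate() >= threshold

store = FeedbackStore()
store.collect(Feedback("analyst_1", "pred_42", 1, "Looks right"))
store.collect(Feedback("analyst_2", "pred_43", 0, "Wrong customer segment"))
print(store.adoption_rate())  # 0.5
print(store.validate())       # False: below the 0.8 threshold
```

Pillar 4 (Resolve) would correspond to persisting and auditing these records over the project's lifetime; in a real deployment the store would be backed by a database rather than an in-memory list.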
About the tool
Tags:
- human-ai
- ai ethics
- ai incidents
- ai responsible
- biases testing
- building trust with ai
- collaborative governance
- demonstrating trustworthy ai
- documentation
- evaluate
- evaluation
- model
- model cards
- responsible ai collaborative
- transparent
- trustworthy ai
- validation of ai model
- ai assessment
- python
- ai governance
- ai reliability
- ai auditing
- machine learning testing