Catalogue of Tools & Metrics for Trustworthy AI
These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
trail
trail automates manual and time-intensive tasks of ML development to free up time for more projects. Generate automated documentation of code, models and data to increase knowledge sharing, reproducibility and compliance. Track experiments and store all artifacts in a central and accessible place.
trail integrates with a few lines of code into your favorite development environment and works for any model and data type.
Ship to production with confidence, knowing your model performs the way you intend through integrated tests and quality checks. trail identifies compliance gaps in your development, recommends suitable actions and translates them into code.
This enables your data science team to develop more and better trustworthy AI, and makes you ready for AI audits and upcoming regulation.
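The experiment-tracking workflow described above (log parameters, metrics and artifacts for each run, then store the record centrally) can be sketched generically. The `start_run` / `Run` API below is a hypothetical stand-in to illustrate the pattern, not trail's actual interface:

```python
import json
from contextlib import contextmanager

# Hypothetical, minimal stand-in for an experiment tracker; trail's real
# API may differ. A Run collects parameters, metrics and artifact paths
# so they can be serialized and stored in one central, accessible place.
class Run:
    def __init__(self, name):
        self.name = name
        self.params = {}
        self.metrics = {}
        self.artifacts = []

    def log_param(self, key, value):
        self.params[key] = value

    def log_metric(self, key, value):
        self.metrics[key] = value

    def log_artifact(self, path):
        self.artifacts.append(path)

    def to_json(self):
        # Serialize the run record for sharing and later audits.
        return json.dumps({
            "name": self.name,
            "params": self.params,
            "metrics": self.metrics,
            "artifacts": self.artifacts,
        })

@contextmanager
def start_run(name):
    run = Run(name)
    yield run  # user training code logs into the run here
    run.to_json()  # a real tracker would upload this record

# A few lines wrapped around existing training code:
with start_run("baseline-model") as run:
    run.log_param("learning_rate", 0.01)
    run.log_metric("accuracy", 0.93)
    run.log_artifact("models/baseline.pkl")
```

The context-manager shape is what "integrates with a few lines of code" typically looks like in practice: existing training code stays unchanged inside the `with` block, and the tracker captures the run's record on exit.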
About the tool
Tags:
- bias testing
- collaborative governance
- data documentation
- documentation
- ethical charter
- ai governance
- ai auditing
- fairness
- transparency
- auditability
- auditing
Use Cases
Would you like to submit a use case for this tool?
If you have used this tool, we would love to know more about your experience.