These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Holistic AI Open Source Library
The Holistic AI library is an open-source tool to assess and improve the trustworthiness of AI systems. The current version of the library offers a set of techniques to easily measure and mitigate bias across a variety of tasks. The library continues to evolve, and techniques for measuring and mitigating AI risks across further technical risk verticals will be added in future releases.
The library is provided as a Python module that can easily be downloaded and installed. It is supported by documentation that details each method contained in the library, and by Jupyter notebooks that guide the user through example implementations.
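To illustrate the kind of bias metric the library exposes, here is a minimal self-contained sketch of disparate impact, one of the standard group-fairness measures. This is not the library's own code; the function name and group-vector convention here are assumptions for illustration, implemented with plain NumPy.

```python
import numpy as np

def disparate_impact(group_a, group_b, y_pred):
    """Ratio of positive-prediction rates: group_a (unprivileged) over
    group_b (privileged). A value near 1.0 indicates parity; the common
    'four-fifths rule' flags values below 0.8 as potentially biased.
    Note: illustrative sketch, not the Holistic AI library's implementation."""
    rate_a = y_pred[group_a == 1].mean()  # positive rate in group A
    rate_b = y_pred[group_b == 1].mean()  # positive rate in group B
    return rate_a / rate_b

# Toy binary predictions for 8 individuals, split into two groups
y_pred  = np.array([1, 0, 1, 1, 0, 1, 1, 0])
group_a = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # membership mask, group A
group_b = 1 - group_a                          # membership mask, group B

print(disparate_impact(group_a, group_b, y_pred))  # → 1.5
```

In this toy example, group A receives positive predictions at a rate of 0.75 and group B at 0.5, giving a ratio of 1.5; the library's metrics follow the same pattern of comparing outcome rates across protected groups.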
About the tool
Tags:
- ai ethics
- ai responsible
- ai risks
- biases testing
- demonstrating trustworthy ai
- digital ethics
- trustworthy ai
- bias