Giskard
Giskard is an open-source software platform dedicated to improving the quality of AI. By minimizing AI errors such as ethical biases and security risks, Giskard makes it possible to build higher-quality AI models that can be deployed to production. Our platform has an intuitive UI and fosters a collaborative approach that enables technical and business roles to work together seamlessly.
Our platform is compatible with any model, from prototype to production, and can be used directly in your native environment. It provides a Python API that works with models built in any framework, including PyTorch, TensorFlow, Hugging Face Transformers, scikit-learn, and others, and it can be installed on your own server or used as a cloud service.
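As a hedged sketch of what using the Python API might look like, the example below wraps a small scikit-learn classifier and its data so Giskard can work with them. The toy data, column names, and the `predict_proba` helper are this example's own assumptions; `giskard.Model` and `giskard.Dataset` follow the library's documented 2.x interface, but exact signatures may differ between versions.

```python
# Minimal sketch of wrapping a model and dataset for Giskard (assumes the
# giskard 2.x Python package, installed with `pip install giskard`).
# The classifier, data, and column names below are illustrative placeholders.
import giskard
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Toy data: two numeric features and a binary churn target.
df = pd.DataFrame({
    "age":    [22, 35, 47, 58, 29, 41],
    "income": [28000, 52000, 61000, 75000, 33000, 48000],
    "churn":  [1, 0, 0, 0, 1, 0],
})
features = ["age", "income"]
clf = RandomForestClassifier(random_state=0).fit(df[features], df["churn"])

# Prediction function: takes a pandas DataFrame, returns class probabilities.
def predict_proba(data: pd.DataFrame):
    return clf.predict_proba(data[features])

# Wrap the model and the evaluation data so Giskard can inspect and test them.
wrapped_model = giskard.Model(
    model=predict_proba,
    model_type="classification",
    classification_labels=[0, 1],
    feature_names=features,
)
wrapped_data = giskard.Dataset(df, target="churn", name="churn-sample")
```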
We advocate for a collaborative approach in three steps:
- Inspect: review an AI model and receive feedback from business stakeholders on sensitive cases.
- Test: convert the feedback into executable tests for a secure deployment (a code sketch of this workflow follows the list).
- Monitor: receive actionable alerts in the event of AI model errors during production.
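Continuing the sketch above, the Inspect and Test steps might be automated as follows; `giskard.scan` and `generate_test_suite` are the documented entry points for issue detection and test generation, though the report contents will vary by model and library version.

```python
# Inspect: scan the wrapped model for issues such as performance bias,
# robustness problems, and ethical concerns on the wrapped dataset.
scan_report = giskard.scan(wrapped_model, wrapped_data)
print(scan_report)  # human-readable summary of the detected issues

# Test: turn the detected issues into an executable test suite that can be
# re-run before every deployment.
test_suite = scan_report.generate_test_suite("Churn model quality suite")
suite_results = test_suite.run()
print(suite_results)  # pass/fail status for each generated test
```

The generated suite can then be re-executed before each release, which is how feedback collected during inspection becomes a gate for secure deployment.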
Our platform helps organizations create fair and trustworthy models, which is crucial for instilling confidence and trust in AI. Giskard also provides practical solutions for delivering robust, higher-quality models to customers.
We advocate for responsible AI that supports companies' business performance while respecting citizens' rights.
About the tool
Tags:
- ai ethics
- ai incidents
- ai responsible
- ai risks
- ai reliability
- fairness
- mlops
- ai quality
- performance