These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
IEEE CertifAIEd
The IEEE CertifAIEd™ Program offers a risk-based framework supported by a suite of AI ethical criteria that can be contextualized to fit an organization's needs, helping it deliver a more trustworthy experience for its users.
The IEEE CertifAIEd Ontological Specifications for Ethical Privacy, Algorithmic Bias, Transparency, and Accountability provide an introduction to the program's AI ethics criteria.
About the tool
Use Cases

Auditing a City of Vienna AI solution with a trustworthy AI framework