Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

TrustWorks - AI Governance module

Through its AI Governance module, TrustWorks supports responsible AI adoption and compliance. Organisations can streamline the registration and classification of AI systems (including Shadow AI) and implement continuous risk assessment and mitigation to address vulnerabilities throughout the AI system's lifecycle. The tool also helps organisations meet and exceed transparency and reporting requirements and adhere to the latest governance frameworks.

  • Instant map of AI use cases: Identify and document all AI usage (systems, machine learning models and vendors) across the organisation in real time to meet and exceed transparency requirements.
  • AI risk classification: Classify AI systems according to the risk framework proposed by the AI Act to understand the applicable regulatory requirements.
  • Streamlined assessments: Assess conformity for high-risk AI systems and meet all reporting compliance requirements with purpose-designed AI Act templates.
  • AI risk and incident management: Implement continuous risk assessment and mitigation to safeguard AI systems. Ensure safety, compliance, risk management and monitoring across all risk categories, with a focus on high-risk AI.
  • AI adoption and audit log: Assess and track the implementation of AI use cases, prioritising those with the highest business value. Build audit workflows and checklists with an easy-to-use drag-and-drop builder.

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.