Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

OneTrust AI Governance



OneTrust AI Governance acts as a unified program center for AI initiatives. By automating continuous governance across the AI lifecycle, OneTrust empowers enterprises to adopt and deploy AI with confidence—balancing innovation with accountability and maximizing AI’s business impact.

How OneTrust AI Governance works:

  1. Build your inventory: Manage AI projects, track proprietary and open-source models, govern training datasets, and gain visibility into your organization's AI usage and development through standardized intake mechanisms, project management integrations, and AI/ML scanning.
  2. Evaluate AI risk: Facilitate trust throughout AI system onboarding, development, and delivery. Streamline risk-informed decision-making across compliance, risk management, and technology stakeholders, ensuring alignment with global laws, standards, and organizational policies within a secure, centralized platform.
  3. Monitor AI systems: Promote collaboration among AI governance committees and technical stakeholders. Control and monitor AI/ML technologies within a centralized program command center that captures insights, detects and addresses deviations from intended AI use, and enforces policies across your AI ecosystem.
  4. Demonstrate trust: Build trust and provide transparency to key stakeholders—including customers, board members, employees, and regulators—with actionable insights, dynamic dashboards, and model cards.

 

OneTrust AI Governance sits within the broader OneTrust platform, which empowers users to collect, govern, and use data with complete visibility and control. The platform helps customers streamline risk management, enforce compliance, and optimize data strategies for innovation, all while meeting regulatory and customer demands.

 

About the tool

Lifecycle stage(s):

Type of approach:

Stakeholder group:

Tags:

  • building trust with ai
  • data governance
  • demonstrating trustworthy ai
  • ai assessment
  • ai governance
  • ai auditing
  • transparency
  • ai risk management
  • ai compliance
  • ai register
  • risk register
  • regulation compliance


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.