Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Credo AI Responsible AI Governance Platform


The Credo AI Responsible AI Governance Platform's capabilities are designed to help organizations ensure responsible AI development and use throughout the entire AI value chain. The Credo AI Platform makes it easy for organizations to assess their AI systems for risks related to fairness, performance, transparency, security, privacy, and more, and to produce standardized transparency artifacts for AI/ML systems, including reports and documentation, for internal AI governance reviews, external compliance requirements, independent audits, or specialized customer requests.

One of the primary reasons many organizations struggle to implement RAI governance at scale is, first and foremost, the burden that governance activities place on AI/ML development teams. Running technical assessments that meet compliance requirements in areas like algorithmic fairness, security, and privacy takes a technical team a significant amount of time. Time spent generating artifacts for governance takes away from a technical team's ability to build new ML models, slowing down an organization's innovation cycle. Without the right tools to facilitate quick and easy RAI assessment and to generate technical artifacts that meet governance needs, data science teams are often reluctant to adopt new governance processes. Without these stakeholders' buy-in, AI governance programs are much less likely to succeed.

Another key reason why it is difficult for organizations to stand up comprehensive AI governance programs at scale is standardization—or rather, the lack thereof. In many organizations, AI development teams are individually responsible for Responsible AI assessment and reporting, which results in a highly fragmented approach that makes it difficult to compare AI risk and compliance across different projects, and align them to business objectives. AI governance teams struggle to scale their activities across all of the AI/ML applications in development and use, because they need to “start governance from scratch” with every new AI use case under review.

The Responsible AI Governance Platform solves these two problems by enabling standardized, programmatic Responsible AI assessment and automated reporting based on Policy Packs. 

Credo AI Policy Packs encode regulations, laws, standards, guidelines, best practices, and an individual company's proprietary policies into standardized assessment requirements and report templates, making it easy for AI/ML development teams to produce the evidence and reports needed for AI governance. Our out-of-the-box Policy Packs provide everything a company needs to run technical assessments and generate reports that comply with emerging reporting requirements. Whether a company is focused on complying with New York City's algorithmic hiring law (Local Law 144) or the EU AI Act, Credo AI Policy Packs are the building blocks it needs to achieve compliance without burdening its technical teams.
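To make the idea of "encoding a regulation into a standardized assessment requirement" concrete, here is a minimal, hypothetical sketch of the kind of fairness metric such a requirement might mandate. NYC Local Law 144 requires bias audits to report impact ratios (each group's selection rate divided by the highest group's selection rate); the function names and sample data below are illustrative assumptions, not Credo AI's actual API.

```python
# Hypothetical sketch of an impact-ratio check of the sort a policy pack
# might standardize for an NYC LL-144 bias audit. Not Credo AI's API.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 selection decisions."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Illustrative data: 0/1 hiring decisions per demographic group.
decisions = {
    "group_a": [1, 1, 0, 1, 0],  # 3/5 selected
    "group_b": [1, 0, 0, 0, 1],  # 2/5 selected
}
print(impact_ratios(decisions))
```

A governance platform's value-add over such an ad hoc script is that every team computes the same metric the same way, so results are comparable across projects and reportable in a standard template.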

The Responsible AI Governance Platform provides organizations with a complete toolset to streamline and standardize Responsible AI assessment and reporting across all of their AI/ML systems.

About the tool



Tags:

  • ai ethics
  • ai risks
  • biases testing
  • build trust
  • building trust with ai
  • data governance
  • demonstrating trustworthy ai
  • digital ethics
  • documentation
  • gpai
  • metrics
  • transparent
  • trustworthy ai
  • validation of ai model
  • chatgpt


Use Cases

Credo AI Governance Platform: Reinsurance provider Algorithmic Bias Assessment and Reporting


A global provider of reinsurance used Credo AI’s platform to produce standardised algorithmic bias reports to meet new regulatory requirements and customer requests. The team needed a way to streamline and standardise its AI risk and compliance assess...
Jun 5, 2024

Credo AI Policy Packs: Human Resources Startup compliance with NYC LL-144


In December 2021, the New York City Council passed Local Law No. 144 (LL-144), mandating that AI and algorithm-based technologies used for recruiting, hiring, or promotion be audited for bias before being used. The law also requires employers to ann...
Jun 5, 2024

Credo AI Transparency Reports: Facial Recognition application


A facial recognition service provider required trustworthy Responsible AI fairness reporting to meet critical customer demand for transparency. The service provider used Credo AI’s platform to provide transparency on fairness and performance evaluati...
Jun 5, 2024


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.