Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.


SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of Tools & Metrics for Trustworthy AI, we would love to hear from you!

Technical · Uploaded on Oct 26, 2023
Monitaur is the premier AI governance solution for the insurance industry that helps companies use AI systems that businesses, regulators, and consumers can trust. The company delivers software and expertise that help insurers and their partners define, manage, and automate fundamental best practices throughout the modeling project lifecycle. With Monitaur, companies can accelerate innovation with clarity and confidence in the transparency, performance, fairness, safety, and compliance of their AI systems.

Procedural · Mexico, Spain, Chile, Colombia, Uruguay · Uploaded on Oct 19, 2023
The Algorithmic Transparency Certification for Artificial Intelligence Systems, by Adigital, is a robust framework for ensuring AI systems operate with paramount transparency, explainability and accountability. Grounded in universal ethical principles, it assesses AI systems on various critical factors, preparing organizations for evolving regulations like the EU AI Act, enhancing societal trust, and fostering competitive market advantage. This certification embodies a dynamic tool for continuous improvement, driving AI innovation with a solid foundation of responsibility and ethical consideration.

Educational · United Kingdom · Uploaded on Oct 6, 2023
Provides the regulatory framework for incorporating rights, freedoms, and obligations relevant to work and people's experience of it, including technology-specific guidance.

Technical, Procedural · United States · Uploaded on Oct 3, 2023
Comprehensive AI governance and compliance platform focused on empowering organizations with the risk management capabilities they need to adopt AI swiftly and responsibly.

Technical, Procedural · United States · Uploaded on Oct 2, 2023
Automate, simplify, and streamline your end-to-end AI risk management process.

Technical · Belgium · Uploaded on Jun 16, 2023
Justifai is an AI platform that enables business users to build trustworthy AI solutions quickly, cost-effectively, and with minimal compliance risk.

Procedural · Uploaded on May 4, 2023
A Responsible AI License agreement developed by the BigCode research project. The agreement is specifically designed for sharing machine learning models on a royalty-free basis while setting specific use restrictions that promote responsible use of the model.

Related lifecycle stage(s): Deploy

Technical, Educational, Procedural · Uploaded on Apr 17, 2023
AI assurance as smart as your AI systems

Educational, Procedural · Uploaded on Apr 13, 2023
This app matches information in model cards to proposed regulatory compliance descriptions in the EU AI Act. This is a prototype to explore the feasibility of automatic checks for compliance, and is limited to specific provisions of Article 13 of the Act, “Transparency and provision of information to users”.

Related lifecycle stage(s): Deploy
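The matching idea behind the app described above can be sketched as a simple vocabulary-overlap check between model card sections and regulatory requirement descriptions. This is a hypothetical illustration, not the app's actual implementation; the section names, requirement texts, and threshold below are all assumptions.

```python
# Hypothetical sketch: match model card sections to EU AI Act
# requirement descriptions by vocabulary overlap. All field names
# and requirement texts are illustrative, not from the actual app.

def tokenize(text):
    """Lowercase and split text into a set of word tokens."""
    return set(word.strip(".,;:()").lower() for word in text.split())

def match_score(card_section, requirement):
    """Jaccard similarity between a card section and a requirement."""
    a, b = tokenize(card_section), tokenize(requirement)
    return len(a & b) / len(a | b) if a | b else 0.0

def check_compliance(model_card, requirements, threshold=0.1):
    """Map each requirement to the best-matching model card section,
    or to None when no section clears the similarity threshold."""
    report = {}
    for req_id, req_text in requirements.items():
        best = max(model_card, key=lambda sec: match_score(model_card[sec], req_text))
        score = match_score(model_card[best], req_text)
        report[req_id] = (best if score >= threshold else None, score)
    return report

# Illustrative inputs (assumed, not taken from any real model card or the Act)
model_card = {
    "intended_use": "intended use and users of the model",
    "limitations": "known limitations and accuracy of the model",
}
requirements = {
    "art13_intended_purpose": "information on the intended purpose and users",
}
report = check_compliance(model_card, requirements)
```

A production checker would need semantic matching (e.g. sentence embeddings) rather than raw token overlap, but the structure of the check — pair each provision with the best-covering documentation section and flag gaps — stays the same.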

Technical, Educational · Uploaded on Apr 12, 2023
An AI tool that allows you to understand complex civic issues by listening to the perceptions and concerns of millions of citizens in Latin America and the Caribbean in real time.

Procedural · Uploaded on Apr 5, 2023
FRAIA assesses the risks that algorithms pose to human rights and promotes measures to address them. It fosters dialogue among the professionals involved in developing or deploying an algorithm. The client is accountable for implementing FRAIA to prevent uncertain outcomes of algorithm use. FRAIA mitigates the risks of carelessness, ineffectiveness, and violations of citizens' rights.

Educational, Procedural · Uploaded on Mar 30, 2023
The framework proposes a self-regulated approach that incorporates certain tools (or instruments) to facilitate ethical and responsible AI development, drawing on insights from the EU Regulation on Artificial Intelligence (AI Act).

Technical, Procedural · Uploaded on Mar 30, 2023
GRACE is an AI governance platform that offers a central registry for all AI models, tools, and workflows for risk mitigation, compliance, and collaboration across AI teams and projects. It provides real-time compliance, transparency, and trust while promoting AI innovation. The platform works across all major cloud vendors and offers out-of-the-box frameworks for complying with the EU AI Act, AI standards, data regulations, and AI frameworks.

Technical · Uploaded on Mar 27, 2023
The AI Governance Copilot enabling trustworthy and compliant AI solutions

Technical · Uploaded on Mar 27, 2023
All-in-one Trustworthy AI Platform

Technical, Educational, Procedural · Uploaded on Apr 4, 2023
The Human-Ai Paradigm for Ethics, Conduct and Risk (HAiPECR)

Related lifecycle stage(s): Verify & validate

Technical, Educational, Procedural · Uploaded on Apr 4, 2023
Enterprise platform for governance, risk, and compliance in AI, supporting regulatory compliance with global laws that intersect with AI.

Procedural · Uploaded on Mar 27, 2023
Standard Bidding Terms for algorithms and artificial intelligence with ethical requirements

Technical, Procedural · Chile · Uploaded on Mar 27, 2023
This guide is a valuable support for planning, formulating, and developing decision-making or decision-support systems based on artificial intelligence or algorithmic models in the public sector, helping prevent discriminatory bias, lack of transparency, and misuse of personal data, among other potential problems.

Procedural · Uploaded on Mar 20, 2023
This playbook is intended as a practical tool to help organisations consider how to ethically design, develop and deploy artificial intelligence (AI) systems.

Partnership on AI

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.