Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!

Objective: Accountability

Technical | Educational | Uploaded on Apr 25, 2024
The AI Incident Database is a free and open-source project dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems.

Related lifecycle stage(s): Operate & monitor
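
The database publishes incident records in structured form. As a flavor of how such records might be pulled and filtered programmatically, here is a minimal sketch assuming a hypothetical JSON endpoint; the project's actual API and schema may differ (see incidentdatabase.ai):

```python
# Minimal sketch of querying an incident index. BASE_URL and the
# record fields ("title", "incident_id") are hypothetical assumptions;
# consult the AI Incident Database documentation for the real API.
import requests

BASE_URL = "https://example.org/api/incidents"  # hypothetical endpoint

def incidents_mentioning(keyword: str) -> list[dict]:
    """Fetch incident records and keep those whose title mentions the keyword."""
    response = requests.get(BASE_URL, timeout=30)
    response.raise_for_status()
    return [
        record
        for record in response.json()
        if keyword.lower() in record.get("title", "").lower()
    ]

for incident in incidents_mentioning("facial recognition"):
    print(incident.get("incident_id"), incident.get("title"))
```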

Technical | Uploaded on Apr 18, 2024
A tool for scraping local business data from Google Maps.

Procedural | Uploaded on Mar 26, 2024
These guidelines focus on one particular area of AI used in the research process: generative artificial intelligence. They are an important step toward preventing misuse and ensuring that generative AI plays a positive role in improving research practices.

Educational | Uploaded on Mar 14, 2024
Teeny-Tiny Castle is a collection of tutorials on how to use tools for AI Ethics and Safety research.
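
To give a flavor of the kind of tooling such tutorials typically cover (this example is illustrative and not taken from the repository itself), here is a short demonstration of measuring a demographic parity gap with the fairlearn library:

```python
# Illustrative fairness-metric example, not from Teeny-Tiny Castle:
# the demographic parity difference is the gap in selection rates
# between the best- and worst-treated groups (0.0 means parity).
from fairlearn.metrics import demographic_parity_difference

# Toy data: true labels, model predictions, and a sensitive attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.50 vs 0.25 -> 0.25
```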

Procedural | Brazil | Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).

Procedural | United Kingdom | Uploaded on Feb 20, 2024
This Responsible AI Governance framework (2024) provides Boards with signposts to high-level areas of responsibility and accountability through a checklist of twelve principles that could sit at the heart of an AI governance policy. It is for Boards that wish to start their AI journey, or for those who recognize that AI governance may be mission-critical given the emergence of generative AI (Gen AI) and large language models (LLMs).

Related lifecycle stage(s): Operate & monitor, Deploy, Plan & design

Technical | Procedural | Germany | Uploaded on Feb 19, 2024
Casebase is a platform for portfolio management of data analytics & AI use cases. It supports companies in systematically developing their ideas in the field of artificial intelligence, documenting the development process and managing their data & AI roadmap. Particularly in the context of AI governance and the EU AI Act, Casebase helps to manage the risks of AI systems over their entire life cycle.

Procedural | Singapore | Uploaded on Jan 24, 2024
This Model AI Governance Framework for Generative AI seeks to set forth a systematic and balanced approach to addressing generative AI concerns while continuing to facilitate innovation.

Technical | Educational | Procedural | United States | Uploaded on Jan 17, 2024
This document provides risk-management practices and controls for identifying, analyzing, and mitigating the risks of large language models and other general-purpose AI systems (GPAIS) and foundation models. It facilitates conformity with, and use of, leading AI risk-management standards, adapting and building on the generic voluntary guidance in the NIST AI Risk Management Framework and ISO/IEC 23894, with a focus on the unique issues faced by developers of GPAIS.

Related lifecycle stage(s): Operate & monitor, Deploy, Plan & design
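
As a hedged illustration of how such controls could be tracked in practice (the structure below is a sketch of the author's own devising, not taken from the document), a risk register might link each identified risk to a mitigating control and the guidance it maps to:

```python
# Sketch of a minimal risk-register entry for a GPAIS developer. The
# fields and example values are illustrative assumptions, not taken
# from the document or from any named standard.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk: str                   # identified risk
    severity: str               # e.g. "low", "medium", "high"
    control: str                # mitigating practice or control
    maps_to: list[str] = field(default_factory=list)  # related guidance

register = [
    RiskEntry(
        risk="Model can be prompted to produce harmful instructions",
        severity="high",
        control="Red-team evaluation gate before each release",
        maps_to=["NIST AI RMF: Measure", "ISO/IEC 23894"],
    ),
]

for entry in register:
    print(f"[{entry.severity}] {entry.risk} -> {entry.control}")
```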

Uploaded on Dec 14, 2023
Our work enables developers and policymakers to anticipate, measure, and address discrimination as language model capabilities and applications continue to expand.

Related lifecycle stage(s): Operate & monitor, Deploy, Plan & design
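
A minimal sketch of the basic idea behind such measurement: vary only the demographic attribute mentioned in an otherwise identical prompt and compare favorable-decision rates. The template, groups, and model stub below are hypothetical, and the published methodology is considerably more involved:

```python
# Sketch of measuring outcome disparities across demographic groups.
# query_model is a random placeholder for a real model call; replace
# it with an actual API call to reproduce the idea in earnest.
import random

TEMPLATE = "Should the bank approve a small loan for a {group} applicant with stable income?"
GROUPS = ["younger", "older"]  # hypothetical demographic descriptors

def query_model(prompt: str) -> bool:
    """Placeholder model call returning True for a favorable decision."""
    return random.random() < 0.5

def favorable_rate(group: str, n_samples: int = 100) -> float:
    """Fraction of sampled decisions that are favorable for this group."""
    decisions = [query_model(TEMPLATE.format(group=group)) for _ in range(n_samples)]
    return sum(decisions) / len(decisions)

rates = {g: favorable_rate(g) for g in GROUPS}
print(rates, "max disparity:", max(rates.values()) - min(rates.values()))
```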

Procedural | Canada | Uploaded on Nov 14, 2023
In undertaking this voluntary commitment, developers and managers of advanced generative systems commit to working toward outcomes aligned with the OECD AI Principles.

Educational | Procedural | Switzerland | Germany | Uploaded on Oct 31, 2023
How can we ensure the trustworthy use of automated decision-making systems (ADMS) in public administration? AlgorithmWatch has developed a concrete and practicable impact assessment tool for ADMS in the public sector. This publication provides a ready-to-implement framework for the evaluation of specific ADMS by public authorities at different levels.

Related lifecycle stage(s): Operate & monitor, Deploy, Plan & design

Procedural | Uploaded on Oct 26, 2023
The Playbook provides suggested actions for achieving the outcomes laid out in the AI Risk Management Framework (AI RMF) Core (Tables 1–4 in AI RMF 1.0). Suggestions are aligned to each subcategory within the four AI RMF functions (Govern, Map, Measure, Manage).
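
The Core's regular structure lends itself to simple tooling. Here is a sketch of representing functions and subcategories programmatically so that Playbook suggestions can be looked up by subcategory ID (the outcome texts below are paraphrased; consult AI RMF 1.0 for the authoritative wording):

```python
# Sketch of the AI RMF Core's shape: four functions, each containing
# numbered subcategories that Playbook suggestions align to. Outcome
# texts are paraphrased, not the authoritative AI RMF 1.0 wording.
RMF_CORE = {
    "Govern": {"GOVERN 1.1": "Legal and regulatory requirements are understood and managed."},
    "Map": {"MAP 1.1": "Intended purposes and context of use are documented."},
    "Measure": {"MEASURE 1.1": "Appropriate methods and metrics are identified and applied."},
    "Manage": {"MANAGE 1.1": "AI risks are prioritized and responded to based on assessments."},
}

def outcome_for(subcategory_id: str) -> str:
    """Return the outcome text a Playbook suggestion is aligned to."""
    for function, subcategories in RMF_CORE.items():
        if subcategory_id in subcategories:
            return f"{function}: {subcategories[subcategory_id]}"
    raise KeyError(subcategory_id)

print(outcome_for("MAP 1.1"))
```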

Procedural | Uploaded on Oct 26, 2023
The AIRC supports all AI actors in the development and deployment of trustworthy and responsible AI technologies. It supports and operationalizes the NIST AI Risk Management Framework (AI RMF 1.0) and its accompanying Playbook, and will grow with enhancements to enable an interactive, role-based experience providing access to a wide range of relevant AI resources.

Procedural | Uploaded on Oct 26, 2023
The goal of the AI RMF is to offer organizations designing, developing, deploying, or using AI systems a resource to help them manage the many risks of AI and promote the trustworthy and responsible development and use of AI systems.

Technical | Uploaded on Oct 26, 2023
Monitaur is model governance software that enables organizations to build repeatable patterns and requirements for successful model development, and to align those policies, templates, and applications with regulatory requirements. As a governance solution for the full model development lifecycle, it provides interfaces for both development teams and risk management teams, serving as a system of record that delivers transparency, alignment, and better execution of AI investments.

Related lifecycle stage(s): Operate & monitor, Deploy, Verify & validate

Procedural | Uploaded on Oct 19, 2023
The Algorithmic Transparency Certification for Artificial Intelligence Systems, by Adigital, is a framework for ensuring that AI systems operate with transparency, explainability, and accountability. Grounded in universal ethical principles, it assesses AI systems on a range of critical factors, preparing organizations for evolving regulations such as the EU AI Act, enhancing societal trust, and fostering competitive advantage. The certification is intended as a dynamic tool for continuous improvement, driving AI innovation on a solid foundation of responsibility and ethical consideration.

Procedural | United Kingdom | Uploaded on Oct 6, 2023
This report proposes a model for Equality Impact Assessment of AI tools. It builds on research finding that existing fairness and bias auditing solutions are inadequate for ensuring compliance with UK equalities legislation.

Educational | United Kingdom | Uploaded on Oct 6, 2023
A toolkit for employers and workers seeking to understand the challenges and opportunities of using algorithmic systems that make or inform decisions about workers.

Procedural | Uploaded on Oct 6, 2023
Guidance aimed at encouraging employee-led design and development of algorithmic systems in the workplace.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.