Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Submit a tool

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!

Objective: Human wellbeing

Technical · Procedural · United States · Japan · Uploaded on Apr 19, 2024
Diagnoses bias in LLMs (large language models) from various points of view, allowing users to choose the most appropriate LLM (a minimal probing sketch follows this entry).

Related lifecycle stage(s): Plan & design
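
The entry above describes bias diagnosis only at a high level, so here is a minimal sketch of one common probing pattern: swap a demographic term in otherwise identical prompts and compare how the model's completions score. Everything below (the template, the groups, and the scorer) is an illustrative assumption, not this tool's actual method.

```python
# Minimal counterfactual bias probe: score model completions for prompts that
# differ only in a demographic term, then compare each group to the average.

TEMPLATE = "The {group} applicant was described as"
GROUPS = ["male", "female", "older", "younger"]

def score_completion(prompt: str) -> float:
    """Placeholder: call an LLM here and score its completion
    (e.g. with a sentiment or toxicity classifier)."""
    return 0.0  # stand-in value; replace with a real model call + scorer

def bias_gaps(template: str, groups: list[str]) -> dict[str, float]:
    """Each group's score minus the cross-group mean; non-zero gaps hint at bias."""
    scores = {g: score_completion(template.format(group=g)) for g in groups}
    mean = sum(scores.values()) / len(scores)
    return {g: round(s - mean, 4) for g, s in scores.items()}

print(bias_gaps(TEMPLATE, GROUPS))
```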

Educational · Uploaded on Apr 2, 2024 · <1 hour
Approaches to disability-centered data, models, and systems oversight.

Procedural · Brazil · Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).

Technical · Educational · Procedural · United States · Uploaded on Jan 17, 2024
This document provides risk-management practices or controls for identifying, analyzing, and mitigating risks of large language models or other general-purpose AI systems (GPAIS) and foundation models. This document facilitates conformity with or use of leading AI risk management-related standards, adapting and building on the generic voluntary guidance in the NIST AI Risk Management Framework and ISO/IEC 23894, with a focus on the unique issues faced by developers of GPAIS.

Related lifecycle stage(s): Operate & monitor · Deploy · Plan & design

Uploaded on Dec 14, 2023
Our work enables developers and policymakers to anticipate, measure, and address discrimination as language model capabilities and applications continue to expand (a toy parity-gap calculation follows this entry).

Related lifecycle stage(s): Operate & monitor · Deploy · Plan & design
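
As a toy illustration of the kind of measurement this work points to, the sketch below computes a demographic parity gap: the spread in positive-decision rates across groups for otherwise identical decision prompts. The groups and decision data are made up for illustration; this is not the authors' methodology.

```python
# Toy demographic parity check over yes/no model decisions per group.

def positive_rate(decisions: list[bool]) -> float:
    """Fraction of decisions that were positive (e.g. 'yes, approve')."""
    return sum(decisions) / len(decisions)

# Hypothetical decisions for prompts that vary only the group mentioned.
decisions_by_group = {
    "group_a": [True, True, False, True],
    "group_b": [True, False, False, False],
}

rates = {g: positive_rate(d) for g, d in decisions_by_group.items()}
parity_gap = max(rates.values()) - min(rates.values())
print(rates)                             # {'group_a': 0.75, 'group_b': 0.25}
print(f"parity gap = {parity_gap:.2f}")  # 0.50 -> large disparity
```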

Procedural · Canada · Uploaded on Nov 14, 2023
Under this voluntary commitment, developers and managers of advanced generative systems commit to working toward outcomes aligned with the OECD AI Principles.

Procedural · Uploaded on Oct 26, 2023
The Playbook provides suggested actions for achieving the outcomes laid out in the AI Risk Management Framework (AI RMF) Core (Tables 1–4 in AI RMF 1.0). Suggestions are aligned to each sub-category within the four AI RMF functions (Govern, Map, Measure, Manage).

Procedural · Uploaded on Oct 26, 2023
The AIRC supports all AI actors in the development and deployment of trustworthy and responsible AI technologies. It supports and operationalizes the NIST AI Risk Management Framework (AI RMF 1.0) and accompanying Playbook, and will grow with enhancements to enable an interactive, role-based experience providing access to a wide range of relevant AI resources.

Procedural · Uploaded on Oct 26, 2023
The goal of the AI RMF is to offer organizations designing, developing, deploying, or using AI systems a resource to help them manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.

Procedural · United Kingdom · Uploaded on Oct 6, 2023
This report proposes a model for Equality Impact Assessment of AI tools, building on research which finds that existing fairness and bias auditing solutions are inadequate for ensuring compliance with UK equalities legislation (a toy example of one such audit statistic follows this entry).
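
For context on what "existing fairness and bias auditing solutions" typically compute, here is a minimal sketch of one standard check, the disparate impact ratio (selection rate of a protected group divided by that of the reference group). The numbers and the 0.8 threshold are illustrative; the report's argument is precisely that such checks alone do not establish compliance.

```python
# Disparate impact ratio: a common, simple fairness audit statistic.

def selection_rate(selected: int, total: int) -> float:
    return selected / total

reference_rate = selection_rate(selected=60, total=100)  # reference group
protected_rate = selection_rate(selected=40, total=100)  # protected group

ratio = protected_rate / reference_rate
print(f"disparate impact ratio = {ratio:.2f}")  # 0.67; < 0.8 often flags concern
```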

Educational · United Kingdom · Uploaded on Oct 6, 2023
A toolkit for employers and workers seeking to understand the challenges and opportunities of using algorithmic systems that make or inform decisions about workers.

Procedural · Uploaded on Oct 6, 2023
Guidance aimed at encouraging employee-led design and development of algorithmic systems in the workplace.

Educational · United Kingdom · Uploaded on Oct 6, 2023
Provides the regulatory framework for incorporating rights, freedoms, and obligations relevant to work and people's experience of it, with a focus on technology-specific guidance.

Uploaded on Sep 14, 2023 · <1 day
FAIRLY provides an AI governance platform focussed on accelerating the broad use of fair and responsible AI by helping organisations bring safer AI models to market.

Procedural · United Kingdom · Uploaded on Sep 11, 2023 · >1 year
The Trustworthy and Ethical Assurance platform is an open-source tool and framework to support the process of developing and communicating trustworthy and ethical assurance cases for data-driven technologies (a rough sketch of an assurance-case structure follows this entry).

Related lifecycle stage(s): Deploy · Verify & validate · Plan & design
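
To make the idea of an assurance case concrete, here is a rough sketch of its core structure: a top-level claim decomposed into sub-claims, each ultimately backed by evidence. The field names and the supported() rule are assumptions for illustration, not the platform's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One node in an assurance case: a claim, its evidence, and sub-claims."""
    statement: str
    evidence: list[str] = field(default_factory=list)     # links to artefacts
    sub_claims: list["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        # A claim holds if it has direct evidence, or if it decomposes into
        # sub-claims that all hold.
        if self.evidence:
            return True
        return bool(self.sub_claims) and all(c.supported() for c in self.sub_claims)

case = Claim(
    statement="The system is trustworthy in deployment",
    sub_claims=[
        Claim("Training data was audited for bias", evidence=["audit-report.pdf"]),
        Claim("Outcomes are monitored after deployment"),  # no evidence yet
    ],
)
print(case.supported())  # False: the monitoring sub-claim lacks evidence
```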

United Kingdom · Uploaded on Sep 11, 2023
Using the latest advances in ethical artificial intelligence, CESIUM supports risk assessment for safeguarding the most vulnerable children in society.

Related lifecycle stage(s): Verify & validate · Build & interpret model

Educational · Uploaded on Jun 8, 2023
The independent, open, public-interest resource detailing incidents and controversies driven by and relating to artificial intelligence, algorithms, and automation.

Procedural · Uploaded on May 23, 2023
The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind.

Procedural · Uploaded on May 23, 2023
Australia’s 8 Artificial Intelligence (AI) Ethics Principles are designed to ensure AI is safe, secure and reliable.

Procedural · Uploaded on May 22, 2023
The goal of AI4People is to create a common public space for laying out the founding principles, policies and practices on which to build a “good AI society”.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.