Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.


Procedural · Uploaded on Mar 26, 2024
These guidelines focus on one particular area of AI used in the research process: generative artificial intelligence. They are an important step toward preventing misuse and ensuring that generative AI plays a positive role in improving research practices.

Procedural · Saudi Arabia · Uploaded on Mar 26, 2024
An LLM survey for responsible, transparent, and safe AI, covering international compliance regulations and data/model evaluations.

Educational · Uploaded on Mar 14, 2024
Teeny-Tiny Castle is a collection of tutorials on how to use tools for AI Ethics and Safety research.

Procedural · Brazil · Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).

Procedural · United Kingdom · Uploaded on Feb 20, 2024
This Responsible AI Governance framework (2024) provides Boards with signposts to high-level areas of responsibility and accountability through a checklist of twelve principles which could sit at the heart of an AI governance policy. It is for Boards that wish to start their AI journey, or that recognize that AI governance may be mission-critical since the emergence of generative AI (Gen AI) and large language models (LLMs).

Procedural · United Kingdom · Uploaded on Feb 20, 2024
Anekanta® AI's Privacy Impact Risk Assessment System™ identifies the potential privacy risks that arise when using remote biometric AI systems, including facial recognition technology. The system creates automated reports that recommend mitigations covering all risks, including AI governance, human rights, union law, employment rights, privacy rights, impact assessments, prohibitions, voluntary and harmonized standards, and good practice.

Procedural · United Kingdom · Uploaded on Feb 20, 2024
Anekanta® AI’s AI Risk Intelligence System™ solves the problem of biometrics and AI system risk classification by challenging the entire AI system lifecycle and providing risk reports which inform compliance requirements in line with the EU AI Act and international regulatory frameworks.

Technical · Procedural · Germany · Uploaded on Feb 19, 2024
Casebase is a platform for portfolio management of data analytics & AI use cases. It supports companies in systematically developing their ideas in the field of artificial intelligence, documenting the development process and managing their data & AI roadmap. Particularly in the context of AI governance and the EU AI Act, Casebase helps to manage the risks of AI systems over their entire life cycle.

Procedural · United Kingdom · Uploaded on Jan 24, 2024
Ten core principles for generative AI use in government and public sector organisations.

Related lifecycle stage(s): Plan & design

Procedural · Singapore · Uploaded on Jan 24, 2024
This Model AI Governance Framework for Generative AI seeks to set forth a systematic and balanced approach to addressing generative AI concerns while continuing to facilitate innovation.

Procedural · Uploaded on Jan 24, 2024
This guidance addresses one type of generative AI, large multi-modal models (LMMs), which can accept one or more types of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm.

Related lifecycle stage(s): Build & interpret model; Plan & design

Technical · Uploaded on Jan 22, 2024
The Algorithmic Transparency Certificate from Adigital offers a compliance solution for organizations that use AI systems in their daily activity, reinforcing confidence in these systems and technologies through well-understood transparency.

Technical · Educational · Procedural · United States · Uploaded on Jan 17, 2024
This document provides risk-management practices or controls for identifying, analyzing, and mitigating risks of large language models or other general-purpose AI systems (GPAIS) and foundation models. This document facilitates conformity with or use of leading AI risk management-related standards, adapting and building on the generic voluntary guidance in the NIST AI Risk Management Framework and ISO/IEC 23894, with a focus on the unique issues faced by developers of GPAIS.

Procedural · Korea · Uploaded on Jan 17, 2024
This tool serves as reference guidelines for stakeholders such as data scientists and AI model developers working in AI product and service development, taking a practical perspective on ensuring the trustworthiness of AI. The guidelines present 15 development requirements and 67 verification items that can be checked.

Technical · France · Uploaded on Dec 15, 2023
Efficient, scalable and enterprise-grade CPU/GPU inference server for Hugging Face transformer models.

Technical · China · Uploaded on Dec 15, 2023
Text classification models implemented in Keras, including: FastText, TextCNN, TextRNN, TextBiRNN, TextAttBiRNN, HAN, RCNN, RCNNVariant, etc.

Related lifecycle stage(s): Build & interpret model
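One of the architectures named in that entry, TextCNN, can be sketched in a few lines of Keras. This is a minimal illustration under assumed hyperparameters (vocabulary size, sequence length, kernel sizes, and class count are all placeholders), not the catalogued repository's exact implementation.

```python
# Minimal TextCNN-style classifier in Keras: parallel 1-D convolutions over
# word embeddings, global max pooling, then a softmax classification head.
# All hyperparameters below are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 5000   # assumed vocabulary size
MAX_LEN = 100       # assumed (padded) sequence length
NUM_CLASSES = 2     # assumed number of target classes

inputs = keras.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 128)(inputs)

# Convolutions with several kernel widths capture n-gram features of
# different sizes; each branch is max-pooled over the sequence axis.
pooled = []
for kernel_size in (3, 4, 5):
    conv = layers.Conv1D(64, kernel_size, activation="relu")(x)
    pooled.append(layers.GlobalMaxPooling1D()(conv))

x = layers.Concatenate()(pooled)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# A batch of dummy token IDs yields per-class probabilities, one row per example.
dummy_batch = np.random.randint(0, VOCAB_SIZE, size=(4, MAX_LEN))
probs = model.predict(dummy_batch, verbose=0)
```

Training would proceed with `model.fit` on integer-encoded, padded text; the other listed variants (TextRNN, HAN, RCNN, etc.) swap the convolutional branches for recurrent or attention-based encoders.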

Technical · China · Uploaded on Dec 15, 2023
A PyTorch implementation of Speech Transformer, an End-to-End ASR with Transformer network on Mandarin Chinese.

Technical · Uploaded on Dec 15, 2023
Repository for PyImageSearch Crash Course on Computer Vision and Deep Learning

Technical · United States · Uploaded on Dec 15, 2023
A C++ & Python viewer for 3D data like meshes and point clouds

Technical · Switzerland · Uploaded on Dec 15, 2023
Pixel-Perfect Structure-from-Motion with Featuremetric Refinement (ICCV 2021, Best Student Paper Award)


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.