Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Objective: Respect of human rights

Procedural | United States | Uploaded on Sep 10, 2024
The Risk Management Profile for Artificial Intelligence and Human Rights serves as a practical guide for organisations—including governments, the private sector, and civil society—to design, develop, deploy, use, and govern AI in a manner consistent with respect for international human rights.

Procedural | Uploaded on Jul 2, 2024
This document describes the effects of population demographics on biometric functions.


Technical | United States | Uploaded on Apr 22, 2024
Wild Me's first product, Wildbook, supports researchers by enabling collaboration across the globe and automated photo-ID matching.

Technical | United Kingdom | Uploaded on Apr 22, 2024
JAX implementation of OpenAI's Whisper model for up to 70x speed-up on TPU.

Technical, Procedural | United States, Japan | Uploaded on Apr 19, 2024
Diagnoses bias in large language models (LLMs) from various points of view, allowing users to choose the most appropriate LLM.


Technical | France | Uploaded on Apr 2, 2024
Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀

Technical | Germany | Uploaded on Apr 2, 2024
🧙 A web app to generate template code for machine learning.


Technical | France | Uploaded on Apr 2, 2024
Interpretability methods for tf.keras models with TensorFlow 2.x.

Technical | Uploaded on Apr 2, 2024
A neural net training interface on TensorFlow, with a focus on speed and flexibility.

Technical | Uploaded on Apr 29, 2024
A repository to quickly generate synthetic data and associated trojaned deep learning models.

Technical | Philippines | Uploaded on Apr 2, 2024
This repository contains code examples for Stanford's course "TensorFlow for Deep Learning Research".

Technical | Canada | Uploaded on Apr 2, 2024
Signal forecasting with a sequence-to-sequence (seq2seq) recurrent neural network (RNN) model in TensorFlow, by Guillaume Chevalier.

Procedural | United Kingdom | Uploaded on Feb 20, 2024
This Responsible AI Governance framework (2024) provides Boards with signposts to high-level areas of responsibility and accountability through a checklist of twelve principles that could sit at the heart of an AI governance policy. It is intended for Boards that wish to start their AI journey, or for those that recognise that AI governance may be mission-critical since the emergence of generative AI (Gen AI) and large language models (LLMs).

Procedural | United Kingdom | Uploaded on Feb 20, 2024
Anekanta® AI's Privacy Impact Risk Assessment System™ identifies the potential privacy impact risks that arise when using remote biometric AI systems, including facial recognition technology. The system creates automated reports that recommend mitigations considering all risks, including AI governance, human rights, union law, employment rights, privacy rights, impact assessments, prohibitions, voluntary and harmonised standards, and good practice.

Procedural | United Kingdom | Uploaded on Feb 20, 2024
Anekanta® AI's AI Risk Intelligence System™ solves the problem of biometrics and AI system risk classification by challenging the entire AI system lifecycle and providing risk reports that inform compliance requirements in line with the EU AI Act and international regulatory frameworks.

Technical | Korea | Uploaded on Dec 15, 2023
Unofficial implementation of RobustSTL: A Robust Seasonal-Trend Decomposition Algorithm for Long Time Series (AAAI 2019).


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.