Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of Tools & Metrics for Trustworthy AI, we would love to hear from you!

Filters applied: Approach: Technical | Lifecycle stage(s): Collect & process data | Objective: Robustness

Technical | Korea | Uploaded on Apr 29, 2024
Unofficial Implementation of RobustSTL: A Robust Seasonal-Trend Decomposition Algorithm for Long Time Series (AAAI 2019)

Related lifecycle stage(s)

Collect & process data | Plan & design
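RobustSTL itself relies on least-absolute-deviation fitting and bilateral filtering; as a rough illustration of what any seasonal-trend decomposition produces, here is a much simpler classical sketch (moving-average trend plus per-phase seasonal means; the function names are ours, not the repository's):

```python
# Illustrative seasonal-trend decomposition: a simple stand-in for
# RobustSTL, showing the trend/seasonal/remainder split it computes.

def decompose(series, period):
    """Split a series into trend, seasonal, and remainder components."""
    n = len(series)
    half = period // 2
    # Trend: centered moving average (window clipped at the edges).
    trend = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        window = series[lo:hi]
        trend.append(sum(window) / len(window))
    detrended = [x - t for x, t in zip(series, trend)]
    # Seasonal: mean of detrended values at each phase of the cycle.
    seasonal_means = []
    for phase in range(period):
        vals = detrended[phase::period]
        seasonal_means.append(sum(vals) / len(vals))
    seasonal = [seasonal_means[i % period] for i in range(n)]
    # Remainder: whatever trend and seasonality do not explain.
    remainder = [x - t - s for x, t, s in zip(series, trend, seasonal)]
    return trend, seasonal, remainder

# Example: a rising trend plus a period-4 cycle.
data = [i * 0.5 + [0, 2, 0, -2][i % 4] for i in range(24)]
trend, seasonal, remainder = decompose(data, period=4)
```

RobustSTL replaces the moving average and per-phase means with robust estimators so that outliers and abrupt trend changes do not corrupt the components.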

Technical | United States | Uploaded on Apr 22, 2024
Code for the NeurIPS 2019 paper: You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
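YOPO accelerates PGD-based adversarial training; the attack it defends against can be illustrated in miniature. Below is a fast-gradient-sign (FGSM-style) perturbation of a linear classifier, not the YOPO algorithm itself, with weights and inputs chosen purely for illustration:

```python
# Minimal FGSM-style adversarial perturbation on a linear classifier.
# Shows how a small, gradient-aligned change flips a correct prediction.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_linear(x, w, b, y, eps):
    """Perturb x to increase the loss of a linear classifier.

    For f(x) = w.x + b with label y in {-1, +1}, the loss gradient with
    respect to x is proportional to -y * w, so the FGSM step moves each
    coordinate by eps * sign(-y * w_i).
    """
    return [xi + eps * sign(-y * wi) for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.2], 1          # correctly classified: w.x + b = 0.8 > 0
x_adv = fgsm_linear(x, w, b, y, eps=0.6)
adv_score = sum(wi * xi for wi, xi in zip(w, x_adv)) + b  # now negative
```

Adversarial training repeatedly generates such perturbed inputs during training; YOPO's contribution is restricting the expensive gradient propagation to the first layer so this inner loop runs far fewer full passes.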

Technical | Uploaded on Apr 2, 2024
Model extraction attacks on Machine-Learning-as-a-Service platforms.
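The core idea of model extraction can be shown in a few lines: query the victim service on chosen inputs and fit a surrogate to its answers. The sketch below uses a hypothetical threshold "victim" and a 1-nearest-neighbour surrogate for illustration; real attacks train a full substitute model:

```python
# Minimal illustration of model extraction against a black-box classifier.

def black_box(x):
    """The victim model: a secret decision rule the attacker cannot see."""
    return 1 if 0.3 * x[0] + 0.7 * x[1] > 0.5 else 0

# Attacker queries the service on a grid of probe points and records labels.
probes = [(i / 4, j / 4) for i in range(5) for j in range(5)]
stolen = [(p, black_box(p)) for p in probes]

def surrogate(x):
    """Predict with the label of the nearest stolen query point."""
    nearest = min(
        stolen,
        key=lambda item: (item[0][0] - x[0]) ** 2 + (item[0][1] - x[1]) ** 2,
    )
    return nearest[1]

# Agreement between surrogate and victim on the probed region.
agreement = sum(surrogate(p) == black_box(p) for p in probes) / len(probes)
```

The attacker ends up with a local copy of the decision boundary without ever seeing the model's parameters, which is why MLaaS platforms rate-limit and monitor query patterns.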

Technical | Uploaded on Apr 2, 2024
Backtest thousands of minute-by-minute trading algorithms for training AI, with automated pricing data from IEX, Tradier, and FinViz. Datasets and trading performance are automatically published to S3 for building AI training datasets that teach DNNs how to trade. Runs on Kubernetes and docker-compose. More than 150 million trading-history rows have been generated from over 5,000 algorithms. Note: Yahoo's Finance API was disabled on 2019-01-03 (https://developer.yahoo.com/yql/).

Related lifecycle stage(s)

Collect & process data
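A minute-bar backtest boils down to replaying prices through a strategy and tracking the resulting portfolio value. The sketch below uses a hypothetical moving-average crossover on synthetic prices; it does not reflect the platform's actual dataset schema or APIs:

```python
# Minimal backtest loop: go long when the fast moving average crosses
# above the slow one, exit when it crosses back below.

def backtest(prices, fast=3, slow=5, cash=1000.0):
    position = 0.0
    for i in range(slow, len(prices)):
        fast_ma = sum(prices[i - fast:i]) / fast
        slow_ma = sum(prices[i - slow:i]) / slow
        price = prices[i]
        if fast_ma > slow_ma and position == 0.0:
            position = cash / price          # buy with all cash
            cash = 0.0
        elif fast_ma < slow_ma and position > 0.0:
            cash = position * price          # sell the whole position
            position = 0.0
    # Mark any open position to market at the final price.
    return cash + position * prices[-1]

prices = [10, 10, 10, 10, 10, 11, 12, 13, 12, 11, 10, 10]
final_value = backtest(prices)
```

Production backtesters add transaction costs, slippage, and position sizing, but the replay-and-mark-to-market loop is the same.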

Technical | United States | Uploaded on Apr 2, 2024
Simple and ready-to-use tutorials for TensorFlow

Related lifecycle stage(s)

Collect & process data

Technical | Singapore | Uploaded on Dec 15, 2023
This notebook explores a machine learning approach to finding anomalies in stock options pricing.
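A minimal baseline for such anomaly detection is a z-score test: flag prices that sit many standard deviations from the mean. This is a far simpler stand-in for the notebook's machine-learning approach, with made-up example prices:

```python
# Z-score anomaly detection over a list of observed prices.

def zscore_anomalies(values, threshold=3.0):
    """Return indices whose value deviates from the mean by more than
    `threshold` standard deviations."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    if std == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# A single badly mispriced option stands out from the cluster near 1.00.
prices = [1.00, 1.02, 0.98, 1.01, 0.99, 1.03, 5.00, 1.00]
outliers = zscore_anomalies(prices, threshold=2.0)
```

Learned detectors improve on this by modelling the expected price from features (moneyness, expiry, implied volatility) instead of assuming a single static distribution.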

Technical | Germany | Uploaded on Dec 15, 2023
Qdrant: a high-performance, massive-scale vector database for the next generation of AI. Also available in the cloud: https://cloud.qdrant.io/
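At its core, a vector database answers nearest-neighbour queries over embeddings. The brute-force cosine-similarity sketch below illustrates that operation in plain Python; it is not Qdrant's API, which adds approximate indexing (HNSW), payload filtering, and persistence on top:

```python
# Brute-force vector search: rank stored vectors by cosine similarity
# to a query embedding and return the top hits.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(collection, query, limit=2):
    """Return the ids of the `limit` vectors most similar to `query`."""
    scored = sorted(
        collection.items(),
        key=lambda kv: cosine(kv[1], query),
        reverse=True,
    )
    return [point_id for point_id, _ in scored[:limit]]

collection = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.9, 0.1, 0.0],
    "doc-c": [0.0, 0.0, 1.0],
}
hits = search(collection, query=[1.0, 0.05, 0.0], limit=2)
```

Brute force is O(n) per query; dedicated engines trade a little recall for sub-linear search over billions of vectors.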

Technical | China | Uploaded on Dec 15, 2023
This is the homepage of a new book entitled "Mathematical Foundations of Reinforcement Learning."

Technical | United States | Uploaded on Dec 15, 2023
A set of deep reinforcement learning agents implemented in TensorFlow.

Technical | Uploaded on Dec 15, 2023
Simple reinforcement learning tutorials (莫烦Python, AI tutorials in Chinese)

Related lifecycle stage(s)

Collect & process data | Plan & design
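The reinforcement-learning resources above all build on the same foundations. As a self-contained taste, here is tabular Q-learning on a tiny chain MDP (our own toy environment, not an example from any of these repositories):

```python
# Tabular Q-learning on a 5-state chain: the agent starts at state 0 and
# earns reward 1 for reaching state 4. Epsilon-greedy exploration plus the
# standard Q-update learn the "always move right" policy.

import random

N_STATES = 5                  # states 0..4; state 4 is terminal
ACTIONS = (-1, +1)            # move left or right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):
    state, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)          # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        next_state, reward, done = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# Greedy policy per state after training.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
```

Deep RL replaces the Q-table with a neural network, but the update rule and the exploration/exploitation trade-off are the same ones this toy exposes.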

Technical | Switzerland | Uploaded on Dec 15, 2023
Pixel-Perfect Structure-from-Motion with Featuremetric Refinement (ICCV 2021, Best Student Paper Award)

Technical | Korea | Uploaded on Dec 15, 2023
Unofficial Implementation of RobustSTL: A Robust Seasonal-Trend Decomposition Algorithm for Long Time Series (AAAI 2019)

Technical | Germany | Uploaded on Dec 15, 2023
Remote Sensing for Movement Ecology


Technical | Uploaded on Dec 11, 2023
This paper systematically quantifies, for the first time, speech recognition bias against gender, age, regional accents, and non-native accents. It investigates the origin of this bias cross-lingually (Dutch and Mandarin) and across two state-of-the-art ASR architectures (a hybrid DNN-HMM and an attention-based end-to-end (E2E) model) through a phoneme error analysis.

Related lifecycle stage(s)

Verify & validate | Collect & process data
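Error analyses of this kind rest on Levenshtein alignment between a reference transcript and the recognizer's hypothesis. A minimal word-error-rate (WER) computation, with a made-up sentence pair (the same dynamic program applies at the phoneme level):

```python
# Word error rate via Levenshtein (edit) distance between word sequences.

def edit_distance(ref, hyp):
    """Minimum substitutions + insertions + deletions turning ref into hyp."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)]

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the") in six words.
rate = wer("the cat sat on the mat", "the cat sit on mat")
```

Comparing such rates across speaker groups (by gender, age, or accent) is what makes the bias in the paper measurable.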

Technical | Educational | Procedural | United States | Uploaded on Jun 11, 2023
NB Defense is an open-source JupyterLab extension and CLI tool for detecting security issues in Jupyter notebooks, ranging from leaked PII in notebook output to vulnerable versions of installed dependencies.
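A toy version of one such check can be written against the notebook JSON format directly: walk the cells and scan their outputs with a detector pattern. The sketch below uses a single email-address regex as a stand-in for PII detection; NB Defense's real detectors and scan logic are more extensive:

```python
# Scan a notebook's cell outputs for email-like strings (toy PII check).

import json
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scan_notebook(nb_json):
    """Return (cell_index, match) pairs for email-like strings in outputs."""
    findings = []
    for i, cell in enumerate(json.loads(nb_json).get("cells", [])):
        for output in cell.get("outputs", []):
            text = "".join(output.get("text", []))
            for match in EMAIL.findall(text):
                findings.append((i, match))
    return findings

# A minimal two-cell notebook; the second cell leaks an address.
notebook = json.dumps({
    "cells": [
        {"cell_type": "code", "outputs": [{"text": ["loss: 0.02\n"]}]},
        {"cell_type": "code", "outputs": [{"text": ["user: jane.doe@example.com\n"]}]},
    ]
})
findings = scan_notebook(notebook)
```

Notebook outputs are easy to forget about precisely because they live in the file after execution, which is why scanning them before sharing or committing matters.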

Technical | Germany | Uploaded on Jun 8, 2023
Machine Learning in R

Technical | United States | Uploaded on Mar 31, 2023
The AI Vulnerability Database (AVID) is an open-source knowledge base of failure modes for AI models, datasets, and systems. Its goals are to (1) build a functional taxonomy of AI harms across the coordinates of security, ethics, and performance; (2) house full-fidelity information (metadata, harm metrics, measurements, benchmarks, and mitigation techniques, if any) on harm evaluations; and (3) evaluate systems, models, and datasets for specific harms and persist the structured results into a single source of truth.
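A "single source of truth" for harm evaluations implies structured, machine-readable records. The sketch below shows the general shape of such a record as a serializable document; this is an illustrative schema of ours, not AVID's actual report format:

```python
# Illustrative harm-evaluation record: artifact, harm domain, metrics,
# and mitigations, serialized as JSON for storage and querying.

import json

report = {
    "affected_artifact": {"type": "model", "name": "example-sentiment-model"},
    "harm_domain": "ethics",   # AVID's taxonomy spans security, ethics, performance
    "description": "Higher error rate on non-native accented speech.",
    "metrics": [{"name": "error_rate_gap", "value": 0.12}],
    "mitigations": ["Augment training data with accented speech."],
}

serialized = json.dumps(report, sort_keys=True)
restored = json.loads(serialized)
```

Keeping the taxonomy label, the measurements, and the mitigation in one record is what lets a database like this be aggregated and searched across many evaluations.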

Technical | Uploaded on Sep 20, 2022
Dopamine is a research framework for fast prototyping of reinforcement learning algorithms.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.