Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Objective: Safety · Uploaded on Apr 23, 2024
This Recommendation specifies use cases and requirements for AI-enabled multimedia communication vehicle systems, including an overview, use cases, high-layer architecture, service and network requirements, functional requirements, and non-functional requirements.

Technical · United Kingdom · Uploaded on Apr 22, 2024
Tutorials, assignments, and competitions for MIT Deep Learning related courses.

Technical · Finland · Uploaded on Apr 22, 2024
Jupyter notebooks for teaching/learning Python 3

Related lifecycle stage(s): Build & interpret model

Technical · Spain · Uploaded on Apr 22, 2024
Classical equations and diagrams in machine learning

Technical · Greece · Uploaded on Apr 22, 2024
Python Audio Analysis Library: Feature Extraction, Classification, Segmentation and Applications

Technical · Uploaded on Apr 2, 2024
A Python package for identifying 42 kinds of animals, training custom models, and estimating distance from camera trap videos

Related lifecycle stage(s): Build & interpret model

Technical · United States · Uploaded on Apr 2, 2024
Code for the NeurIPS 2019 paper: You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle

Related lifecycle stage(s): Build & interpret model

Technical · United States · Uploaded on Apr 2, 2024
YOLOv3 implemented in TensorFlow 2.0

Related lifecycle stage(s): Build & interpret model

Technical · Uploaded on Apr 2, 2024
The open big data serving engine. https://vespa.ai

Technical · United States · Uploaded on Apr 2, 2024
NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite

Technical · Uploaded on Apr 2, 2024
Trustworthy-AI-related projects

Technical · Uploaded on Apr 2, 2024
Deep learning library featuring a higher-level API for TensorFlow.

Technical · United States · Uploaded on Apr 2, 2024
Debugging, monitoring and visualization for Python Machine Learning and Data Science

Related lifecycle stage(s): Build & interpret model · Plan & design

Technical · Uploaded on Apr 2, 2024
A Neural Net Training Interface on TensorFlow, with focus on speed + flexibility

Objective(s)

Related lifecycle stage(s)

Build & interpret model

Technical · United States · Uploaded on Apr 2, 2024
Simple and ready-to-use tutorials for TensorFlow

Related lifecycle stage(s): Collect & process data

Educational · Uploaded on Apr 2, 2024 · <1 hour
Approaches to disability-centered data, models, and systems oversight

Procedural · Uploaded on Mar 26, 2024
These guidelines focus on one particular area of AI used in the research process: generative artificial intelligence. They are an important step toward preventing misuse and ensuring that generative AI plays a positive role in improving research practices.

Procedural · Saudi Arabia · Uploaded on Mar 26, 2024
An LLM survey for responsible, transparent, and safe AI, covering international compliance regulations and data and model evaluations

Procedural · United Kingdom · Uploaded on Feb 20, 2024
This Responsible AI Governance framework (2024) gives Boards signposts to high-level areas of responsibility and accountability through a checklist of twelve principles that could sit at the heart of an AI governance policy. It is intended for Boards that wish to start their AI journey, and for those that recognize that AI governance may be mission-critical since the emergence of generative AI (Gen AI) and large language models (LLMs).

Related lifecycle stage(s): Operate & monitor · Deploy · Plan & design

Procedural · United Kingdom · Uploaded on Feb 20, 2024
Anekanta® AI’s Privacy Impact Risk Assessment System™ identifies the potential privacy impact risks that arise when using remote biometric AI systems, including facial recognition technology. The system generates automated reports that recommend mitigations addressing all identified risks, including AI governance, human rights, union law, employment rights, privacy rights, impact assessments, prohibitions, voluntary and harmonized standards, and good practice.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.