Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Showing tools filtered by: Fairness

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!


Technical | Poland | Uploaded on 15 Dec. 2023
A curated list of resources on the topic of Document Understanding (DU).

Technical | China | Uploaded on 15 Dec. 2023
This is the homepage of a new book entitled "Mathematical Foundations of Reinforcement Learning."

Technical | United States | Uploaded on 15 Dec. 2023
A set of deep reinforcement learning agents implemented in TensorFlow.

Technical | Armenia | Uploaded on 15 Dec. 2023
A new one-shot face-swap approach for image and video domains.

Technical | France | Uploaded on 15 Dec. 2023
Efficient, scalable and enterprise-grade CPU/GPU inference server for Hugging Face transformer models.

Related lifecycle stage(s)

Verify & validate; Build & interpret model

Technical | Uploaded on 11 Dec. 2023
Gluon CV Toolkit

Technical | Uploaded on 11 Dec. 2023
In this work, we propose a two-stage framework that produces debiased representations by using a fairness-constrained adversarial framework in the first stage.
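The paper's exact two-stage pipeline is not reproduced here, but the core idea of debiasing representations with an adversary can be sketched in PyTorch: an encoder is trained on the main task while a gradient-reversal layer pushes it to discard information about a protected attribute. The toy data and all names below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(10, 16), nn.ReLU())
task_head = nn.Linear(16, 1)   # predicts the actual label
adversary = nn.Linear(16, 1)   # tries to recover the protected attribute

params = list(encoder.parameters()) + list(task_head.parameters()) + list(adversary.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# toy data: 256 samples, binary label y, binary protected attribute a
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256, 1)).float()
a = torch.randint(0, 2, (256, 1)).float()

for step in range(200):
    z = encoder(X)
    task_loss = bce(task_head(z), y)
    # the adversary learns to predict `a`; the reversed gradient pushes the
    # encoder toward representations that carry no information about `a`
    adv_loss = bce(adversary(GradReverse.apply(z, 1.0)), a)
    loss = task_loss + adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```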

Technical | Uploaded on 11 Dec. 2023
This paper, for the first time, systematically quantifies speech recognition bias against gender, age, regional accents, and non-native accents. It investigates the origin of this bias cross-lingually (in Dutch and Mandarin) and for two different SotA ASR architectures (a hybrid DNN-HMM and an attention-based end-to-end (E2E) model) through a phoneme error analysis.

Related lifecycle stage(s)

Verify & validate; Collect & process data
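As a hedged illustration of the kind of group-wise error analysis this line of work relies on (the paper itself analyses phoneme errors; word error rate is used here for brevity), one can aggregate recognition errors per demographic group and compare the rates. The transcripts and group labels below are made up.

```python
from collections import defaultdict

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance via dynamic programming."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1]

# hypothetical (reference, hypothesis, speaker group) triples
samples = [
    ("the cat sat", "the cat sat", "native"),
    ("she sells sea shells", "she sells shells", "native"),
    ("the weather is nice", "the winter is mice", "non-native"),
]

errors, words = defaultdict(int), defaultdict(int)
for ref, hyp, group in samples:
    ref_words = ref.split()
    errors[group] += edit_distance(ref_words, hyp.split())
    words[group] += len(ref_words)

for group in errors:
    print(f"{group}: WER = {errors[group] / words[group]:.2%}")
```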

Technical | Uploaded on 11 Dec. 2023
Practical recommendations for mitigating bias in automated speaker recognition, along with an outline of future research directions.

Technical | United States | India | Uploaded on 11 Dec. 2023 | <6 months
A repository of algorithmic-bias-related metrics and measures that researchers and practitioners can leverage. The repository is also intended to help researchers expand their work and identify further metrics that may be relevant and appropriate to a specific context.

Related lifecycle stage(s)

Verify & validate
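To give a flavour of what such a repository collects, below is a minimal sketch of two widely used group-fairness metrics, statistical parity difference and equal opportunity difference, computed with NumPy on made-up predictions; the repository covers many more, with guidance on when each applies.

```python
import numpy as np

# hypothetical binary predictions, true labels, and a protected attribute
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = privileged, 1 = unprivileged

def statistical_parity_difference(y_pred, group):
    """P(pred = 1 | unprivileged) - P(pred = 1 | privileged)."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

print("SPD:", statistical_parity_difference(y_pred, group))
print("EOD:", equal_opportunity_difference(y_true, y_pred, group))
```

A value of zero on either metric means parity between the two groups; which metric matters depends on the application context.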

Technical | Netherlands | Uploaded on 29 Nov. 2023
This bias detection tool identifies groups of similar users that are potentially treated unfairly by a binary algorithmic classifier. The tool identifies clusters of users that face a higher misclassification rate than the rest of the data set. Because clustering is an unsupervised ML method, no data on users' protected attributes is required. The metric by which bias is defined can be chosen manually in advance: False Negative Rate (FNR), False Positive Rate (FPR), or Accuracy (Acc).

Related lifecycle stage(s)

Verify & validate
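The tool's own implementation is not shown here, but the idea it describes can be sketched: cluster the feature space without using protected attributes, then flag clusters whose chosen error metric (here the false negative rate) exceeds that of the rest of the data set. The clustering algorithm, cluster count, and random data below are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))      # features; no protected attributes required
y_true = rng.integers(0, 2, 500)   # ground-truth labels
y_pred = rng.integers(0, 2, 500)   # output of the binary classifier under audit

def fnr(y_true, y_pred):
    """False negative rate: share of true positives predicted as negative."""
    positives = y_true == 1
    return np.mean(y_pred[positives] == 0) if positives.any() else np.nan

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

for c in range(8):
    in_c = labels == c
    cluster_fnr = fnr(y_true[in_c], y_pred[in_c])
    rest_fnr = fnr(y_true[~in_c], y_pred[~in_c])
    if cluster_fnr > rest_fnr:  # a real audit would also test significance
        print(f"cluster {c}: FNR {cluster_fnr:.2f} vs rest {rest_fnr:.2f}")
```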

Technical | United Kingdom | Japan | Europe | Uploaded on 10 Nov. 2023 | <1 hour
Intersectional Fairness (ISF) is a bias detection and mitigation technology developed by Fujitsu for intersectional bias, which is caused by the combinations of multiple protected attributes. ISF is hosted as an open source project by the Linux Foundation.
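ISF's own API is not reproduced here; as a sketch of the underlying idea, intersectional bias can be surfaced by computing a metric per combination of protected attributes rather than per attribute alone, since a subgroup can fare far worse than either of its parent groups suggests. The pandas code and data below are illustrative assumptions.

```python
import pandas as pd

# hypothetical audit data: two protected attributes and a model's decisions
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "F", "M"],
    "age_band": ["<30", "30+", "<30", "<30", "30+", "30+", "30+", "<30"],
    "approved": [0, 1, 0, 1, 1, 1, 1, 1],
})

# the single-attribute view can hide bias that only the intersection reveals
print(df.groupby("gender")["approved"].mean())
print(df.groupby("age_band")["approved"].mean())

# intersectional view: approval rate per combination of attributes
rates = df.groupby(["gender", "age_band"])["approved"].mean()
print(rates)
print("worst/best ratio:", rates.min() / rates.max())  # disparate-impact style ratio
```

In this toy data, each single attribute shows a moderate gap, while the ("F", "<30") intersection receives no approvals at all, which is exactly the pattern intersectional analysis is designed to catch.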

Technical | Uploaded on 13 Oct. 2023
A zero-trust 360 RAIAAS platform for the audit and certification of enterprise-wide AI solutions, using trust indicators that follow ethical standards.

Technical | United States | Uploaded on 2 Oct. 2023
Automate, simplify, and streamline your end-to-end AI risk management process.

Technical | Germany | Uploaded on 7 Sept. 2023
Biaslyze is a Python package that helps users get started with analyzing bias in NLP models and offers a concrete entry point for further impact assessments and mitigation measures.

Related lifecycle stage(s)

Verify & validate; Build & interpret model

Technical | Uploaded on 28 Aug. 2023 | >1 year
PAM is a contextual bias-detection solution for AI and ML models. It helps achieve trustworthiness by identifying hidden bias prior to launch and improving explainability. Get a socio-technical bias analysis within minutes to validate your solution pre-market and ensure compliance with AI regulations.

Technical | Uploaded on 26 June 2023
Multi-VALUE is a suite of resources for evaluating and achieving English dialect invariance.

Technical | Germany | Uploaded on 22 June 2023 | <1 day
QuantPi's platform unites AI testing with AI governance. It is the cockpit in which AI-first organizations collaborate to efficiently and responsibly understand, enhance, and steer their individual AI models as well as their complete AI landscape.

Technical | Uploaded on 21 June 2023
Use a model inventory and AI Factsheets as part of your AI Governance strategy to track the lifecycles of machine learning models from training to production. View factsheets for model assets that track lineage events and facilitate efficient ModelOps governance.

Technical | Belgium | Uploaded on 16 June 2023
Justifai is an AI platform that enables business users to build trustworthy AI solutions quickly, cost-effectively, and with minimal compliance risk.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.
