Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.


SUBMIT A TOOL USE CASE

If you have a tool use case that you think should be featured in the Catalogue of Tools & Metrics for Trustworthy AI, we would love to hear from you!

People who have used tools in the catalogue share their experiences and advice here. If you have used one of the tools, please submit a use case to share your experience.

Shakers' AI Matchmaking System

Shakers' AI matchmaking tool connects freelancers and projects by analysing experience, skills, and behaviours, matching talent with clients on both personal and professional criteria across its community.


Higher-dimensional bias in a BERT-based disinformation classifier

Application of the bias detection tool to a self-trained BERT-based disinformation classifier trained on the Twitter1516 dataset.


How You Match functionality in InfoJobs

In InfoJobs, the information available in candidates' résumés and in posted job offers is used to compute a matching score between a job seeker and a given job offer. This matching score is called ‘How You Match’ and is currently used at multiple user touchpoints in InfoJobs, the leading job board in Spain.
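As an illustration of the general idea only, the sketch below computes a simple text-similarity score between a résumé and a job offer. It is not InfoJobs' actual algorithm; the texts and the TF-IDF/cosine-similarity approach are hypothetical stand-ins.

```python
# Minimal illustrative sketch of a resume-to-job matching score.
# NOT InfoJobs' actual method; texts and approach are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resume = "Python developer with 5 years of experience in data engineering and SQL"
job_offer = "Looking for a data engineer skilled in Python, SQL and cloud pipelines"

# Represent both texts as TF-IDF vectors in a shared vocabulary
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([resume, job_offer])

# A single 0-1 score expressing how closely the two texts match
match_score = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"How-you-match score: {match_score:.2f}")
```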


Mind Foundry: Using Continuous Metalearning to govern AI models used for fraud detection in insurance

A use case applying continuous metalearning to identify, prioritise, and investigate fraudulent insurance claims.


British Standards Institution: EU AI Act Readiness Assessment and Algorithmic Auditing

AI providers need to ensure that their compliance efforts are correctly oriented towards full compliance with the EU AI Act. BSI therefore meets the needs of customers who will be regulated under the EU AI Act by offering readiness assessments and algorithm testing before the regulation takes effect.


Nvidia: Explainable AI for credit risk management

This case study focuses on the use of graphics processing units (GPUs) to accelerate SHAP explainable AI models for risk management, assessment and scoring of credit portfolios in traditional banks, as well as in fintech platforms for peer-to-peer (P2P) lending and crowdfunding.

Related tool: SHAP
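As a hedged illustration of this kind of workflow, the sketch below explains a generic credit-default classifier with the SHAP library. The dataset, features, and model are hypothetical stand-ins rather than Nvidia's actual pipeline, and GPU acceleration is only noted in a comment.

```python
# Illustrative sketch: explaining a generic credit-risk model with SHAP.
# The data, features, and model are hypothetical, not the case study's pipeline.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical credit portfolio features
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "loan_amount": rng.uniform(1_000, 50_000, 1_000),
    "income": rng.uniform(15_000, 120_000, 1_000),
    "debt_to_income": rng.uniform(0, 1, 1_000),
    "credit_history_years": rng.integers(0, 30, 1_000),
})
y = (X["debt_to_income"] + rng.normal(0, 0.2, 1_000) > 0.7).astype(int)

X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# TreeExplainer computes SHAP values for tree ensembles; recent SHAP
# releases also ship a CUDA-accelerated tree explainer, which is the
# GPU aspect highlighted in this case study for large portfolios.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Per-feature contribution to each credit decision
print(pd.DataFrame(shap_values, columns=X.columns).head())
```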

Trilateral Research: Ethical impact assessment, risk assessment, transparency reporting, bias mitigation and co-design of AI used to safeguard children

This case study focuses on the use of an AI-enabled system called CESIUM to enhance decision making around the safeguarding of children at risk of criminal and sexual exploitation.

Related tool: CESIUM

Assessment for Responsible Artificial Intelligence

During this six-month pilot, the practical application of a deep learning algorithm from the province of Fryslân was investigated and assessed.

Related tool: Z-Inspection

European AI Scanner: Ensuring Compliance with the EU Artificial Intelligence Act

The European AI Scanner facilitates companies' compliance with the EU AI Act, ensuring trustworthy and responsible AI adoption in the European market.

Related tool: Orthrus

Using BigCode Open & Responsible AI License

This case focuses on the use of the BigCode Open & Responsible AI license to share a large language model for code generation, StarCoder.


Use cases using the IBM Factsheets

Several examples developed by IBM show how Factsheets can be built in practice.


Human resource management

A sample scenario in the context of human resource management illustrates the functioning of the Fairness Compass.

Related tool: Fairness Compass

Human-Robot Interaction Trust Scale (HRITS)

Translation and validation of the Human-Computer Trust Scale for human-robot interaction (HRI), applied to cobots.


Uncovering bugs in a health care model

The Google What-If tool helped a software developer spot errors in their model when assessing performance metrics.

Related tool: Google What-if Tool

Teaching codeless machine learning to auditors

Training for accountants illustrates that coding isn't always necessary to harness the value of machine learning.


How SAP promotes human agency through its AI policy

SAP provides guidance on human agency and oversight to employees building AI systems.


Reporting Carbon Emissions on Open-Source Model Cards

Reporting a model's carbon emissions helps make energy-efficiency disclosure the norm.

Related tool: Model Cards
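As a hedged sketch of how such reporting can be done in practice, the example below estimates training emissions with the codecarbon package and records the figure in a model card metadata block. The use of codecarbon, the project name, and the metadata layout are assumptions for illustration, not necessarily the tooling used in this case.

```python
# Hedged sketch: measure training emissions and record them in a model card.
# The training step, project name, and metadata layout are placeholders.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="my-open-source-model")
tracker.start()
# ... train the model here (placeholder) ...
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

# Record the figure in the model card's metadata block
# (Hugging Face model cards support a co2_eq_emissions field, in grams).
model_card_header = f"""---
co2_eq_emissions:
  emissions: {emissions_kg * 1000:.1f}
---
"""
print(model_card_header)
```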

Enterprise ChatGPT and LLM Governance

AI governance is crucial as generative AI systems like ChatGPT raise ethical concerns while businesses use them more extensively. 2021.AI's GRACE platform provides a solution to address these challenges.

Related tool: GRACE

CXPlain uncovers how certain factors impact housing prices in Boston

The AI explainability method provides insight into a model that predicts median housing prices.

Related tool: Causal Explanations

FairLens detected racial bias in a recidivism prediction algorithm

FairLens assessed the bias of a dataset from an algorithm used to measure a convicted criminal’s likelihood of reoffending.

Related tool: FairLens
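For illustration only, the sketch below shows the kind of group-level check such an assessment involves: comparing positive prediction rates across racial groups (a demographic parity gap). It uses made-up data and is not the FairLens API.

```python
# Generic demographic-parity check with made-up data; not the FairLens API.
import pandas as pd

df = pd.DataFrame({
    "race": ["A", "A", "B", "B", "B", "A"],
    "predicted_high_risk": [1, 0, 1, 1, 1, 0],
})

# Rate of "high risk" predictions per group, and the gap between groups
rates = df.groupby("race")["predicted_high_risk"].mean()
print(rates)
print("Demographic parity gap:", rates.max() - rates.min())
```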


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.