These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

## Accuracy 155 related use cases

Accuracy is the proportion of correct predictions among the total number of cases processed. It can be computed with:

Accuracy = (TP + TN) / (TP + TN + FP + FN), where:

- TP: true positives
- TN: true negatives
- FP: false positives
- FN: false negatives
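The formula above translates directly into code; a minimal sketch (the function name is illustrative):

```python
def accuracy(tp, tn, fp, fn):
    """Proportion of correct predictions among all cases processed."""
    return (tp + tn) / (tp + tn + fp + fn)

# e.g. 50 true positives, 40 true negatives, 5 false positives, 5 false negatives
# gives an accuracy of 90 / 100 = 0.9
```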


## Mean Intersection over Union (IoU) 35 related use cases

Mean Intersection over Union (IoU) is the area of overlap between the predicted segmentation and the ground truth divided by the area of union between the predicted segmentation and the ground truth.

For binary (two-class) or multi-class segmentation, the mean IoU is computed by averaging the IoU over all classes.
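A pure-Python sketch of that definition for flat label arrays (names illustrative), averaging the per-class IoU and skipping classes absent from both masks:

```python
def mean_iou(pred, truth, num_classes):
    """Mean over classes of |overlap| / |union| between predicted and true masks."""
    ious = []
    for c in range(num_classes):
        p = {i for i, v in enumerate(pred) if v == c}   # pixels predicted as class c
        t = {i for i, v in enumerate(truth) if v == c}  # pixels truly of class c
        union = p | t
        if not union:        # class absent from both masks: skip it
            continue
        ious.append(len(p & t) / len(union))
    return sum(ious) / len(ious)
```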


## Mahalanobis Distance 26 related use cases

Mahalanobis distance is the distance between a point and a distribution (as opposed to the distance between two points), making it the multivariate equivalent of the Euclidean distance.

It is often used in multivariate anomaly detection and classification tasks.
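A sketch with NumPy, assuming the distribution is given as sample rows (names illustrative):

```python
import numpy as np

def mahalanobis(x, samples):
    """Distance from point x to the distribution estimated from `samples` rows."""
    mu = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))  # inverse covariance
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

When the covariance is the identity matrix this reduces to the Euclidean distance from the mean, which is the sense in which it generalizes Euclidean distance.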


## Anonymity Set Size 25 related use cases

The anonymity set of an individual *u*, denoted *AS_u*, is the set of users that the adversary cannot distinguish from *u*. Its size can be seen as the size of the crowd into which the target *u* can blend.

*priv_ASS ≡ |AS_u|*
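A minimal sketch, assuming the adversary's power is modeled as an indistinguishability predicate (the predicate and user records below are hypothetical):

```python
def anonymity_set(target, users, indistinguishable):
    """AS_u: the users the adversary cannot tell apart from `target`."""
    return [u for u in users if indistinguishable(target, u)]

# hypothetical adversary who only observes ZIP codes
users = [{"id": 1, "zip": "75001"}, {"id": 2, "zip": "75001"}, {"id": 3, "zip": "10115"}]
same_zip = lambda a, b: a["zip"] == b["zip"]
crowd = anonymity_set(users[0], users, same_zip)   # users 1 and 2 blend together
```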


## Equal performance 19 related use cases

If a model systematically makes errors disproportionately for patients in the protected group, it is likely to lead to unequal outcomes. *Equal performance* refers to the assurance that a model is equally accurate for patients in the protected and non-protected groups.
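One way to check this is to compute accuracy separately per group and compare; a sketch (names illustrative):

```python
def group_accuracies(y_true, y_pred, groups):
    """Accuracy computed separately for each (protected-)group label."""
    acc = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        acc[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return acc

# a large gap between the groups' accuracies signals unequal performance
```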


## Receiver Operating Characteristic Curve (ROC) and Area Under the Curve (AUC) 16 related use cases

This metric computes the area under the curve (AUC) for the Receiver Operating Characteristic curve (ROC). The return value represents how well the model predicts the correct classes, based on the input data. A score of 0.5 means that the model is performing no better than random chance.
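AUC can also be computed directly from its probabilistic interpretation: the probability that a randomly chosen positive example is scored above a randomly chosen negative one, with ties counting half. A sketch:

```python
def roc_auc(y_true, scores):
    """AUC = P(score of random positive > score of random negative), ties = 0.5."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```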


## Bilingual Evaluation Understudy (BLEU) 15 related use cases

Bilingual Evaluation Understudy (BLEU) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: the closer a machine translation is to a professional human translation, the better it is.
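A hand-rolled sentence-level sketch of the idea (clipped n-gram precision combined with a brevity penalty); production evaluations should use a reference implementation such as SacreBLEU:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Geometric mean of clipped 1..max_n-gram precisions, times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())            # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)   # crude smoothing of zeros
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = 1.0 if len(candidate) > len(reference) \
        else math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean
```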


## Time until Adversary’s Success 12 related use cases

The most general time-based metric measures the time until the adversary's success. It assumes that the adversary will eventually succeed, and is therefore an example of a pessimistic metric. This metric relies on a definition of success, which varies depending on the system and the adversary's goal.


## Amount of Leaked Information 12 related use cases

This metric counts the information items S disclosed by a system, e.g., the number of compromised users. However, it does not indicate the severity of a leak, because it does not account for the sensitivity of the leaked information.


## Precision 11 related use cases

Precision is the fraction of correctly labeled positive examples out of all of the examples that were labeled as positive. It is computed via the equation: Precision = TP / (TP + FP), where TP is the number of true positives (examples correctly labeled as positive) and FP is the number of false positives (negative examples incorrectly labeled as positive).
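As a sketch, directly from the counts:

```python
def precision(tp, fp):
    """Fraction of predicted positives that are truly positive."""
    return tp / (tp + fp)

# e.g. 8 correct positive labels out of 10 positive labels gives 0.8
```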


## Recall-Oriented Understudy for Gisting Evaluation (ROUGE) 10 related use cases

ROUGE, or Recall-Oriented Understudy for Gisting Evaluation, is a set of metrics and a software package used for evaluating automatic summarization and machine translation software in natural language processing. The metrics compare an automatically produced summary or translation against a reference (typically human-produced) summary or translation.
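The core recall computation (ROUGE-N) can be sketched as overlapping n-grams divided by the n-grams in the reference; full ROUGE packages also report precision and F-measure variants:

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: overlapping n-grams / total n-grams in the reference."""
    grams = lambda toks: Counter(tuple(toks[i:i + n])
                                 for i in range(len(toks) - n + 1))
    cand, ref = grams(candidate), grams(reference)
    return sum((cand & ref).values()) / sum(ref.values())
```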


## Recall 9 related use cases

Recall is the fraction of the positive examples that were correctly labeled by the model as positive. It can be computed with the equation: Recall = TP / (TP + FN), where TP is the number of true positives and FN is the number of false negatives.
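As a sketch, directly from the counts:

```python
def recall(tp, fn):
    """Fraction of actual positives the model labeled as positive."""
    return tp / (tp + fn)

# e.g. 8 of the 10 actual positives recovered gives 0.8
```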


## Gender-based Illicit Proximity Estimate (GIPE) 3 related use cases

This paper proposes a new bias evaluation metric – Gender-based Illicit Proximity Estimate (GIPE), which measures the extent of undue proximity in word vectors resulting from the presence of gender-based predilections. Experiments based on a suite of...


## Consensus-based Image Description Evaluation (CIDEr) 3 related use cases

The CIDEr (Consensus-based Image Description Evaluation) metric is a way of evaluating the quality of generated textual descriptions of images. The CIDEr metric measures the similarity between a generated caption and the reference captions, and it is based on TF-IDF-weighted n-gram similarity between the candidate caption and the set of human reference captions.


## Word Error Rate (WER) 3 related use cases

Word Error Rate (WER) is a common metric of the performance of an automatic speech recognition (ASR) system.

The general difficulty of measuring the performance of ASR systems lies in the fact that the recognized word sequence can have a different length from the reference word sequence; WER therefore builds on the word-level Levenshtein (edit) distance.
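A dynamic-programming sketch: the word-level edit distance counts substitutions, deletions, and insertions, and WER normalizes it by the reference length:

```python
def wer(reference, hypothesis):
    """(substitutions + deletions + insertions) / words in the reference."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                                 # delete all remaining words
    for j in range(len(h) + 1):
        d[0][j] = j                                 # insert all remaining words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(r)][len(h)] / len(r)
```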


## SacreBLEU 2 related use cases

SacreBLEU provides hassle-free computation of shareable, comparable, and reproducible BLEU scores. Inspired by Rico Sennrich's multi-bleu-detok.perl, it produces the official Workshop on Machine Translation (WMT) scores but works with plain text. It also knows all the standard test sets and handles downloading, processing, and tokenization for you.


## Perplexity 2 related use cases

Given a model and an input text sequence, perplexity measures how likely the model is to generate the input text sequence. This can be used in two main ways:

- to evaluate how well the model has learned the distribution of the text it was trained on;
- to compare how well two or more models fit the same evaluation text.
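Given per-token log-probabilities from the model, perplexity is the exponential of the average negative log-likelihood; a sketch (the log-probabilities below are illustrative):

```python
import math

def perplexity(token_log_probs):
    """exp of the average negative log-likelihood per token."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# a model that assigns every token probability 1/4 has perplexity 4
```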


## Exact Match 2 related use cases

A given predicted string’s exact match score is 1 if it is the exact same as its reference string, and is 0 otherwise.

**Example 1**: The exact match score of prediction “Happy Birthday!” is 0, given its reference is “Happy New Year!”.
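As a sketch:

```python
def exact_match(prediction, reference):
    """1 if the predicted string equals the reference exactly, else 0."""
    return int(prediction == reference)
```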


## Adjusted Rand Index (ARI) 2 related use cases

The Adjusted Rand Index (ARI) is a measure of the similarity between two data clusterings. It is a correction of the Rand Index, a basic measure of similarity between two clusterings that has the disadvantage of being sensitive to chance. The ARI adjusts for the agreement expected by chance, yielding a value near 0 for random labelings and 1 for identical clusterings.
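A sketch computing ARI from the pair-counting contingency quantities (degenerate cases such as a single cluster on both sides are ignored here):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Rand Index corrected for the agreement expected by chance."""
    sum_ij = sum(comb(n, 2) for n in Counter(zip(labels_a, labels_b)).values())
    sum_a = sum(comb(n, 2) for n in Counter(labels_a).values())
    sum_b = sum(comb(n, 2) for n in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(len(labels_a), 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```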


## F-score 2 related use cases

In statistical analysis of binary classification, the F-score or F-measure is a measure of a test's accuracy. It is calculated from the precision and recall of the test, where precision is the number of true positive results divided by the number of all positive results returned by the classifier, and recall is the number of true positive results divided by the number of all samples that should have been identified as positive.
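The F1 score (the balanced F-score) is the harmonic mean of precision and recall; a sketch from the confusion-matrix counts:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# precision = recall = 0.8 gives F1 = 0.8
```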
