Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.


The Character Error Rate (CER) compares, for a given page, the total number of characters (n), including spaces, to the minimum number of character insertions (i), substitutions (s) and deletions (d) required to transform the system's output into the ground truth. The formula to calculate CER is as follows:

CER = ((i + s + d) / n) * 100

CER supports Safety by reducing the likelihood of harmful or misleading outputs caused by transcription errors, which is especially important in domains such as healthcare or legal transcription. It also supports Robustness by providing a measurable indicator of the system's reliability in producing accurate outputs under standard conditions. However, CER does not address the full scope of these objectives: it does not account for performance under adversarial conditions or for broader safety controls.
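As a minimal sketch, the numerator (i + s + d) is the Levenshtein edit distance between the prediction and the ground truth; the function names below are illustrative, not part of any standard library:

```python
def levenshtein(ref: str, hyp: str) -> int:
    # Dynamic-programming edit distance: the minimum number of character
    # insertions, substitutions, and deletions needed to turn hyp into ref.
    m, n = len(ref), len(hyp)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def cer(ground_truth: str, prediction: str) -> float:
    # CER = ((i + s + d) / n) * 100, where n counts every character
    # in the ground truth, including spaces.
    return levenshtein(ground_truth, prediction) / len(ground_truth) * 100
```

For example, `cer("the cat", "the bat")` yields one substitution over seven characters, i.e. roughly 14.3%.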

Related use cases:

Uploaded on Nov 1, 2022

In this paper, we report a large-scale end-to-end language-independent multilingual model for joint automatic speech recognition (ASR) and language identification (LID). This m...





Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.