These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
The Character Error Rate (CER) compares, for a given page, the total number of characters (n), including spaces, to the minimum number of character insertions (i), substitutions (s), and deletions (d) required to transform the system output into the Ground Truth. It is calculated as: CER = [ (i + s + d) / n ] * 100

CER supports Safety by reducing the likelihood of harmful or misleading outputs caused by transcription errors, which is especially important in domains such as healthcare or legal transcription. It also supports Robustness by providing a measurable indicator of a system's reliability in producing accurate outputs under standard conditions. However, CER does not address the full scope of these objectives: it accounts neither for performance under adversarial conditions nor for broader safety controls.
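The formula above can be sketched in a few lines of Python. The edit counts (i + s + d) correspond to the Levenshtein distance between the hypothesis and the ground truth, computed here with a standard dynamic program; the function names are illustrative, and n is taken to be the length of the ground-truth text, including spaces.

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Minimum number of character insertions, substitutions and
    deletions needed to turn hyp into ref (classic DP, O(len*len))."""
    prev = list(range(len(ref) + 1))
    for j, h in enumerate(hyp, start=1):
        curr = [j]
        for i, r in enumerate(ref, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[i] + 1,          # deletion
                            curr[i - 1] + 1,      # insertion
                            prev[i - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """CER = (i + s + d) / n * 100, with n the number of characters
    (including spaces) in the ground-truth reference."""
    if not reference:
        raise ValueError("reference must be non-empty")
    return levenshtein(reference, hypothesis) / len(reference) * 100
```

For example, `cer("hello world", "helo world")` reports one deletion over eleven reference characters, roughly 9.1%. A perfect transcription yields 0.0, while heavy error rates can exceed 100% when the hypothesis requires more edits than there are reference characters.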
Related use cases:
Large-Scale End-to-End Multilingual Speech Recognition and Language Identification with Multi-Task Learning
Uploaded on Nov 1, 2022. In this paper, we report a large-scale end-to-end language-independent multilingual model for joint automatic speech recognition (ASR) and language identification (LID). This m...
