Translation Edit Rate (TER), also called Translation Error Rate, quantifies the minimum number of edit operations (insertions, deletions, substitutions, and shifts of word sequences) required to change a hypothesis translation so that it exactly matches a reference translation, normalized by the average number of words in the reference: TER = (number of edits) / (average number of reference words). Lower scores indicate closer agreement with the reference.
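As a concrete illustration, the sketch below computes TER with the open-source sacrebleu library (`pip install sacrebleu`); the hypothesis and reference sentences are invented for the example and are not taken from this catalogue entry.

```python
# Minimal TER computation with sacrebleu (assumes sacrebleu >= 2.0).
from sacrebleu.metrics import TER

hypotheses = ["the cat sat on mat"]        # system outputs, one string per segment
references = [["the cat sat on the mat"]]  # reference streams, each parallel to the hypotheses

ter = TER()
score = ter.corpus_score(hypotheses, references)
print(score)  # e.g. "TER = 16.67": one insertion over six reference words
```

Here the hypothesis needs a single insertion ("the") to match the six-word reference, so TER is 1/6, which sacrebleu reports on a 0-100 scale.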
Trustworthy AI Relevance
This metric addresses Transparency and Human Agency & Control. TER supports Transparency by providing a quantifiable measure of translation quality, which helps users and developers understand and communicate the reliability of AI-generated translations and contributes to openness about system performance.
Related use cases:
Rethink about the Word-level Quality Estimation for Machine Translation from Human Judgement
Uploaded on Nov 1, 2022. Word-level Quality Estimation (QE) of Machine Translation (MT) aims to find out potential translation errors in the translated sentence without reference. Typically, convention...
About the metric
Objective(s): Transparency, Human Agency & Control