
Automated Evaluation Metric for Terminology Consistency in MT

Publication at Faculty of Mathematics and Physics | 2022

Abstract

The most widely used metrics for machine translation address sentence-level evaluation. However, at least for professional domains such as legal texts, it is crucial to measure how consistently terms are translated throughout the whole text.

This paper introduces an automated metric for evaluating term consistency in machine translation (MT). To demonstrate the metric's performance, we used the Czech-to-English translated texts from the ELITR 2021 agreement corpus and the outputs of the MT systems that took part in the WMT21 News Task.

We present different modes of our evaluation algorithm and interpret the differences between the system rankings produced by sentence-level metrics and by our approach. We also demonstrate that the proposed metric's scores differ significantly from widespread automated metric scores while correlating with human assessment.
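
To make the general idea concrete, here is a minimal sketch of a document-level term-consistency score: for each source term, take the share of its occurrences rendered by the majority translation, then average over terms. This is an illustrative assumption, not the paper's actual algorithm or one of its modes; the function name `term_consistency` and the sample data are hypothetical.

```python
from collections import Counter

def term_consistency(term_translations):
    """Illustrative document-level consistency score (a sketch, not the paper's metric).

    `term_translations` maps each source term to the list of target-side
    renderings observed across the whole document. Each term scores the
    fraction of its occurrences that use its most frequent rendering;
    the document score is the average over terms.
    """
    scores = []
    for term, renderings in term_translations.items():
        counts = Counter(renderings)
        majority_count = counts.most_common(1)[0][1]
        scores.append(majority_count / len(renderings))
    return sum(scores) / len(scores) if scores else 1.0

# Hypothetical example: the Czech term "smlouva" is rendered inconsistently.
observed = {
    "smlouva": ["agreement", "agreement", "contract"],
    "strana": ["party", "party"],
}
print(term_consistency(observed))  # (2/3 + 2/2) / 2 ≈ 0.833
```

Under this toy scoring, a system that always renders a term the same way scores 1.0 for that term, and any deviation lowers the score, which is the property a sentence-level metric cannot capture.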