From COMET to COMES - Can Summary Evaluation Benefit from Translation Evaluation?

Publication at Faculty of Mathematics and Physics | 2022

Abstract

Comet is a recently proposed trainable neural evaluation metric developed to assess the quality of Machine Translation systems. In this paper, we explore the use of Comet for evaluating Text Summarization systems -- despite being trained on multilingual MT outputs, it performs remarkably well in monolingual settings when predicting the quality of summarization outputs.

We introduce Comes, a variant of the model that uses MT data for pre-training and is then trained on annotated summarization outputs. We examine its performance on several datasets with human judgments collected for different notions of summary quality, covering several domains and languages.
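As a minimal sketch of how an MT metric can be repurposed for summary scoring, the Python snippet below maps a (source document, system summary, reference summary) triple onto the (src, mt, ref) inputs expected by the publicly released COMET toolkit (the unbabel-comet package). The checkpoint name, the field mapping, and the CPU-only settings are illustrative assumptions, not the exact configuration used for Comes.

# Sketch: scoring a summary with a reference-based COMET checkpoint.
# Assumes the `unbabel-comet` package (https://github.com/Unbabel/COMET);
# the checkpoint and the field mapping are assumptions for illustration.
from comet import download_model, load_from_checkpoint

# Download and load a released reference-based COMET model.
model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

# Re-use COMET's MT triplet format monolingually: the source document plays
# the role of "src", the system summary of "mt", and a human-written
# reference summary of "ref".
data = [
    {
        "src": "Long source document text ...",
        "mt": "System-generated summary ...",
        "ref": "Human-written reference summary ...",
    }
]

# predict() returns per-segment scores and a corpus-level average.
output = model.predict(data, batch_size=8, gpus=0)
print(output.scores)        # one quality score per summary
print(output.system_score)  # average score over the dataset

Higher scores indicate summaries the model judges closer in quality to the reference; whether such scores track human judgments of summary quality is exactly the question the paper investigates.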