
Evaluating the state-of-the-art of End-to-End Natural Language Generation: The E2E NLG challenge

Publication at Faculty of Mathematics and Physics | 2020

Abstract

This paper provides a comprehensive analysis of the first shared task on End-to-End Natural Language Generation (NLG) and identifies avenues for future research based on the results. This shared task aimed to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena.

Introducing novel automatic and human metrics, we compare 62 systems submitted by 17 institutions, covering a wide range of approaches: machine learning architectures, the majority implementing sequence-to-sequence (seq2seq) models, as well as systems based on grammatical rules and templates. Seq2seq-based systems demonstrated great potential for NLG in the challenge.

We find that seq2seq systems generally score high in terms of word-overlap metrics and human evaluations of naturalness, with the winning Slug system (Juraska et al., 2018) being seq2seq-based. However, vanilla seq2seq models often fail to correctly express a given meaning representation if they lack a strong semantic control mechanism applied during decoding.
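
For illustration only, a minimal sketch of how a word-overlap metric such as BLEU can be computed for one system output against multiple human references; the sacrebleu library and the example texts are assumptions made for this sketch, not taken from the paper or the challenge data.

    import sacrebleu

    # Hypothetical system output and human references for a single
    # restaurant-domain input (texts invented here for illustration).
    system_outputs = ["The Eagle is a cheap coffee shop near Burger King."]
    references = [
        ["The Eagle is a low-priced coffee shop located near Burger King."],
        ["Near Burger King, The Eagle serves coffee at low prices."],
    ]

    # corpus_bleu takes a list of hypotheses and a list of reference streams,
    # each stream aligned one-to-one with the hypotheses.
    bleu = sacrebleu.corpus_bleu(system_outputs, references)
    print(f"BLEU: {bleu.score:.2f}")

In the challenge itself, automatic scores of this kind were complemented by crowdsourced human evaluations, such as ratings of naturalness.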