The aim of the workshop is to offer a platform for discussions on the status and the future of the evaluation of Natural Language Generation (NLG) systems. The workshop invited archival papers and abstracts on NLG evaluation, including best practices for human evaluation, qualitative studies, and cognitive bias in human evaluations.
The workshop received twelve submissions. Ten papers and abstracts were accepted and presented as posters at the workshop.
This proceedings volume contains the five archival papers.