Earlier research has shown that few studies in Natural Language Generation (NLG) evaluate their system outputs using an error analysis, despite known limitations of automatic evaluation metrics and human ratings. This position paper takes the stance that error analyses should be encouraged, and discusses several ways to do so.
This paper is based not only on our shared experience as authors, but also on a survey we distributed as a means of public consultation. We provide an overview of existing barriers to carrying out error analyses and propose changes to improve error reporting in the NLG literature.