
HaVQA: A Dataset for Visual Question Answering and Multimodal Research in Hausa Language

Publication at Faculty of Mathematics and Physics | 2023

Abstract

This paper presents HaVQA, the first multimodal dataset for visual question answering (VQA) in the Hausa language. The dataset was created by manually translating 6,022 English question-answer pairs associated with 1,555 unique images from the Visual Genome dataset.

As a result, since each question and each answer yields one sentence pair, the dataset provides 12,044 gold-standard English-Hausa parallel sentences, translated in a fashion that guarantees their semantic match with the corresponding visual information. We conducted several baseline experiments on the dataset, including visual question answering, visual question elicitation, and text-only and multimodal machine translation.
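To make the dataset's structure concrete, the following is a minimal Python sketch of how one HaVQA record might be represented. The class name, field names, and example values are illustrative assumptions, not the authors' released schema; only the counts (6,022 QA pairs yielding 12,044 parallel sentences) come from the abstract.

```python
from dataclasses import dataclass


# Hypothetical record layout: one English-Hausa QA pair tied to a
# Visual Genome image. Field names are assumptions for illustration.
@dataclass
class HaVQARecord:
    image_id: int      # Visual Genome image identifier
    question_en: str   # original English question
    answer_en: str     # original English answer
    question_ha: str   # manual Hausa translation of the question
    answer_ha: str     # manual Hausa translation of the answer


# One illustrative record; the Hausa strings are placeholders, not real data.
example = HaVQARecord(
    image_id=1,
    question_en="What color is the car?",
    answer_en="Red",
    question_ha="<Hausa translation of the question>",
    answer_ha="<Hausa translation of the answer>",
)


def parallel_sentences(rec: HaVQARecord) -> list[tuple[str, str]]:
    """Each record contributes two English-Hausa sentence pairs
    (question + answer), so 6,022 records yield 12,044 pairs."""
    return [
        (rec.question_en, rec.question_ha),
        (rec.answer_en, rec.answer_ha),
    ]


print(parallel_sentences(example))
```

A layout like this supports both of the abstract's use cases: the paired `*_en`/`*_ha` fields serve the machine translation baselines directly, while the `image_id` link back to Visual Genome supports the VQA and question-elicitation experiments.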