
Adding Visual Information to Improve Multimodal Machine Translation for Low-Resource Language

Publication

Abstract

Machine translation makes it easier for people to communicate across languages. Multimodal machine translation is an important research direction within the field: it uses additional feature information, such as images and audio, to help translation models produce higher-quality target-language output.

However, the vast majority of current research has been conducted on corpora for widely used languages such as English, French, and German; far less work addresses low-resource languages, leaving their translation quality relatively behind. This paper selects English-Hindi and English-Hausa corpora to study low-resource language translation.

We use different models to extract image feature information and fuse the resulting image features with the text representation during the encoding stage of translation, so that the visual features supply additional context and assist the translation model. Compared with text-only machine translation, our experimental results show an improvement of 3 BLEU points on the English-Hindi dataset and 0.47 BLEU points on the English-Hausa dataset.
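The abstract does not specify the exact fusion mechanism, so the following is only a minimal sketch of one common strategy for this kind of model: projecting a global image feature into the text embedding space and prepending it as an extra "token" before a Transformer encoder. The class name, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: fusing a CNN image feature into a text encoder.
# All names and sizes below are assumptions for illustration only.
import torch
import torch.nn as nn

class ImageFusedEncoder(nn.Module):
    def __init__(self, vocab_size, d_model=512, img_feat_dim=2048, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Map the CNN image feature (e.g., a 2048-d ResNet pooled output)
        # into the same space as the token embeddings.
        self.img_proj = nn.Linear(img_feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, token_ids, img_feat):
        # token_ids: (batch, seq_len); img_feat: (batch, img_feat_dim)
        tokens = self.embed(token_ids)              # (batch, seq_len, d_model)
        img = self.img_proj(img_feat).unsqueeze(1)  # (batch, 1, d_model)
        fused = torch.cat([img, tokens], dim=1)     # prepend image "token"
        return self.encoder(fused)

enc = ImageFusedEncoder(vocab_size=32000)
out = enc(torch.randint(0, 32000, (2, 10)), torch.randn(2, 2048))
print(out.shape)  # torch.Size([2, 11, 512])
```

A decoder attending over this fused sequence can then draw on the visual context as well as the source text; other fusion variants (e.g., cross-attention over spatial feature maps) follow the same idea.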

In addition, we analyze how the image features extracted by different feature extraction models affect the translation results. Different models attend to different regions of an image, and the ResNet model extracts richer feature information than the VGG model, making it more effective for translation.
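As a hedged illustration of the comparison above, the snippet below extracts features with ResNet and VGG backbones via torchvision. The abstract does not name the exact variants, so ResNet-50 and VGG-16 are assumptions; note the two backbones also yield differently shaped features (a pooled global vector vs. spatial convolutional maps).

```python
# Sketch under assumptions: ResNet-50 vs. VGG-16 as feature extractors.
import torch
from torchvision import models

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
resnet.eval()
vgg.eval()

# Drop the classification heads to obtain feature representations.
resnet_backbone = torch.nn.Sequential(*list(resnet.children())[:-1])  # pooled 2048-d
vgg_backbone = vgg.features                                           # conv feature maps

img = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed input image
with torch.no_grad():
    resnet_feat = resnet_backbone(img).flatten(1)  # (1, 2048) global feature
    vgg_feat = vgg_backbone(img)                   # (1, 512, 7, 7) spatial maps
print(resnet_feat.shape, vgg_feat.shape)
```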