
You Can Have Your Data and Balance It Too: Towards Balanced and Efficient Multilingual Models

Publication at Matematicko-fyzikální fakulta | 2023

Abstract

Multilingual models have been widely used for cross-lingual transfer to low-resource languages. However, performance on these languages is hindered by their under-representation in the pretraining data.

To alleviate this problem, we propose a novel multilingual training technique based on teacher-student knowledge distillation. In this setting, we use monolingual teacher models, each optimized for its own language.
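For illustration only, the sketch below shows one common form of teacher-student distillation in PyTorch, where the language of each batch selects the corresponding monolingual teacher. The soft-target KL loss, the `teachers` mapping, the batch fields, and the assumption of a shared vocabulary between teachers and student are assumptions of this sketch, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Standard soft-target distillation loss (KL divergence between
    temperature-softened distributions); not necessarily the exact loss
    used in the paper."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2, as in the usual Hinton-style distillation setup.
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

def train_step(student, teachers, batch, optimizer):
    """One training step: the batch's language picks its monolingual teacher.
    Assumes HuggingFace-style models returning `.logits` and a vocabulary
    shared by student and teachers (hypothetical simplification)."""
    teacher = teachers[batch["lang"]]  # e.g. {"cs": cs_teacher, "sw": sw_teacher, ...}
    with torch.no_grad():
        teacher_logits = teacher(batch["input_ids"]).logits
    student_logits = student(batch["input_ids"]).logits
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```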

We use these teachers along with balanced (sub-sampled) data to distill the teachers' knowledge into a single multilingual student. Our method outperforms standard training methods on low-resource languages and retains performance on high-resource languages while using the same amount of data.
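As a minimal sketch of what "balanced (sub-sampled) data" could look like in practice, the hypothetical helper below draws an equal share of a fixed sentence budget from each language's corpus, sub-sampling high-resource languages down; the paper's actual sampling scheme may differ.

```python
import random

def balanced_subsample(corpora, budget):
    """Sub-sample so each language contributes an equal share of a fixed
    total budget (hypothetical helper; `corpora` maps language code to a
    list of sentences)."""
    per_lang = budget // len(corpora)
    sample = []
    for lang, sentences in corpora.items():
        k = min(per_lang, len(sentences))  # low-resource corpora may be smaller than the share
        sample.extend((lang, s) for s in random.sample(sentences, k))
    random.shuffle(sample)
    return sample
```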

If applied widely, our approach can increase the representation of low-resource languages in NLP systems.