LEVERAGING CROSS-LINGUAL TRANSFER LEARNING FOR LOW-RESOURCE NATURAL LANGUAGE PROCESSING
Abstract
The field of natural language processing (NLP) is growing quickly, yet many languages remain under-represented because of a dearth of labelled data. This study investigates the transfer of knowledge from resource-rich to low-resource languages through cross-lingual transfer learning as a way to overcome this difficulty. We evaluate multilingual models such as mBERT and XLM-R on tasks including machine translation, named entity recognition, and sentiment analysis. These models are pre-trained on a wide range of languages and then fine-tuned on task-specific datasets from low-resource languages. The results show significant gains, particularly on tasks with little labelled data and in languages closely related to those used in pre-training. These findings highlight the potential of multilingual models to close performance gaps across languages. Overall, this work demonstrates the effectiveness of cross-lingual transfer learning in low-resource settings and offers practical insights for designing inclusive NLP systems that better reflect global linguistic variety.
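A minimal sketch of the transfer-learning setup described above: a multilingual encoder (here XLM-R) that has already been pre-trained on many languages is fine-tuned on a small task-specific dataset in a low-resource language. The Hugging Face Transformers toolkit, the toy sentiment corpus, the label count, and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: fine-tuning a pre-trained multilingual model (XLM-R) on a
# tiny task-specific dataset, standing in for a low-resource sentiment corpus.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)
from datasets import Dataset

# Toy labelled examples; a real study would use a target-language dataset.
train_data = Dataset.from_dict({
    "text": ["Example positive sentence.", "Example negative sentence."],
    "label": [1, 0],
})

model_name = "xlm-roberta-base"  # multilingual encoder pre-trained on ~100 languages
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    # The shared sub-word vocabulary across pre-training languages is what
    # allows knowledge to transfer to the low-resource target language.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_data = train_data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="xlmr-lowres-sentiment",  # hypothetical output directory
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
)

# Fine-tune the multilingual encoder on the task-specific data.
Trainer(model=model, args=args, train_dataset=train_data).train()
```

The same pattern applies to the other tasks mentioned in the abstract (e.g. named entity recognition via `AutoModelForTokenClassification`), with only the head and dataset changing.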



