Fine-tuning large language models to translate: Will a touch of noisy data in misaligned languages suffice?

Dawei Zhu, Pinzhen Chen, Miaoran Zhang, Barry Haddow, Xiaoyu Shen, Dietrich Klakow

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Traditionally, success in multilingual machine translation can be attributed to three key factors in training data: large volume, diverse translation directions, and high quality. In the current practice of fine-tuning large language models (LLMs) for translation, we revisit the importance of these factors. We find that LLMs display strong translation capability after being fine-tuned on as few as 32 parallel sentences, and that fine-tuning on a single translation direction enables translation in multiple directions. However, the choice of direction is critical: fine-tuning LLMs with only English on the target side can lead to task misinterpretation, which hinders translation into non-English languages. Problems also arise when noisy synthetic data is placed on the target side, especially when the target language is well represented in LLM pre-training. Yet interestingly, synthesized data in an under-represented language has a less pronounced effect. Our findings suggest that when adapting LLMs to translation, the requirement for data quantity can be relaxed, but careful consideration is still crucial to prevent an LLM from exploiting unintended data biases.
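To make the setup concrete, below is a minimal sketch of the kind of fine-tuning the abstract describes: adapting a decoder-only LLM to translation with a tiny parallel corpus in a single direction. This is not the authors' released code; the model name, prompt template, and hyperparameters are illustrative assumptions, written against the Hugging Face transformers API.

# A minimal sketch, not the paper's released code: model name, prompt
# template, and hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumption: any decoder-only LLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# As few as 32 parallel sentences suffice, per the abstract; a single
# translation direction (here German -> English) is shown.
parallel = [
    ("Guten Morgen.", "Good morning."),
    # ... 31 more (source, target) pairs
]

def format_example(src, tgt):
    # Hypothetical prompt template; the paper's exact template may differ.
    return (f"Translate German to English.\nGerman: {src}\n"
            f"English: {tgt}{tokenizer.eos_token}")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a few passes over the tiny corpus
    for src, tgt in parallel:
        batch = tokenizer(format_example(src, tgt), return_tensors="pt")
        # Standard causal-LM objective; labels are shifted internally.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

For brevity the loss here covers the prompt tokens as well; masking the loss over the source side so that only target tokens are supervised is a common refinement.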
Original language: English
Title of host publication: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Publisher: Association for Computational Linguistics
Pages: 388-409
Number of pages: 22
ISBN (Electronic): 9798891761643
Publication status: Published - 16 Nov 2024
Event: 2024 Conference on Empirical Methods in Natural Language Processing - Hyatt Regency Miami Hotel, Miami, United States
Duration: 12 Nov 2024 - 16 Nov 2024
https://2024.emnlp.org/

Conference

Conference: 2024 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2024
Country/Territory: United States
City: Miami
Period: 12/11/24 - 16/11/24
Internet address: https://2024.emnlp.org/
