Abstract / Description of output
Traditionally, success in multilingual machine translation can be attributed to three key factors in training data: large volume, diverse translation directions, and high quality. In the current practice of fine-tuning large language models (LLMs) for translation, we revisit the importance of these factors. We find that LLMs display strong translation capability after being fine-tuned on as few as 32 parallel sentences, and that fine-tuning on a single translation direction enables translation in multiple directions. However, the choice of direction is critical: fine-tuning LLMs with only English on the target side can lead to task misinterpretation, which hinders translation into non-English languages. Problems also arise when noisy synthetic data is placed on the target side, especially when the target language is well-represented in LLM pre-training. Yet interestingly, synthesized data in an under-represented language has a less pronounced effect. Our findings suggest that when adapting LLMs to translation, the requirement on data quantity can be eased, but careful consideration is still crucial to prevent an LLM from exploiting unintended data biases.
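To make the setup described in the abstract concrete, the sketch below shows one way to fine-tune a causal LLM on a tiny parallel corpus in a single translation direction, in the spirit of the 32-sentence setting. It is a minimal illustration, not the paper's released code: the model name, prompt template, placeholder sentence pairs, and hyperparameters are all assumptions for demonstration only.

```python
# Minimal sketch: fine-tuning a causal LM on a handful of parallel sentences
# for one translation direction (German -> English). Illustrative only; the
# paper's actual models, data, and hyperparameters may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-350m"  # small open model for illustration; the paper studies larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# Hypothetical parallel data: in the paper's setting this would be ~32 real sentence pairs.
pairs = [
    ("Das Haus ist alt.", "The house is old."),
    ("Ich lese ein Buch.", "I am reading a book."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for epoch in range(3):
    for src, tgt in pairs:
        # Instruction-style prompt with the target sentence appended, so the
        # language-modeling loss covers the translation tokens.
        text = f"Translate German to English.\nGerman: {src}\nEnglish: {tgt}"
        batch = tokenizer(text, return_tensors="pt")
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

At inference time, the same prompt would be supplied without the target sentence and the model's continuation taken as the translation; the abstract's finding is that even such a small, single-direction fine-tuning set can elicit translation in multiple directions, provided the target-side language choice does not bias the model toward misinterpreting the task.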
Original language | English |
---|---|
Title of host publication | Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing |
Editors | Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen |
Publisher | Association for Computational Linguistics |
Pages | 388-409 |
Number of pages | 22 |
ISBN (Electronic) | 9798891761643 |
Publication status | Published - 16 Nov 2024 |
Event | 2024 Conference on Empirical Methods in Natural Language Processing, Hyatt Regency Miami Hotel, Miami, United States; Duration: 12 Nov 2024 → 16 Nov 2024; https://2024.emnlp.org/ |
Conference
Conference | 2024 Conference on Empirical Methods in Natural Language Processing |
---|---|
Abbreviated title | EMNLP2024 |
Country/Territory | United States |
City | Miami |
Period | 12/11/24 → 16/11/24 |
Internet address | https://2024.emnlp.org/ |