On Romanization for Model Transfer Between Scripts in Neural Machine Translation

Chantal Amrhein, Rico Sennrich

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Transfer learning is a popular strategy to improve the quality of low-resource machine translation. For an optimal transfer of the embedding layer, the child and parent model should share a substantial part of the vocabulary. This is not the case when transferring to languages with a different script. We explore the benefit of romanization in this scenario. Our results show that romanization entails information loss and is thus not always superior to simpler vocabulary transfer methods, but can improve the transfer between related languages with different scripts. We compare two romanization tools and find that they exhibit different degrees of information loss, which affects translation quality. Finally, we extend romanization to the target side, showing that this can be a successful strategy when coupled with a simple deromanization model.
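
The abstract does not name the two romanization tools compared. Purely as an illustrative sketch of the preprocessing step it describes, the snippet below romanizes non-Latin text with ICU transliteration via the PyICU package; the transform rule and the example strings are assumptions for illustration, not the paper's setup. The lossy "Latin-ASCII" stage also hints at the kind of information loss the abstract refers to, since distinct source characters can collapse to the same Latin string.

```python
# Minimal, illustrative romanization sketch using ICU transliteration
# (PyICU package). Not necessarily the tool or configuration used in the
# paper; transform rule and example sentences are assumptions.
from icu import Transliterator

# "Any-Latin" maps characters from arbitrary scripts to Latin;
# "Latin-ASCII" additionally strips diacritics, making the mapping
# irreversible (an example of information loss).
romanize = Transliterator.createInstance("Any-Latin; Latin-ASCII")

examples = [
    "Привет, мир",    # Russian (Cyrillic)
    "नमस्ते दुनिया",    # Hindi (Devanagari)
]

for text in examples:
    print(text, "->", romanize.transliterate(text))
```
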
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2020
Publisher: Association for Computational Linguistics
Pages: 2461–2469
Number of pages: 9
ISBN (Print): 978-1-952148-90-3
DOIs
Publication status: Published - 16 Nov 2020
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - Virtual conference
Duration: 16 Nov 2020 – 20 Nov 2020
https://2020.emnlp.org/

Conference

Conference: The 2020 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2020
City: Virtual conference
Period: 16/11/20 – 20/11/20
Internet address: https://2020.emnlp.org/
