Language Model Prior for Low-Resource Neural Machine Translation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The scarcity of large parallel corpora is an important obstacle for neural machine translation. A common solution is to exploit the knowledge of language models (LM) trained on abundant monolingual data. In this work, we propose a novel approach to incorporate a LM as prior in a neural translation model (TM). Specifically, we add a regularization term, which pushes the output distributions of the TM to be probable under the LM prior, while avoiding wrong predictions when the TM “disagrees” with the LM. This objective relates to knowledge distillation, where the LM can be viewed as teaching the TM about the target language. The proposed approach does not compromise decoding speed, because the LM is used only at training time, unlike previous work that requires it during inference. We present an analysis of the effects that different methods have on the distributions of the TM. Results on two low-resource machine translation datasets show clear improvements even with limited monolingual data.
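
Below is a minimal PyTorch sketch of the kind of objective the abstract describes: token-level cross-entropy on the parallel data plus a weighted KL term that pushes the translation model's per-token output distribution towards that of a frozen language model. The KL direction, the temperature `tau`, the weight `lam`, the tensor shapes, and the function name `lm_prior_loss` are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def lm_prior_loss(tm_logits, lm_logits, targets, pad_id, lam=0.5, tau=2.0):
    """Cross-entropy on the parallel data plus a KL regulariser that pushes
    the TM's per-token output distribution towards the frozen LM's
    distribution over the target vocabulary (illustrative sketch).

    tm_logits, lm_logits: (batch, seq_len, vocab) unnormalised scores
    targets:              (batch, seq_len) gold target-side token ids
    """
    vocab = tm_logits.size(-1)

    # Standard token-level cross-entropy against the reference translation.
    ce = F.cross_entropy(
        tm_logits.reshape(-1, vocab), targets.reshape(-1),
        ignore_index=pad_id,
    )

    # KL(P_TM || P_LM): penalises the TM for putting mass on tokens the LM
    # considers improbable; the temperature tau softens both distributions
    # so the TM is not forced to copy the LM when the two "disagree".
    tm_log_probs = F.log_softmax(tm_logits / tau, dim=-1)
    lm_log_probs = F.log_softmax(lm_logits.detach() / tau, dim=-1)  # LM frozen
    kl = (tm_log_probs.exp() * (tm_log_probs - lm_log_probs)).sum(dim=-1)

    # Average the KL over non-padding target positions only.
    mask = targets.ne(pad_id).float()
    kl = (kl * mask).sum() / mask.sum()

    return ce + lam * kl
```

Because the LM only contributes a training-time regularisation term, nothing about the TM's architecture or decoding procedure changes at inference time, which is the source of the unchanged decoding speed noted in the abstract.
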
Original language: English
Title of host publication: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Publisher: Association for Computational Linguistics (ACL)
Pages: 7622–7634
Number of pages: 13
ISBN (Print): 978-1-952148-60-6
DOIs
Publication status: Published - 20 Nov 2020
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - Online
Duration: 16 Nov 2020 – 20 Nov 2020
https://2020.emnlp.org/

Conference

Conference: The 2020 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2020
Period: 16/11/20 – 20/11/20
Internet address: https://2020.emnlp.org/
