Monolingual Adapters for Zero-Shot Neural Machine Translation

Jerin Philip, Alexandre Berard, Matthias Gallé, Laurent Besacier

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We propose a novel adapter layer formalism for adapting multilingual models. These adapters are more parameter-efficient than existing adapter layers while achieving as good or better performance. The layers are specific to one language (as opposed to bilingual adapters), which allows composing them and generalizing to unseen language pairs. In this zero-shot setting, they obtain a median improvement of +2.77 BLEU points over a strong 20-language multilingual Transformer baseline trained on TED talks.
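The abstract compresses the core architectural idea: one small adapter per language rather than per language pair, with source- and target-language adapters composed at inference time for unseen pairs. Below is a minimal PyTorch sketch of what such a setup could look like, assuming a standard bottleneck adapter design; the Adapter module, its dimensions, the language list, and the usage code are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: LayerNorm -> down-projection -> ReLU -> up-projection,
    added residually to the input. This is a common adapter design; the paper's
    exact formalism may differ."""
    def __init__(self, d_model: int, d_bottleneck: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(self.norm(x))))

# One adapter per language (monolingual), not per language pair. For a pair
# never seen in training (e.g. de->ru), compose the source-language adapter
# on the encoder side with the target-language adapter on the decoder side;
# this is the zero-shot composition the abstract describes.
languages = ["en", "fr", "de", "ru"]  # illustrative subset of the 20 languages
d_model, d_bottleneck = 512, 64       # assumed dimensions
encoder_adapters = nn.ModuleDict({l: Adapter(d_model, d_bottleneck) for l in languages})
decoder_adapters = nn.ModuleDict({l: Adapter(d_model, d_bottleneck) for l in languages})

# Hypothetical use inside a Transformer layer's forward pass:
h = torch.randn(2, 10, d_model)   # (batch, time, d_model) encoder states
h = encoder_adapters["de"](h)     # apply the source-language adapter
# ... decoder layers would apply decoder_adapters["ru"] analogously ...
```

Because each adapter is tied to a single language, the number of adapter parameters grows linearly with the number of languages rather than quadratically with the number of pairs, which is the parameter-efficiency argument made in the abstract.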
Original language: English
Title of host publication: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Place of Publication: Online
Publisher: Association for Computational Linguistics
Pages: 4465-4470
Number of pages: 6
ISBN (Electronic): 978-1-952148-60-6
Publication status: Published - 16 Nov 2020
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - Online
Duration: 16 Nov 2020 - 20 Nov 2020
https://2020.emnlp.org/

Conference

Conference: The 2020 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2020
Period: 16/11/20 - 20/11/20
Internet address: https://2020.emnlp.org/

