Linguistic Input Features Improve Neural Machine Translation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder–decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English↔German and English→Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.
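The architectural change the abstract describes is a factored input: each source token carries several parallel feature streams (word, morphology, POS tag, dependency label), each stream gets its own embedding table, and the per-feature embeddings are concatenated into a single input vector of the baseline size. The authors' own open-source implementation is the Nematus system; the snippet below is only a minimal PyTorch sketch of the idea, and the class name, vocabulary sizes, and embedding dimensions are all hypothetical.

```python
import torch
import torch.nn as nn

class FactoredEmbedding(nn.Module):
    """Embed each input factor (word, POS tag, dependency label, ...)
    separately and concatenate the results into one input vector.
    All sizes here are illustrative, not the paper's settings."""

    def __init__(self, vocab_sizes, embed_dims):
        super().__init__()
        assert len(vocab_sizes) == len(embed_dims)
        self.embeddings = nn.ModuleList(
            nn.Embedding(v, d) for v, d in zip(vocab_sizes, embed_dims)
        )

    def forward(self, factors):
        # factors: (num_factors, batch, seq_len) tensor of integer IDs,
        # one slice per input feature
        parts = [emb(ids) for emb, ids in zip(self.embeddings, factors)]
        # Concatenate along the feature dimension; the per-factor sizes
        # are chosen so the total equals the baseline embedding size.
        return torch.cat(parts, dim=-1)

# Hypothetical usage: word IDs plus POS-tag and dependency-label IDs,
# totalling a 512-dimensional input embedding (480 + 16 + 16).
layer = FactoredEmbedding(vocab_sizes=[50_000, 46, 40],
                          embed_dims=[480, 16, 16])
factors = torch.stack([
    torch.randint(0, 50_000, (2, 6)),  # word IDs
    torch.randint(0, 46, (2, 6)),      # POS-tag IDs
    torch.randint(0, 40, (2, 6)),      # dependency-label IDs
])
x = layer(factors)
print(x.shape)  # torch.Size([2, 6, 512])
```

The design point is that extra features only widen the encoder's input embedding; the rest of the encoder–decoder and its attention mechanism are untouched, which is why the features are cheap to incorporate on top of a baseline system.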
Original language: English
Title of host publication: Proceedings of the First Conference on Machine Translation: Volume 1, Research Papers
Place of Publication: Berlin, Germany
Publisher: Association for Computational Linguistics (ACL)
Pages: 83-91
Number of pages: 9
ISBN (Electronic): 978-1-945626-10-4
DOIs
Publication status: Published - 12 Aug 2016
Event: First Conference on Machine Translation - Berlin, Germany
Duration: 11 Aug 2016 – 12 Aug 2016
http://www.statmt.org/wmt16/

Conference

Conference: First Conference on Machine Translation
Abbreviated title: WMT16
Country/Territory: Germany
City: Berlin
Period: 11/08/16 – 12/08/16
Internet address: http://www.statmt.org/wmt16/
