Input Combination Strategies for Multi-Source Transformer Decoder

Jindřich Libovický, Jindřich Helcl, David Mareček

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways. This topic has been thoroughly studied for recurrent architectures. In this paper, we extend the previous work to the encoder-decoder attention in the Transformer architecture. We propose four different input combination strategies for the encoder-decoder attention: serial, parallel, flat, and hierarchical. We evaluate our methods on the tasks of multimodal translation and translation with multiple source languages. The experiments show that the models are able to use multiple sources and improve over single-source baselines.
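To illustrate how two of these strategies can be realized in a Transformer decoder layer, below is a minimal sketch of the serial and parallel combinations of encoder-decoder attention over two source encoders. The use of PyTorch, the class name MultiSourceCrossAttention, and all dimensions are illustrative assumptions; this is not the authors' implementation, and the flat and hierarchical strategies are not shown.

```python
# Minimal sketch (not the authors' code) of serial vs. parallel combination
# of encoder-decoder attention over two source encoders in a Transformer
# decoder layer. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class MultiSourceCrossAttention(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, mode: str = "serial"):
        super().__init__()
        assert mode in {"serial", "parallel"}
        self.mode = mode
        # One cross-attention sub-layer per source encoder.
        self.attn_a = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(d_model)
        self.norm_b = nn.LayerNorm(d_model)

    def forward(self, queries, enc_a, enc_b):
        if self.mode == "serial":
            # Serial: attend to the first encoder, then feed the result as
            # queries to a second attention over the second encoder.
            ctx, _ = self.attn_a(queries, enc_a, enc_a)
            x = self.norm_a(queries + ctx)
            ctx, _ = self.attn_b(x, enc_b, enc_b)
            return self.norm_b(x + ctx)
        # Parallel: attend to both encoders with the same queries and
        # sum the resulting context vectors before the residual connection.
        ctx_a, _ = self.attn_a(queries, enc_a, enc_a)
        ctx_b, _ = self.attn_b(queries, enc_b, enc_b)
        return self.norm_a(queries + ctx_a + ctx_b)


# Usage: decoder states attend to a textual and a second (e.g. visual) encoder.
dec = torch.randn(2, 7, 512)     # (batch, target length, d_model)
src_a = torch.randn(2, 11, 512)  # first source encoder states
src_b = torch.randn(2, 5, 512)   # second source encoder states
out = MultiSourceCrossAttention(mode="parallel")(dec, src_a, src_b)
print(out.shape)  # torch.Size([2, 7, 512])
```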
Original language: English
Title of host publication: Proceedings of the Third Conference on Machine Translation: Research Papers
Place of Publication: Brussels, Belgium
Publisher: Association for Computational Linguistics
Pages: 253-260
Number of pages: 8
ISBN (Electronic): 978-1-948087-81-0
DOIs
Publication status: Published - 31 Oct 2018
Event: EMNLP 2018 Third Conference on Machine Translation (WMT18) - Brussels, Belgium
Duration: 31 Oct 2018 - 1 Nov 2018
http://www.statmt.org/wmt18/

Workshop

Workshop: EMNLP 2018 Third Conference on Machine Translation (WMT18)
Abbreviated title: WMT18
Country: Belgium
City: Brussels
Period: 31/10/18 - 1/11/18
Internet address: http://www.statmt.org/wmt18/
