CUNI System for the WMT17 Multimodal Translation Task

Jindřich Helcl, Jindřich Libovický

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we describe our submissions to the WMT17 Multimodal Translation Task. For Task 1 (multimodal translation), our best-scoring system is a purely textual neural translation of the source image caption to the target language. The main feature of the system is the use of additional data acquired by selecting similar sentences from parallel corpora and by synthesizing data with back-translation. For Task 2 (cross-lingual image captioning), our best submitted system generates an English caption which is then translated into the target language by the best system from Task 1. We also present negative results based on ideas that we believe have the potential to yield improvements but did not prove useful in our particular setup.
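As context for the data-synthesis step mentioned in the abstract, the following is a minimal sketch of back-translation: target-language monolingual sentences are translated back into the source language by a reverse model, and each machine-translated source side is paired with its authentic target side to form synthetic parallel data. The function names and the toy reverse model below are hypothetical placeholders, not the authors' implementation.

    # Minimal sketch of back-translation data synthesis (hypothetical names,
    # not the authors' code).
    def back_translate(monolingual_target_sentences, translate_target_to_source):
        """Create synthetic parallel data from target-language monolingual text."""
        synthetic_pairs = []
        for target_sentence in monolingual_target_sentences:
            # Translate the target sentence back into the source language.
            synthetic_source = translate_target_to_source(target_sentence)
            # Pair the machine-translated source with the authentic target side.
            synthetic_pairs.append((synthetic_source, target_sentence))
        return synthetic_pairs

    if __name__ == "__main__":
        # Toy stand-in for a target-to-source NMT system.
        toy_reverse_model = lambda sentence: "[back-translated] " + sentence
        print(back_translate(["Ein Hund läuft im Park."], toy_reverse_model))

The synthetic pairs would then be mixed with the authentic parallel corpus when training the source-to-target translation model.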
Original language: English
Title of host publication: Proceedings of the Second Conference on Machine Translation
Place of publication: Copenhagen, Denmark
Publisher: Association for Computational Linguistics
Pages: 450-457
Number of pages: 8
ISBN (electronic): 978-1-945626-96-8
Publication status: Published - 7 Sep 2017
Event: Second Conference on Machine Translation - Copenhagen, Denmark
Duration: 7 Sep 2017 - 8 Sep 2017
http://www.statmt.org/wmt17/

Conference

Conference: Second Conference on Machine Translation
Abbreviated title: WMT17
Country: Denmark
City: Copenhagen
Period: 7/09/17 - 8/09/17
Internet address: http://www.statmt.org/wmt17/
