Abstract
We present a doubly-attentive multimodal machine translation model. Our model learns to attend to the source language and to spatial-preserving CONV5,4 visual features through two separate attention mechanisms in a neural translation model. In image description translation experiments (Task 1), we find an improvement of 2.3 Meteor points compared to initialising the hidden state of the decoder with only the FC7 features, and 2.9 Meteor points compared to a text-only neural machine translation baseline, confirming the usefulness of attending to the CONV5,4 features.
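The core idea of the abstract — two separate soft-attention mechanisms, one over source-language annotations and one over spatial CONV5,4 visual features, both driven by the decoder state — can be sketched as below. This is a minimal illustration, not the paper's implementation: all dimensions, parameter shapes, and the concatenation-based fusion are assumptions chosen for clarity.

```python
import numpy as np

def soft_attention(query, keys, W_q, W_k, v):
    """Additive soft attention: score each key annotation against the
    decoder query, softmax the scores, and return a weighted context."""
    scores = np.tanh(query @ W_q + keys @ W_k) @ v   # one score per key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax over keys
    return weights @ keys                             # context vector

# Illustrative dimensions (not from the paper): decoder state of size 8,
# 5 source-word annotations, and a 14x14 CONV5,4 feature map flattened
# to 196 spatial locations, each projected to size 8.
rng = np.random.default_rng(0)
d = 8
h_t = rng.standard_normal(d)                      # decoder hidden state
src_annotations = rng.standard_normal((5, d))     # textual encoder states
visual_features = rng.standard_normal((196, d))   # spatial visual features

def make_params():
    # Separate (random, untrained) parameters for each attention mechanism.
    return (rng.standard_normal((d, d)),
            rng.standard_normal((d, d)),
            rng.standard_normal(d))

# Two *separate* attention mechanisms, one per modality.
ctx_text = soft_attention(h_t, src_annotations, *make_params())
ctx_img = soft_attention(h_t, visual_features, *make_params())

# The decoder would then condition its next-word prediction on both
# contexts; concatenation is one plausible fusion strategy.
fused = np.concatenate([ctx_text, ctx_img])
print(fused.shape)
```

Because attention is computed over the 196 spatial locations of the CONV5,4 map rather than a single FC7 vector, the decoder can focus on different image regions at each decoding step, which is what the reported Meteor gains are attributed to.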
Original language | English |
---|---|
Title of host publication | Proceedings of the First Conference on Machine Translation, WMT 2016, colocated with ACL 2016, August 11-12, Berlin, Germany |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 634-638 |
Number of pages | 5 |
DOIs | |
Publication status | Published - 12 Aug 2016 |
Event | First Conference on Machine Translation - Berlin, Germany. Duration: 11 Aug 2016 → 12 Aug 2016. http://www.statmt.org/wmt16/ |
Conference
Conference | First Conference on Machine Translation |
---|---|
Abbreviated title | WMT16 |
Country/Territory | Germany |
City | Berlin |
Period | 11/08/16 → 12/08/16 |
Internet address | http://www.statmt.org/wmt16/ |