Has Machine Translation Achieved Human Parity? A Case for Document-level Evaluation

Samuel Läubli, Rico Sennrich, Martin Volk

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recent research suggests that neural machine translation achieves parity with professional human translation on the WMT Chinese–English news translation task. We empirically test this claim with alternative evaluation protocols, contrasting the evaluation of single sentences and entire documents. In a pairwise ranking experiment, human raters assessing adequacy and fluency show a stronger preference for human over machine translation when evaluating documents as compared to isolated sentences. Our findings emphasise the need to shift towards document-level evaluation as machine translation improves to the degree that errors which are hard or impossible to spot at the sentence level become decisive in discriminating the quality of different translation outputs.
Original language: English
Title of host publication: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Place of publication: Brussels, Belgium
Publisher: Association for Computational Linguistics
Pages: 4791–4796
Number of pages: 6
Publication status: Published - Nov 2018
Event: 2018 Conference on Empirical Methods in Natural Language Processing - Square Meeting Center, Brussels, Belgium
Duration: 31 Oct 2018 – 4 Nov 2018
http://emnlp2018.org/

Conference

Conference: 2018 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2018
Country: Belgium
City: Brussels
Period: 31/10/18 – 4/11/18
Internet address: http://emnlp2018.org/

