Findings of the 2010 Joint Workshop on Statistical Machine Translation and Metrics for Machine Translation

Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, Omar F. Zaidan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

This paper presents the results of the WMT10 and MetricsMATR10 shared tasks, which included a translation task, a system combination task, and an evaluation task. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics. This year we also investigated increasing the number of human judgments by hiring non-expert annotators through Amazon's Mechanical Turk.
Original language: English
Title of host publication: Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR
Place of Publication: Stroudsburg, PA, USA
Publisher: Association for Computational Linguistics
Pages: 17-53
Number of pages: 37
Publication status: Published - 2010