Abstract
This paper presents the results of the WMT10 and MetricsMATR10 shared tasks, which included a translation task, a system combination task, and an evaluation task. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the human rankings of these systems to measure how strongly 26 automatic metrics correlate with human judgments of translation quality. This year we also investigated increasing the number of human judgments by hiring non-expert annotators through Amazon's Mechanical Turk.
Original language | English
---|---
Title of host publication | Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR
Place of publication | Stroudsburg, PA, USA
Publisher | Association for Computational Linguistics
Pages | 17-53
Number of pages | 37
Publication status | Published - 2010