Using MT-ComparEval

Roman Sudarikov, Martin Popel, Ondřej Bojar, Aljoscha Burchardt, Ondřej Klejch

Research output: Contribution to conference › Paper › peer-review

Abstract

The paper showcases MT-ComparEval, a tool for the qualitative evaluation of machine translation (MT). MT-ComparEval is an open-source tool designed to help MT developers by providing a graphical user interface that allows the comparison and evaluation of different MT engines/experiments and settings. The tool implements several measures that represent the current best practice in automatic evaluation. It also provides guidance in the targeted inspection of examples that show a certain behavior in terms of n-gram similarity/dissimilarity with alternative translations or the reference translation. In this paper, we provide an applied, “hands-on” perspective on the actual usage of MT-ComparEval. In a case study, we use it to compare and analyze several systems submitted to the WMT 2015 shared task.
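For illustration, the core of the n-gram comparison described above can be sketched in a few lines. The following is a minimal Python sketch of the underlying idea, not MT-ComparEval's actual implementation; the function names (ngrams, compare_ngrams) and the labels "improving"/"worsening" are used here only for exposition, and the real tool additionally aggregates and highlights such n-grams in context.

    from collections import Counter

    def ngrams(tokens, n):
        # Multiset of all n-grams of length n in a token sequence.
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def compare_ngrams(sys_a, sys_b, reference, n=2):
        # Sketch (hypothetical helper): classify n-grams of two system
        # outputs by their overlap with the reference translation.
        a, b, ref = (ngrams(s.split(), n) for s in (sys_a, sys_b, reference))
        # "Improving" n-grams: matched against the reference by system A only.
        improving = (a & ref) - b
        # "Worsening" n-grams: matched against the reference by system B only.
        worsening = (b & ref) - a
        return improving, worsening

    imp, wor = compare_ngrams("the cat sat on the mat",
                              "the cat is on a mat",
                              "the cat sat on the mat")
    print(sorted(imp))  # bigrams only system A shares with the reference
    print(sorted(wor))  # bigrams only system B shares with the reference

Inspecting these sets sentence by sentence, rather than a single corpus-level score, is what lets a developer see where one system improves on another.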
Original language: English
Pages: 76-82
Number of pages: 7
Publication status: Published - 24 May 2016
Event: LREC 2016 Workshop “Translation Evaluation – From Fragmented Tools and Data Sets to an Integrated Ecosystem” - Portorož, Slovenia
Duration: 24 May 2016 – 24 May 2016
http://www.lrec-conf.org/proceedings/lrec2016/workshops.html

Workshop

Workshop: LREC 2016 Workshop “Translation Evaluation – From Fragmented Tools and Data Sets to an Integrated Ecosystem”
Country/Territory: Slovenia
City: Portorož
Period: 24/05/16 – 24/05/16
Internet address: http://www.lrec-conf.org/proceedings/lrec2016/workshops.html

Keywords

  • Machine Translation Evaluation
  • Analysis of MT Output
