Abstract
The paper showcases MT-ComparEval, a tool for the qualitative evaluation of machine translation (MT). MT-ComparEval is an open-source tool designed to help MT developers by providing a graphical user interface that allows the comparison and evaluation of different MT engines/experiments and settings. The tool implements several measures that represent the current best practice in automatic evaluation. It also guides the targeted inspection of examples that show a certain behavior in terms of n-gram similarity or dissimilarity with alternative translations or the reference translation. In this paper, we provide an applied, “hands-on” perspective on the actual usage of MT-ComparEval. In a case study, we use it to compare and analyze several systems submitted to the WMT 2015 shared task.
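The abstract does not spell out how the n-gram inspection works. As a rough illustration only, the Python sketch below collects “confirmed” n-grams (hypothesis n-grams that also occur in the reference) and diffs them between two systems; the function names (`ngrams`, `confirmed_ngrams`, `winning_ngrams`) and the toy sentences are invented for this sketch and are not MT-ComparEval's actual code or API.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of order n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def confirmed_ngrams(hypothesis, reference, max_n=4):
    """Hypothesis n-grams (orders 1..max_n) that also occur in the
    reference, clipped to the reference counts."""
    hyp, ref = hypothesis.split(), reference.split()
    confirmed = Counter()
    for n in range(1, max_n + 1):
        confirmed += ngrams(hyp, n) & ngrams(ref, n)  # multiset intersection
    return confirmed

def winning_ngrams(sys_a, sys_b, reference, max_n=4):
    """Confirmed n-grams of system A that system B misses: a rough proxy
    for places where A agrees with the reference and B does not."""
    return (confirmed_ngrams(sys_a, reference, max_n)
            - confirmed_ngrams(sys_b, reference, max_n))

# Hypothetical example sentences, solely to demonstrate the diff.
reference = "the committee approved the proposal yesterday"
system_a = "the committee approved the proposal on friday"
system_b = "committee has approved proposal yesterday"
for gram, count in winning_ngrams(system_a, system_b, reference).items():
    print(count, " ".join(gram))
```

Highlighting such differences per sentence pair is one plausible way a tool could direct a developer's attention to where one system outperforms another against the reference.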
Original language | English
---|---
Pages | 76-82
Number of pages | 7
Publication status | Published - 24 May 2016
Event | LREC 2016 Workshop “Translation Evaluation – From Fragmented Tools and Data Sets to an Integrated Ecosystem”, Portorož, Slovenia, 24 May 2016
Workshop
Workshop | LREC 2016 Workshop “Translation Evaluation – From Fragmented Tools and Data Sets to an Integrated Ecosystem”
---|---
Country/Territory | Slovenia
City | Portorož
Period | 24/05/16 → 24/05/16
Internet address | http://www.lrec-conf.org/proceedings/lrec2016/workshops.html
Keywords
- Machine Translation Evaluation
- Analysis of MT Output