Abstract
This work addresses the need to aid Machine Translation (MT) development cycles with a complete workflow of MT evaluation methods. Our aim is to assess, compare, and improve MT system variants. We report on novel tools and practices that support various evaluation measures, developed to enable a principled and informed approach to MT development. Our toolkit for automatic evaluation offers quick and detailed comparison of MT system variants through automatic metrics and n-gram feedback, along with manual evaluation via edit distance, error annotation, and task-based feedback.
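The abstract names two ingredients of such a workflow: n-gram-based automatic metrics for comparing system variants, and edit distance as a proxy for post-editing effort. The following is a minimal, self-contained Python sketch of both, not the authors' toolkit: the simplified smoothed sentence-level BLEU, the word-level Levenshtein function, and the example sentences and variant names are all illustrative assumptions.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All n-grams of a token list, as a Counter."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hyp, ref, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped,
    add-one-smoothed n-gram precisions times a brevity penalty."""
    hyp, ref = hyp.split(), ref.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((h & r).values())              # clipped n-gram matches
        total = max(sum(h.values()), 1)
        log_prec += math.log((overlap + 1) / (total + 1))
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))  # brevity penalty
    return bp * math.exp(log_prec / max_n)

def edit_distance(hyp, ref):
    """Word-level Levenshtein distance, the basis of TER/HTER-style
    estimates of post-editing effort."""
    hyp, ref = hyp.split(), ref.split()
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        cur = [i]
        for j, r in enumerate(ref, 1):
            cur.append(min(prev[j] + 1,                # delete hyp word
                           cur[j - 1] + 1,             # insert ref word
                           prev[j - 1] + (h != r)))    # substitute / match
        prev = cur
    return prev[-1]

# Compare two hypothetical system variants against one reference.
ref = "the commission approved the new proposal"
variants = {"baseline": "the commission approve new proposal",
            "tuned": "the commission approved the new proposals"}
for name, hyp in variants.items():
    print(f"{name}: BLEU={sentence_bleu(hyp, ref):.3f}, "
          f"edit distance={edit_distance(hyp, ref)}")
```

In a comparison workflow like the one described, per-sentence deltas in such scores (rather than a single corpus number) are what surface where one variant improves over another.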
Original language | English
---|---
Title of host publication | Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
Place of Publication | Portoroz, Slovenia
Publisher | European Language Resources Association (ELRA)
Pages | 1877-1882
Number of pages | 6
ISBN (Print) | 978-2-9517408-9-1
Publication status | Published - 31 May 2016
Event | 10th International Conference on Language Resources and Evaluation, LREC 2016, Portoroz, Slovenia, 23 May 2016 → 28 May 2016
Conference
Conference | 10th International Conference on Language Resources and Evaluation, LREC 2016
---|---
Country/Territory | Slovenia
City | Portoroz
Period | 23/05/16 → 28/05/16
Keywords
- evaluation methodologies
- machine translation
- tools, systems, applications