Tools and Guidelines for Principled Machine Translation Development

Nora Aranberri, Eleftherios Avramidis, Aljoscha Burchardt, Ondřej Klejch, Martin Popel, Maja Popović

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This work addresses the need to aid Machine Translation (MT) development cycles with a complete workflow of MT evaluation methods. Our aim is to assess, compare and improve MT system variants. We report on novel tools and practices covering a range of measures, developed to support a principled and informed approach to MT development. Our toolkit for automatic evaluation showcases quick and detailed comparison of MT system variants through automatic metrics and n-gram feedback, along with manual evaluation via edit-distance, error annotation and task-based feedback.
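The toolkit itself is not reproduced here; purely as a hedged illustration of the kind of comparison the abstract describes, the sketch below scores two hypothetical MT system variants against a reference set with corpus BLEU (using the sacrebleu library) and a simple word-level edit distance. The variant names, example sentences and choice of library are assumptions for illustration, not material from the paper.

    # Minimal sketch (not the paper's toolkit): compare two MT system variants
    # against references with corpus BLEU (sacrebleu) and word-level edit distance.
    # All inputs below are illustrative, not data from the paper.
    import sacrebleu

    def word_edit_distance(hyp: str, ref: str) -> int:
        """Word-level Levenshtein distance between a hypothesis and a reference."""
        h, r = hyp.split(), ref.split()
        prev = list(range(len(r) + 1))
        for i, h_tok in enumerate(h, start=1):
            curr = [i] + [0] * len(r)
            for j, r_tok in enumerate(r, start=1):
                cost = 0 if h_tok == r_tok else 1
                curr[j] = min(prev[j] + 1,         # deletion
                              curr[j - 1] + 1,     # insertion
                              prev[j - 1] + cost)  # substitution or match
            prev = curr
        return prev[len(r)]

    def compare_systems(systems: dict, refs: list) -> None:
        """Print corpus BLEU and mean edit distance for each system variant."""
        for name, hyps in systems.items():
            bleu = sacrebleu.corpus_bleu(hyps, [refs])
            mean_ed = sum(word_edit_distance(h, r)
                          for h, r in zip(hyps, refs)) / len(refs)
            print(f"{name}: BLEU {bleu.score:.2f}, mean edit distance {mean_ed:.2f}")

    if __name__ == "__main__":
        refs = ["the committee approved the proposal",
                "the system produces fluent output"]
        compare_systems({
            "variant A": ["the committee approved a proposal",
                          "the system produces fluent outputs"],
            "variant B": ["committee has approved the proposals",
                          "system output is fluent"],
        }, refs)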
Original language: English
Title of host publication: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
Place of Publication: Portoroz, Slovenia
Publisher: European Language Resources Association (ELRA)
Pages: 1877-1882
Number of pages: 6
ISBN (Print): 978-2-9517408-9-1
Publication status: Published - 31 May 2016
Event: 10th International Conference on Language Resources and Evaluation, LREC 2016 - Portoroz, Slovenia
Duration: 23 May 2016 - 28 May 2016

Conference

Conference: 10th International Conference on Language Resources and Evaluation, LREC 2016
Country: Slovenia
City: Portoroz
Period: 23/05/16 - 28/05/16

Keywords

  • evaluation methodologies
  • machine translation
  • tools, systems, applications
