Comparing Intrinsic and Extrinsic Evaluation of MT Output in a Dialogue System

Anne H. Schneider*, Ielka van der Sluis, Saturnino Luz

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

Abstract

We present an exploratory study assessing machine translation output for use in a dialogue system, using both an intrinsic and an extrinsic evaluation method. For the intrinsic evaluation, we developed an annotation scheme to determine the quality of the translated utterances in isolation. For the extrinsic evaluation, we employed the Wizard of Oz technique to assess the quality of the translations in the context of a dialogue application. The results of the two evaluations differ, and we discuss possible reasons for this outcome.

Original language: English
Pages: 329-336
Number of pages: 8
Publication status: Published - 2 Dec 2010
Event: 7th International Workshop on Spoken Language Translation, IWSLT 2010 - Paris, France
Duration: 2 Dec 2010 - 3 Dec 2010

Conference

Conference: 7th International Workshop on Spoken Language Translation, IWSLT 2010
Country/Territory: France
City: Paris
Period: 2/12/10 - 3/12/10
