Evaluating language understanding accuracy with respect to objective outcomes in a dialogue system

Myroslava O. Dzikovska, Peter Bell, Amy Isard, Johanna D. Moore

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


It is not always clear how the differences in intrinsic evaluation metrics for a parser or classifier will affect the performance of the system that uses it. We investigate the relationship between the intrinsic evaluation scores of an interpretation component in a tutorial dialogue system and the learning outcomes in an experiment with human users. Following the PARADISE methodology, we use multiple linear regression to build predictive models of learning gain, an important objective outcome metric in tutorial dialogue. We show that standard intrinsic metrics such as F-score alone do not predict the outcomes well. However, we can build predictive performance functions that account for up to 50% of the variance in learning gain by combining features based on standard evaluation scores and on the confusion matrix entries. We argue that building such predictive models can help us better evaluate performance of NLP components that cannot be distinguished based on F-score alone, and illustrate our approach by comparing the current interpretation component in the system to a new classifier trained on the evaluation data.
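The modelling approach the abstract describes — an ordinary least-squares regression from intrinsic evaluation features to learning gain, assessed by the fraction of variance explained — can be sketched as follows. This is a minimal illustration on synthetic data; the specific features and values are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical per-user features: interpreter F-score plus two
# confusion-matrix-derived error rates (illustrative names only).
X = np.array([
    [0.72, 0.10, 0.05],
    [0.80, 0.06, 0.04],
    [0.65, 0.15, 0.08],
    [0.90, 0.03, 0.02],
    [0.78, 0.08, 0.06],
])
y = np.array([0.31, 0.42, 0.22, 0.55, 0.38])  # synthetic learning gains

# Multiple linear regression: prepend an intercept column and solve
# X_aug @ w ~ y by least squares.
X_aug = np.hstack([np.ones((X.shape[0], 1)), X])
w, _, _, _ = np.linalg.lstsq(X_aug, y, rcond=None)

# R^2: proportion of variance in learning gain explained by the model
# (the paper reports performance functions accounting for up to 50%).
y_hat = X_aug @ w
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```

On real data one would fit such a model per experimental condition and compare components by how much outcome variance their feature sets explain, rather than by F-score alone.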
Original language: English
Title of host publication: Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics
Place of publication: Avignon, France
Publisher: Association for Computational Linguistics
Number of pages: 11
ISBN (Print): 978-1-937284-19-0
Publication status: Published - 1 Apr 2012


