Unbiased discourse segmentation evaluation

J. Niekrasz, J.D. Moore

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

In this paper, we show that the performance measures Pk and WindowDiff, commonly used for discourse, topic, and story segmentation evaluation, are biased in favor of segmentations with fewer or adjacent segment boundaries. By analytical and empirical means, we show how this results in a failure to penalize substantially defective segmentations. Our novel unbiased measure k-κ corrects this, providing a single score that accounts for chance agreement. We also propose additional statistics that may be used to characterize important properties of segmentations, such as boundary clumping. We go on to replicate a recent spoken-language topic segmentation experiment, drawing conclusions that differ substantially from previous studies concerning the effectiveness of state-of-the-art topic segmentation algorithms.
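For reference, the two window-based measures the abstract critiques can be sketched as follows. This is an illustrative implementation, not code from the paper: it assumes segmentations are represented as 0/1 boundary-indicator lists (1 meaning a boundary follows that unit), follows the standard definitions of Pk (Beeferman et al.) and WindowDiff (Pevzner & Hearst), and does not implement the paper's chance-corrected k-κ measure.

```python
def pk(ref, hyp, k):
    """Pk: fraction of windows of width k in which the reference and
    hypothesis disagree on whether the window's endpoints fall in the
    same segment. ref/hyp are 0/1 boundary-indicator lists."""
    n = len(ref)
    errors = 0
    for i in range(n - k):
        same_ref = sum(ref[i:i + k]) == 0  # no boundary inside the window
        same_hyp = sum(hyp[i:i + k]) == 0
        errors += same_ref != same_hyp
    return errors / (n - k)

def window_diff(ref, hyp, k):
    """WindowDiff: fraction of windows of width k in which the number of
    boundaries differs between reference and hypothesis."""
    n = len(ref)
    errors = sum(sum(ref[i:i + k]) != sum(hyp[i:i + k])
                 for i in range(n - k))
    return errors / (n - k)
```

Both scores are 0 for a perfect segmentation and grow with disagreement; the paper's point is that a degenerate hypothesis with few or clumped boundaries is under-penalized by these measures, motivating a chance-corrected alternative.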
Original language: English
Title of host publication: Spoken Language Technology Workshop (SLT), 2010 IEEE
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 6
ISBN (Electronic): 978-1-4244-7902-3
ISBN (Print): 978-1-4244-7904-7
Publication status: Published - Dec 2010

Keywords / Materials (for Non-textual outputs)

  • natural language processing
  • Pk
  • WindowDiff
  • boundary clumping
  • spoken-language topic segmentation
  • story segmentation evaluation
  • unbiased discourse segmentation evaluation
  • unbiased measure k-κ
  • agreement measures
  • discourse analysis
  • evaluation
  • spoken conversation
  • topic segmentation

