In this paper, we show that the performance measures Pk and WindowDiff, commonly used to evaluate discourse, topic, and story segmentation, are biased in favor of segmentations with fewer or adjacent segment boundaries. By analytical and empirical means, we show how this bias results in a failure to penalize substantially defective segmentations. Our novel unbiased measure k-κ corrects this, providing a single score that accounts for chance agreement. We also propose additional statistics that may be used to characterize important properties of segmentations, such as boundary clumping. We go on to replicate a recent spoken-language topic segmentation experiment, drawing conclusions that differ substantially from previous studies concerning the effectiveness of state-of-the-art topic segmentation algorithms.
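To make the evaluation setting concrete, the following is a minimal sketch of the standard WindowDiff measure (Pevzner and Hearst's metric, which the abstract argues is biased); it slides a fixed-size window over paired boundary sequences and counts windows where the reference and hypothesis disagree on the number of boundaries. Function names, the window-size heuristic, and the normalization convention here are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of WindowDiff, not the paper's own implementation.
# reference/hypothesis: lists of 0/1 boundary indicators between adjacent units.

def window_diff(reference, hypothesis, k=None):
    n = len(reference)
    assert len(hypothesis) == n, "sequences must be the same length"
    if k is None:
        # Conventional heuristic: half the mean reference segment length.
        num_segments = sum(reference) + 1
        k = max(1, round(n / (2 * num_segments)))
    # Count windows where the boundary counts disagree.
    errors = sum(
        1
        for i in range(n - k + 1)
        if sum(reference[i:i + k]) != sum(hypothesis[i:i + k])
    )
    return errors / (n - k + 1)

ref = [0, 1, 0, 0, 1, 0]
print(window_diff(ref, ref))              # identical segmentations score 0.0
print(window_diff(ref, [0] * len(ref)))   # a boundary-free hypothesis is penalized
```

Note that a degenerate hypothesis with no boundaries still scores relatively well on short windows, which is one face of the bias toward sparse segmentations that the paper analyzes.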
Title of host publication: Spoken Language Technology Workshop (SLT), 2010 IEEE
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 6
Publication status: Published - Dec 2010
- natural language processing
- WindowDiff
- boundary clumping
- spoken-language topic segmentation
- story segmentation evaluation
- unbiased discourse segmentation evaluation
- unbiased measure k-κ
- agreement measures
- discourse analysis
- spoken conversation
- topic segmentation