As long-form spoken documents become more ubiquitous in everyday life, so does the need for automatic discourse segmentation in spoken language processing tasks. Although previous work has focused on broad topic segmentation, detection of finer-grained discourse units, such as paragraphs, is highly desirable for presenting and analyzing spoken content. To better understand how different aspects of speech cue these subtle discourse transitions, we investigate automatic paragraph segmentation of TED talks. We build lexical and prosodic paragraph segmenters using Support Vector Machines, AdaBoost, and Long Short-Term Memory (LSTM) recurrent neural networks. In general, we find that induced cue words and supra-sentential prosodic features outperform features based on topical coherence, syntactic form, and complexity. However, our best performance is achieved by combining a wide range of individually weak lexical and prosodic features, with the sequence-modelling LSTM generally outperforming the other classifiers by a large margin. Moreover, we find that models that allow lower-level interactions between different feature types produce better results than treating lexical and prosodic contributions as separate, independent information sources.
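The "lower-level interactions" finding corresponds to early fusion: concatenating lexical and prosodic features into one vector per sentence before classification, rather than combining two separate classifiers' outputs. A minimal sketch of such per-sentence feature fusion is below; the cue-word list and prosodic measures are illustrative assumptions, not the paper's actual feature set.

```python
# Hedged sketch of early fusion for sentence-level paragraph-boundary
# labelling. CUE_WORDS and the prosodic measures are hypothetical
# placeholders, not the features induced in the paper.

CUE_WORDS = {"so", "now", "okay", "but"}  # assumed induced cue words

def lexical_features(sentence_tokens):
    """Binary cue-word indicators for the sentence-initial token."""
    first = sentence_tokens[0].lower() if sentence_tokens else ""
    return [1.0 if first == w else 0.0 for w in sorted(CUE_WORDS)]

def prosodic_features(pause_before_s, mean_f0_hz, prev_mean_f0_hz):
    """Supra-sentential prosody: preceding pause length and pitch reset
    relative to the previous sentence."""
    pitch_reset = mean_f0_hz - prev_mean_f0_hz
    return [pause_before_s, pitch_reset]

def fused_features(sentence_tokens, pause_before_s, mean_f0_hz,
                   prev_mean_f0_hz):
    """Early fusion: a single vector, so a downstream model (SVM,
    AdaBoost, or LSTM) can learn interactions between lexical and
    prosodic cues instead of treating them as independent sources."""
    return lexical_features(sentence_tokens) + prosodic_features(
        pause_before_s, mean_f0_hz, prev_mean_f0_hz)
```

In a late-fusion alternative, separate lexical and prosodic classifiers would each emit a boundary score and the scores would be combined afterwards; the abstract reports that the fused representation works better.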
Title of host publication: Interspeech 2016
Place of Publication: San Francisco, United States
Number of pages: 5
Publication status: Published - 12 Sep 2016
Publisher: International Speech Communication Association
Event: Interspeech 2016, San Francisco, United States, 8 Sep 2016 – 12 Sep 2016