Abstract / Description of output
Spoken documents, such as podcasts or lectures, are a growing presence in everyday life. Being able to automatically identify their discourse structure is an important step towards understanding what a spoken document is about. Moreover, finer-grained units, such as paragraphs, are highly desirable for presenting and analyzing spoken content. However, little work has been done on discourse-based speech segmentation below the level of broad topics. To examine how discourse transitions are cued in speech, we investigate automatic paragraph segmentation of TED talks using lexical and prosodic features. Experiments using Support Vector Machines, AdaBoost, and neural networks show that models using supra-sentential prosodic features and induced cue words perform better than those based on the type of lexical cohesion measures often used in broad topic segmentation. Moreover, combining a wide range of individually weak lexical and prosodic predictors improves performance, and modelling contextual information using recurrent neural networks outperforms other approaches by a large margin. Our best results come from late fusion methods that integrate representations generated by separate lexical and prosodic models while allowing interactions between these feature streams, rather than treating them as independent information sources. Application to ASR output shows that adding prosodic features, particularly via late fusion, can significantly mitigate the performance losses caused by transcription errors.
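The late-fusion idea described above can be sketched as follows. This is an illustrative toy sketch only, not the paper's actual architecture: the feature dimensions, layer sizes, and random weights are assumptions. Two separate "models" produce hidden representations for the lexical and prosodic streams, and a joint layer over their concatenation lets the streams interact before the final paragraph-boundary decision.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical per-sentence feature vectors (dimensions are illustrative):
# lexical features might encode cue words / cohesion; prosodic features
# might encode pause, pitch, and energy statistics.
lexical = rng.normal(size=(5, 16))   # 5 sentences x 16 lexical features
prosodic = rng.normal(size=(5, 8))   # 5 sentences x 8 prosodic features

# Separate stream models: one hidden layer per feature stream.
W_lex = rng.normal(size=(16, 4))
W_pro = rng.normal(size=(8, 4))
h_lex = np.tanh(lexical @ W_lex)     # lexical representation, (5, 4)
h_pro = np.tanh(prosodic @ W_pro)    # prosodic representation, (5, 4)

# Late fusion: concatenate the stream representations and apply a joint
# decision layer, so the classifier can model interactions between the
# streams instead of treating them as independent evidence.
W_fuse = rng.normal(size=(8, 1))
p_boundary = sigmoid(np.concatenate([h_lex, h_pro], axis=1) @ W_fuse)
# p_boundary: one paragraph-boundary probability per sentence, shape (5, 1)
```

In a trained system the weights would of course be learned jointly, and the stream encoders would be stronger models (e.g. recurrent networks over sentence context, as in the abstract); the sketch only shows where the fusion happens.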
Original language | English |
---|---|
Pages (from-to) | 44-57 |
Journal | Speech Communication |
Volume | 121 |
Early online date | 11 May 2020 |
Publication status | Published - Aug 2020 |
Keywords
- discourse structure
- paragraph segmentation
- prosody
- spoken language understanding
- coherence
- speech processing
Fingerprint
Dive into the research topics of 'Integrating lexical and prosodic features for automatic paragraph segmentation'. Together they form a unique fingerprint.
Profiles
- Catherine Lai
- School of Philosophy, Psychology and Language Sciences - Lecturer in Speech and Language Processing
- Institute of Language, Cognition and Computation
- Centre for Speech Technology Research