Continuous Speech Recognition Using Articulatory Data

A. Wrench, K. Richmond

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper we show that there is measurable information in the articulatory system which can help to disambiguate the acoustic signal. We directly measure the movement of the lips, tongue, jaw, velum and larynx, and parameterise this articulatory feature space using principal components analysis. The parameterisation is developed and evaluated on a speaker-dependent phone recognition task using a specially recorded TIMIT corpus of 460 sentences. The results show that the articulatory data contain useful supplementary information, which yields a small but significant improvement in phone recognition accuracy of 2%. However, preliminary attempts to estimate the articulatory data from the acoustic signal and use this to supplement the acoustic input have not yielded any significant improvement in phone accuracy.
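The parameterisation step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the data here are synthetic random trajectories standing in for the recorded articulator channels, the channel count (14) and number of retained components (8) are illustrative assumptions, and the PCA is done directly via an eigendecomposition of the covariance matrix.

```python
import numpy as np

# Hypothetical articulatory feature matrix: rows are time frames,
# columns stand in for raw articulator coordinates
# (lips, tongue, jaw, velum, larynx). Purely synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 14))  # 200 frames, 14 channels (assumed)

# Principal components analysis: centre the data, eigendecompose
# the covariance matrix, and project onto the leading components.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]       # components sorted by variance
components = eigvecs[:, order[:8]]      # keep 8 components (assumed)
Z = Xc @ components                     # parameterised articulatory features

print(Z.shape)  # (200, 8)
```

In a recogniser, features like `Z` would be appended to the acoustic observation vectors frame by frame; the paper's negative result concerns replacing the measured `X` with values estimated from audio.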
Original language: English
Title of host publication: Sixth International Conference on Spoken Language Processing (ICSLP 2000)
Publisher: International Speech Communication Association
Pages: 145-148
Number of pages: 4
Publication status: Published - 2000
