Asynchronous Articulatory Feature Recognition Using Dynamic Bayesian Networks

M. Wester, J. Frankel, S. King

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

This paper builds on previous work in which dynamic Bayesian networks (DBNs) were proposed as a model for articulatory feature recognition. Using DBNs makes it possible to model the dependencies between features, an addition to previous approaches which was found to improve feature recognition performance. The DBN results were promising, approaching the accuracy of artificial neural networks (ANNs). However, the system was trained on canonical labels, leading to an overly strong set of constraints on feature co-occurrence. In this study, we describe an embedded training scheme which learns a set of asynchronous feature changes wherever the data supports them. Using a subset of the OGI Numbers corpus, we describe articulatory feature recognition experiments using both canonically-trained and asynchronous DBNs. Performance using DBNs is found to exceed that of ANNs trained on an identical task, giving a higher recognition accuracy. Furthermore, inter-feature dependencies result in a more structured model, giving rise to fewer feature combinations in the recognition output. In addition to an empirical evaluation of this modelling approach, we give a qualitative analysis, comparing the asynchrony found through our data-driven methods to the asynchrony which may be expected on the basis of linguistic knowledge.
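To illustrate the kind of inter-feature dependency the abstract refers to, the following minimal sketch (with hypothetical feature values and conditional probability tables, not taken from the paper) factorises a two-feature DBN slice so that a "manner" feature is conditioned on a "voicing" feature within each frame, while voicing follows its own first-order dynamics:

```python
import itertools

# Hypothetical example: two articulatory features with an inter-feature
# dependency, the structure a DBN can express but per-feature ANN
# classifiers cannot. All values and probabilities are illustrative only.

V = ["voiced", "voiceless"]   # voicing values
M = ["stop", "fricative"]     # manner values

p_v0 = {"voiced": 0.6, "voiceless": 0.4}                 # p(v_1)
p_v = {"voiced":    {"voiced": 0.9, "voiceless": 0.1},   # p(v_t | v_{t-1})
       "voiceless": {"voiced": 0.2, "voiceless": 0.8}}
# Inter-feature dependency within a frame: p(m_t | v_t)
p_m_given_v = {"voiced":    {"stop": 0.3, "fricative": 0.7},
               "voiceless": {"stop": 0.6, "fricative": 0.4}}

def joint(v_seq, m_seq):
    """p(v_1..T, m_1..T) under the factorisation above."""
    p = p_v0[v_seq[0]] * p_m_given_v[v_seq[0]][m_seq[0]]
    for t in range(1, len(v_seq)):
        p *= p_v[v_seq[t - 1]][v_seq[t]] * p_m_given_v[v_seq[t]][m_seq[t]]
    return p

# Sanity check: the joint sums to 1 over all length-3 feature sequences.
T = 3
total = sum(joint(v, m)
            for v in itertools.product(V, repeat=T)
            for m in itertools.product(M, repeat=T))
```

Because manner is conditioned on voicing, implausible feature combinations receive low joint probability rather than being scored independently, which is one way to read the paper's finding that the DBN output contains fewer feature combinations.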
Original language: English
Title of host publication: Proc. IEICI Beyond HMM Workshop
Publication status: Published - 1 Dec 2004


