Regularized Subspace Gaussian Mixture Models for Speech Recognition

L. Lu, A. Ghoshal, S. Renals

Research output: Contribution to journal › Article › peer-review

Abstract

Subspace Gaussian mixture models (SGMMs) provide a compact representation of the Gaussian parameters in an acoustic model, but may still suffer from over-fitting with insufficient training data. In this letter, the SGMM state parameters are estimated using a penalized maximum-likelihood objective, based on $\ell_1$ and $\ell_2$ regularization, as well as their combination, referred to as the elastic net, for robust model estimation. Experiments on the 5000-word Wall Street Journal transcription task show word error rate reduction and improved model robustness with regularization.
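The penalized objective described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a generic parameter vector `v` (standing in for an SGMM state vector) and hypothetical regularization weights `lam1` and `lam2`; setting `lam1 = 0` recovers pure $\ell_2$ (ridge) regularization, `lam2 = 0` recovers pure $\ell_1$ (lasso), and nonzero values of both give the elastic net.

```python
import numpy as np

def elastic_net_penalty(v, lam1, lam2):
    """Elastic-net penalty: a weighted sum of the l1 and l2 terms.
    `v` stands in for an SGMM state vector; `lam1`/`lam2` are
    illustrative regularization weights, not values from the paper."""
    return lam1 * np.sum(np.abs(v)) + lam2 * np.sum(v ** 2)

def penalized_objective(neg_log_likelihood, v, lam1, lam2):
    """Penalized maximum-likelihood objective to be minimized:
    the data term (negative log-likelihood) plus the regularizer."""
    return neg_log_likelihood + elastic_net_penalty(v, lam1, lam2)
```

During estimation, the regularizer discourages large (and, via the $\ell_1$ term, dense) state vectors, which is what gives the robustness to limited training data.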
Original language: English
Pages (from-to): 419-422
Number of pages: 4
Journal: IEEE Signal Processing Letters
Issue number: 7
Publication status: Published - 2011

