Edinburgh Research Explorer

Regularized Subspace Gaussian Mixture Models for Speech Recognition

Research output: Contribution to journal › Article

Original language: English
Pages (from-to): 419-422
Number of pages: 4
Journal: IEEE Signal Processing Letters
Volume: 18
Issue number: 7
Publication status: Published - 2011

Abstract

Subspace Gaussian mixture models (SGMMs) provide a compact representation of the Gaussian parameters in an acoustic model, but may still suffer from over-fitting with insufficient training data. In this letter, the SGMM state parameters are estimated using a penalized maximum-likelihood objective, based on $\ell_1$ and $\ell_2$ regularization, as well as their combination, referred to as the elastic net, for robust model estimation. Experiments on the 5000-word Wall Street Journal transcription task show word error rate reduction and improved model robustness with regularization.
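As an illustrative sketch of the approach described in the abstract (the notation here, including the state vector $\mathbf{v}_j$, the auxiliary function $\mathcal{Q}$, and the penalty weights $\lambda_1$ and $\lambda_2$, is assumed for exposition and not taken from the letter itself), an elastic-net penalized objective for an SGMM state vector can be written as

$$ \mathcal{Q}_{\mathrm{pen}}(\mathbf{v}_j) \;=\; \mathcal{Q}(\mathbf{v}_j) \;-\; \lambda_1 \lVert \mathbf{v}_j \rVert_1 \;-\; \lambda_2 \lVert \mathbf{v}_j \rVert_2^2, $$

where $\mathcal{Q}$ is the maximum-likelihood auxiliary function for the state parameters, the $\ell_1$ term encourages sparsity, and the $\ell_2$ term shrinks the estimate towards zero; setting $\lambda_2 = 0$ recovers pure $\ell_1$ regularization and $\lambda_1 = 0$ pure $\ell_2$ regularization.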
