We use Bayesian statistics to study the dictionary learning problem, in which an over-complete generative signal model has to be adapted to yield optimally sparse signal representations. Within this formulation we develop a stochastic gradient learning algorithm based on importance sampling techniques to minimise the negative marginal log-likelihood. As this likelihood is not available analytically, approximations have to be utilised. The importance sampling Monte Carlo marginalisation proposed here improves on previous methods and addresses three main issues: (1) bias of the gradient estimate; (2) multi-modality of the distribution to be approximated; and (3) computational efficiency. Experimental results show the advantages of the new method when compared to previous techniques. The gained efficiency allows the treatment of large-scale problems in a statistically sound framework, as demonstrated here by the extraction of individual piano notes from a polyphonic piano recording.
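To illustrate the general idea, the following is a minimal sketch (not the paper's actual algorithm) of a self-normalised importance-sampling estimate of the gradient of the negative marginal log-likelihood for a toy sparse linear model x = A s + noise, with a Laplace prior on the coefficients s. All dimensions, the Gaussian proposal, and the step size are illustrative assumptions; the paper's proposal distribution is more refined, in particular with respect to the multi-modality of the posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper): 8-dim signals,
# 16-atom over-complete dictionary, N importance samples.
D, K, N = 8, 16, 500
sigma = 0.1  # assumed Gaussian observation-noise std

A = rng.normal(size=(D, K)) / np.sqrt(D)        # current dictionary
s_true = np.zeros(K); s_true[[2, 7]] = [1.0, -0.5]
x = A @ s_true + sigma * rng.normal(size=D)     # synthetic observation

def log_likelihood(x, A, s, sigma):
    """log p(x | s, A) under isotropic Gaussian noise, up to a constant."""
    r = x - A @ s
    return -0.5 * r @ r / sigma**2

def log_prior(s):
    """Laplace (sparsity-inducing) prior on s, up to a constant."""
    return -np.abs(s).sum()

def log_q(S):
    """Log-density of the standard Gaussian proposal, up to a constant."""
    return -0.5 * (S * S).sum(axis=-1)

# Draw proposal samples and form self-normalised importance weights.
S = rng.normal(size=(N, K))
log_w = (np.array([log_likelihood(x, A, s, sigma) for s in S])
         + np.array([log_prior(s) for s in S])
         - log_q(S))
log_w -= log_w.max()                 # stabilise before exponentiating
w = np.exp(log_w)
w /= w.sum()                         # weights now sum to 1

# Gradient of -log p(x | A) w.r.t. A: the posterior expectation of
# -d log p(x|s,A)/dA, replaced by the importance-weighted sample average.
grad = np.zeros_like(A)
for wi, s in zip(w, S):
    r = x - A @ s
    grad -= wi * np.outer(r, s) / sigma**2

A_new = A - 0.01 * grad              # one stochastic-gradient step
```

The self-normalisation makes the estimator biased but consistent, which is one reason the bias of the gradient estimate is listed among the issues the paper addresses.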
Publication status: Published - 2005
Conference: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '05), Philadelphia, United States
Duration: 18 Mar 2005 - 23 Mar 2005