A Fast Importance Sampling Algorithm for Unsupervised Learning of Over-Complete Dictionaries

T. Blumensath, M. E. Davies

Research output: Contribution to conference › Paper

Abstract

We use Bayesian statistics to study the dictionary learning problem, in which an over-complete generative signal model has to be adapted for optimally sparse signal representations. With this formulation we develop a stochastic gradient learning algorithm, based on importance sampling techniques, that minimises the negative marginal log-likelihood. As this likelihood is not available analytically, approximations must be used. The importance sampling Monte Carlo marginalisation proposed here improves on previous methods and addresses three main issues: (1) bias of the gradient estimate; (2) multi-modality of the distribution to be approximated; and (3) computational efficiency. Experimental results show the advantages of the new method compared to previous techniques. The gained efficiency allows the treatment of large-scale problems in a statistically sound framework, as demonstrated here by the extraction of individual piano notes from a polyphonic piano recording.
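The core idea in the abstract — estimating the intractable gradient of the marginal log-likelihood by Monte Carlo marginalisation over the latent sparse coefficients — can be sketched generically. The toy one-atom model, the choice of the Laplacian prior as the proposal, and the function name below are illustrative assumptions for this sketch, not the paper's actual algorithm; the naive prior-as-proposal estimator shown here is exactly the kind of baseline whose bias and variance the paper's improved importance sampler targets.

```python
import math
import random

def is_grad_loglik(x, a, sigma=0.5, n=2000, seed=0):
    """Self-normalised importance-sampling estimate of d/da log p(x | a)
    for a toy one-atom model: x = a*s + noise, with a sparse Laplacian
    prior s ~ Laplace(0, 1) and Gaussian noise ~ N(0, sigma^2).

    The exact gradient is E_{p(s|x)}[ d/da log p(x | s, a) ], which is
    not available analytically; here we draw s from the prior (used as
    the proposal), so the importance weights reduce to the likelihood."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        # draw s from the Laplace(0, 1) prior: exponential magnitude, random sign
        s = rng.expovariate(1.0) * (1.0 if rng.getrandbits(1) else -1.0)
        # importance weight, proportional to the Gaussian likelihood p(x | s, a)
        w = math.exp(-((x - a * s) ** 2) / (2.0 * sigma ** 2))
        # gradient of the log-likelihood with respect to the dictionary atom a
        g = (x - a * s) * s / sigma ** 2
        num += w * g
        den += w
    # self-normalised estimate of the posterior expectation of g
    return num / den
```

A stochastic gradient update would then take a step `a += lr * is_grad_loglik(x, a)` per observation; with many atoms, `a` and `s` become a dictionary matrix and a coefficient vector, and the quality of the proposal distribution (rather than the prior used here) governs the bias and variance issues the paper addresses.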
Original language: English
Pages: v/213-v/216
DOIs
Publication status: Published - 2005
Event: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '05) - Philadelphia, United States
Duration: 18 Mar 2005 - 23 Mar 2005

Conference

Conference: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '05)
Country/Territory: United States
City: Philadelphia
Period: 18/03/05 - 23/03/05

