Dictionary learning for sparse approximations with the majorization method

M. Yaghoobi, T. Blumensath, M.E. Davies

Research output: Contribution to journal › Article › peer-review

Abstract

In order to find sparse approximations of signals, an appropriate generative model for the signal class has to be known. If the model is unknown, it can be adapted using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point is guaranteed, unless the accumulation points form a continuum. This holds for different sparsity measures. The majorization method is an optimization method that substitutes the original objective function with a surrogate function that is updated in each optimization step. This method has been used successfully in sparse approximation and statistical estimation [e.g., expectation-maximization (EM)] problems. This paper shows that the majorization method can also be used for the dictionary learning problem. The proposed method is compared with other methods on both synthetic and real data, and different constraints on the dictionary are compared. Simulations show the advantages of the proposed method over other currently available dictionary learning methods, not only in terms of average performance but also in terms of computation time.
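To make the majorization idea concrete: for the sparse approximation subproblem min_x ½‖y − Ax‖² + λ‖x‖₁, majorizing the quadratic data term with a constant c > ‖A‖₂² decouples the coordinates, and minimizing the surrogate reduces to a soft-thresholding update. The sketch below is an illustration of that standard surrogate construction, not the paper's dictionary learning implementation; the function names, λ value, and iteration count are illustrative choices.

```python
import numpy as np

def soft_threshold(z, t):
    # Entrywise shrinkage toward zero: the closed-form minimizer
    # of t*|x| + 0.5*(x - z)^2 in each coordinate.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def mm_sparse_approx(A, y, lam, n_iter=500):
    """Majorization-minimization for min_x 0.5*||y - A x||^2 + lam*||x||_1.

    At iterate x_k the data term is majorized by adding
    (c/2)*||x - x_k||^2 - 0.5*||A (x - x_k)||^2 with c > ||A||_2^2,
    so the surrogate separates over coordinates and its minimizer
    is a soft-thresholded gradient step (illustrative sketch).
    """
    c = 1.01 * np.linalg.norm(A, 2) ** 2  # majorization constant c > ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)          # gradient of the data term at x_k
        x = soft_threshold(x - grad / c, lam / c)  # minimize the surrogate
    return x
```

Because each update exactly minimizes a surrogate that touches the objective at the current iterate, the objective value is non-increasing across iterations, which is the property underlying the fixed-point convergence result mentioned in the abstract.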
Original language: English
Pages (from-to): 2178-2191
Number of pages: 14
Journal: IEEE Transactions on Signal Processing
Volume: 57
Issue number: 6
DOIs
Publication status: Published - Jun 2009
