Dictionary Subselection Using an Overcomplete Joint Sparsity Model

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

Many natural signals admit a sparse representation when a suitable model is provided. Here, a linear generative model is considered; many sparsity-based signal processing techniques rely on such a simplified model. As this model is often unknown for many classes of signals, it has to be selected using domain knowledge or some exemplar signals. This paper presents a new exemplar-based approach for selecting the linear model (called the dictionary) in such sparse inverse problems. The problem of dictionary selection, which in this setting has also been called dictionary learning, is first reformulated as a joint sparsity model. The joint sparsity model here differs from the standard joint sparsity model in that it allows an overcomplete representation of each signal within the range of the selected subspaces. The new dictionary selection paradigm is examined with synthetic and realistic simulations.
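
To make the joint-sparsity idea concrete, below is a minimal sketch (not the authors' algorithm) of subselecting atoms from a larger candidate dictionary using exemplar signals, in the spirit of a greedy simultaneous-OMP selection of a common support; the paper's overcomplete variant additionally lets each exemplar be sparse within the selected subset rather than using all selected atoms. The function name select_atoms_joint_sparse and its parameters are illustrative assumptions, not part of the paper.

```python
import numpy as np

def select_atoms_joint_sparse(D, Y, k):
    """Illustrative sketch: greedily pick k atoms of a candidate dictionary D
    that jointly represent the exemplar signals Y (one column per exemplar).

    D : (n, m) candidate dictionary with unit-norm columns
    Y : (n, L) exemplar signals
    k : number of atoms to keep (size of the sub-dictionary)
    """
    selected = []
    R = Y.copy()  # residuals, one column per exemplar
    for _ in range(k):
        # score each atom by its total correlation with all current residuals
        scores = np.linalg.norm(D.T @ R, axis=1)
        scores[selected] = -np.inf  # do not re-pick already selected atoms
        j = int(np.argmax(scores))
        selected.append(j)
        # re-fit all exemplars on the selected atoms and update the residuals
        Ds = D[:, selected]
        coeffs, *_ = np.linalg.lstsq(Ds, Y, rcond=None)
        R = Y - Ds @ coeffs
    return selected

# toy usage: recover an 8-atom sub-dictionary from 64 candidate atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
true_atoms = rng.choice(64, 8, replace=False)
Y = D[:, true_atoms] @ rng.standard_normal((8, 100))
print(sorted(select_atoms_joint_sparse(D, Y, 8)), sorted(true_atoms))
```

This sketch corresponds to the standard joint sparsity model (a single shared support across exemplars); the paper's contribution is the overcomplete relaxation of that model for dictionary subselection.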
Original language: English
Pages (from-to): 4547-4556
Number of pages: 10
Journal: IEEE Transactions on Signal Processing
Volume: 62
Issue number: 17
Early online date: 10 Jul 2014
DOIs
Publication status: Published - 1 Sept 2014

Keywords / Materials (for Non-textual outputs)

  • Sparse approximation
  • Compressed sensing (CS)
  • Dictionary learning

