Learning Fast Sparsifying Transforms

Cristian Rusu, John Thompson

Research output: Contribution to journal › Article › peer-review

Abstract

Given a dataset, the task of learning a transform that allows sparse representations of the data is known as dictionary learning. In many applications, these learned dictionaries represent the data much better than fixed, well-known transforms (Fourier, Hadamard, etc.). The main downside of learned transforms is that they lack structure and are therefore not computationally efficient, unlike their classical counterparts. This poses several difficulties, especially on power-limited hardware such as mobile devices, and discourages the application of sparsity techniques in such scenarios. In this paper, we construct orthonormal and non-orthonormal dictionaries that are factorized as a product of a few basic transformations. In the orthonormal case, we solve the dictionary update problem exactly for one basic transformation, which can be viewed as a generalized Givens rotation, and then propose to construct orthonormal dictionaries as a product of these transformations, guaranteeing their fast manipulation. We also propose a method to construct fast square but non-orthonormal dictionaries that are factorized as a product of a few transforms which can be viewed as a further generalization of Givens rotations to the non-orthonormal setting. We show that the proposed transforms balance data representation performance and computational complexity very well. We also compare them with classical fast transforms and with learned general and orthonormal transforms.
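
To illustrate why such factorized dictionaries are fast to apply, the sketch below (not the authors' algorithm; the signal size, number of rotations, indices, and angles are arbitrary illustrative assumptions) applies a product of a few standard Givens rotations to a vector. Each rotation mixes only two coordinates, so the total cost grows with the number of rotations rather than quadratically with the signal dimension, as it would for a dense dictionary.

import numpy as np

def apply_givens_product(x, rotations):
    """Apply a product of Givens rotations G_g ... G_1 to the vector x.

    Each rotation is given as (i, j, theta) and mixes only coordinates i and j,
    so the cost is O(1) per rotation instead of O(n^2) for a dense multiply.
    """
    y = x.copy()
    for i, j, theta in rotations:
        c, s = np.cos(theta), np.sin(theta)
        yi, yj = y[i], y[j]
        y[i] = c * yi - s * yj
        y[j] = s * yi + c * yj
    return y

rng = np.random.default_rng(0)
n, g = 64, 12                      # assumed signal size and number of rotations
rotations = []
for _ in range(g):
    i, j = rng.choice(n, size=2, replace=False)   # pick two distinct coordinates
    rotations.append((int(i), int(j), rng.uniform(0.0, 2.0 * np.pi)))

x = rng.standard_normal(n)
y = apply_givens_product(x, rotations)
print(np.allclose(np.linalg.norm(y), np.linalg.norm(x)))  # orthonormal factors preserve the norm
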
Original language: English
Pages (from-to): 4367 - 4378
Journal: IEEE Transactions on Signal Processing
Volume: 65
Issue number: 16
Early online date: 5 Jun 2017
Publication status: Published - 15 Aug 2017
