Fundamental Performance Limits for Ideal Decoders in High-Dimensional Linear Inverse Problems

Anthony Bourrier, Michael Davies, Tomer Peleg, Patrick Pérez, Rémi Gribonval

Research output: Contribution to journal › Article › peer-review


The primary challenge in linear inverse problems is to design stable and robust decoders to reconstruct high-dimensional vectors from a low-dimensional observation through a linear operator. Sparsity, low-rank, and related assumptions are typically exploited to design decoders, whose performance is then bounded based on some measure of deviation from the idealized model, typically using a norm. This paper focuses on characterizing the fundamental performance limits that can be expected from an ideal decoder given a general model, i.e., a general subset of simple vectors of interest. First, we extend the so-called notion of instance optimality of a decoder to settings where one only wishes to reconstruct some part of the original high-dimensional vector from a low-dimensional observation. This covers practical settings, such as medical imaging of a region of interest, or audio source separation, when one is only interested in estimating the contribution of a specific instrument to a musical recording. We define instance optimality relative to a model well beyond the traditional framework of sparse recovery, and characterize the existence of an instance optimal decoder in terms of joint properties of the model and the considered linear operator. Both noiseless and noise-robust settings are considered. We show, somewhat surprisingly, that the existence of noise-aware instance optimal decoders for all noise levels implies the existence of a noise-blind decoder. A consequence of our results is that for models rich enough to contain an orthonormal basis, the existence of an ℓ2/ℓ2 instance optimal decoder is only possible when the linear operator is not substantially dimension-reducing. This covers well-known cases (sparse vectors, low-rank matrices) as well as a number of seemingly new situations (structured sparsity and sparse inverse covariance matrices, for instance).
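For context, the classical instance optimality property discussed above can be stated as follows (a standard textbook formulation; the symbols M, Δ, Σ, the constant C, and the norms are generic placeholders rather than the paper's exact notation):

```latex
\| x - \Delta(Mx) \| \le C \, d(x, \Sigma)
\quad \text{for all } x \in \mathbb{R}^n,
\qquad \text{where } d(x, \Sigma) := \inf_{z \in \Sigma} \| x - z \|.
```

Here M is the linear observation operator, Δ the decoder, and Σ the model set; the inequality bounds the reconstruction error of every vector by its distance to the model, so vectors exactly in Σ are recovered exactly.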
We exhibit an operator-dependent norm which, under a model-specific generalization of the restricted isometry property, always yields a feasible instance optimality property. This norm can be upper bounded by an atomic norm relative to the considered model.
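The model-specific generalization of the restricted isometry property mentioned above is commonly stated on the secant set Σ − Σ of the model (again in generic notation, not necessarily that of the paper):

```latex
(1 - \delta) \, \| z \|^2 \;\le\; \| M z \|^2 \;\le\; (1 + \delta) \, \| z \|^2
\quad \text{for all } z \in \Sigma - \Sigma,
```

i.e., the operator M approximately preserves the norm of every difference of two model vectors, which recovers the usual RIP when Σ is the set of k-sparse vectors (since differences of k-sparse vectors are 2k-sparse).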
Original language: English
Pages (from-to): 7928-7946
Journal: IEEE Transactions on Information Theory
Issue number: 12
Publication status: Published - 22 Oct 2014


  • linear inverse problems
  • instance optimality
  • null space property
  • restricted isometry property


