Imaging with Equivariant Deep Learning: From unrolled network design to fully unsupervised learning

Dongdong Chen, Mike Davies, Matthias J. Ehrhardt, Carola-Bibiane Schönlieb, Ferdia Sherry, Julián Tachella

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

From early image processing to modern computational imaging, successful models and algorithms have relied on a fundamental property of natural signals: symmetry. Here, symmetry refers to the invariance of signal sets under transformations such as translation, rotation, or scaling. Symmetry can also be incorporated into deep neural networks (DNNs) in the form of equivariance, allowing for more data-efficient learning. While there have been important advances in the design of end-to-end equivariant networks for image classification in recent years, computational imaging introduces unique challenges for equivariant network solutions, since we typically observe the image only through a noisy, ill-conditioned forward operator that itself may not be equivariant. We review the emerging field of equivariant imaging (EI) and show how it can provide improved generalization and new imaging opportunities. Along the way, we show the interplay between the acquisition physics and group actions, and the links to iterative reconstruction, blind compressed sensing, and self-supervised learning.
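
As a minimal sketch of the notions mentioned above (the notation $G$, $T_g$, $X$, $f$, $A$, and $\varepsilon$ is introduced here for illustration and is not taken from the article): a group of transformations $G$ acts on signals through operators $T_g$, and the signal set $X$ is symmetric, i.e. invariant, under $G$ if
\[
x \in X \;\Longrightarrow\; T_g x \in X \quad \text{for all } g \in G .
\]
A network $f$ is equivariant to $G$ if
\[
f(T_g x) = T_g f(x) \quad \text{for all } x \text{ and } g \in G .
\]
In computational imaging, only measurements
\[
y = A x + \varepsilon
\]
are available, where the forward operator $A$ is typically noisy and ill conditioned and need not commute with the actions $T_g$, which is what makes enforcing equivariance in the reconstruction nontrivial.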

Original language: Undefined/Unknown
Pages (from-to): 134-147
Journal: IEEE Signal Processing Magazine
Volume: 40
Issue number: 1
Early online date: 2 Jan 2023
DOIs
Publication status: E-pub ahead of print - 2 Jan 2023

Keywords / Materials (for Non-textual outputs)

  • eess.SP
  • cs.CV
