A Probabilistic Incremental Proximal Gradient Method

Omer Deniz Akyildiz*, Emilie Chouzenoux, Victor Elvira, Joaquin Miguez

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this letter, we propose a probabilistic optimization method, named the probabilistic incremental proximal gradient (PIPG) method, by developing a probabilistic interpretation of the incremental proximal gradient algorithm. We explicitly model the update rules of the incremental proximal gradient method and develop a systematic approach to propagate the uncertainty of the solution estimate over iterations. The PIPG algorithm takes the form of Bayesian filtering updates for a state-space model constructed from the cost function. Our framework makes it possible to use well-known exact or approximate Bayesian filters, such as Kalman or extended Kalman filters, to solve large-scale regularized optimization problems.
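The abstract describes recasting incremental proximal gradient iterations as Bayesian filtering updates on a state-space model built from the cost function. The sketch below is a minimal illustration of that idea, not the authors' exact PIPG algorithm: it assumes a least-squares cost, so the incremental update for a single data point reduces to a Kalman-style measurement update, and the covariance matrix carries the uncertainty of the running solution estimate. The variable names, noise levels, and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' exact PIPG algorithm): incremental
# least-squares where each pass over one data point (a_k, y_k) is treated
# as a Kalman-style measurement update, so the covariance P tracks the
# uncertainty of the running solution estimate x.

rng = np.random.default_rng(0)
d, n = 5, 200
x_true = rng.normal(size=d)
A = rng.normal(size=(n, d))
y = A @ x_true + 0.1 * rng.normal(size=n)

x = np.zeros(d)          # solution estimate (filter mean)
P = 10.0 * np.eye(d)     # uncertainty of the estimate (filter covariance)
r = 0.1 ** 2             # assumed observation-noise variance
q = 1e-6                 # small "process noise" keeping P from collapsing

for k in range(n):
    a_k, y_k = A[k], y[k]
    P = P + q * np.eye(d)              # prediction: diffuse the estimate slightly
    s = a_k @ P @ a_k + r              # innovation variance for this data point
    gain = P @ a_k / s                 # Kalman gain
    x = x + gain * (y_k - a_k @ x)     # incremental (proximal-like) update of the mean
    P = P - np.outer(gain, a_k @ P)    # propagate the uncertainty

print("estimation error:", np.linalg.norm(x - x_true))
```

Replacing the linear measurement update with an extended Kalman filter step would accommodate nonlinear per-sample costs, which is the kind of flexibility the abstract points to when it mentions approximate Bayesian filters.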

Original language: English
Article number: 8755451
Pages (from-to): 1257-1261
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 26
Issue number: 8
Early online date: 4 Jul 2019
DOIs
Publication status: Published - 1 Aug 2019

Keywords

  • extended Kalman filtering
  • probabilistic optimization
  • proximal algorithms
  • stochastic gradient
