A variational method for learning sparse Bayesian regression

Mingjun Zhong

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, in place of the usual Gaussian prior, the Laplacian distribution, a sparsity-inducing distribution, is employed as the weight prior in the relevance vector machine (RVM), a method for learning sparse regression and classification models. To derive a closed-form expectation–maximization (EM) algorithm for learning the weights, a strict lower bound on the Laplacian prior is employed. This bound conveniently yields a lower bound in Gaussian form on the weight posterior, from which a closed-form EM algorithm for learning the weights and the hyperparameters follows naturally.
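The Gaussian lower bound the abstract refers to can be illustrated with the standard quadratic bound on the Laplacian's exponent, |w| ≤ w²/(2η) + η/2 for any η > 0, which is tight at η = |w|. Below is a minimal sketch of the resulting EM iteration for a linear model with squared-error likelihood, assuming a fixed noise variance `sigma2` and a fixed Laplacian scale `lam`; the paper additionally learns the hyperparameters, which this hypothetical sketch does not.

```python
import numpy as np

def laplace_em_regression(Phi, y, lam=1.0, sigma2=0.01, n_iter=50, eps=1e-8):
    """EM for linear regression with a Laplacian weight prior, using the
    variational bound |w| <= w**2 / (2 * eta) + eta / 2 (eta > 0), which
    turns the prior into a Gaussian lower bound.

    Hypothetical sketch, not the paper's exact update equations:
    noise variance and Laplacian scale are held fixed here.
    """
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]  # initialise at least squares
    for _ in range(n_iter):
        # E-step: the bound is tight at eta_i = |w_i| (eps avoids division by 0)
        eta = np.abs(w) + eps
        # M-step: with the Gaussian bound in place, the posterior mode solves
        # a ridge-type system with per-weight penalties lam / eta_i
        A = Phi.T @ Phi / sigma2 + lam * np.diag(1.0 / eta)
        w = np.linalg.solve(A, Phi.T @ y / sigma2)
    return w
```

Because the penalty λ/η_i grows without bound as a weight approaches zero, irrelevant weights are driven to (numerically) zero, which is the sparsity the Laplacian prior is chosen for.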
Original language: English
Pages (from-to): 2351–2355
Number of pages: 5
Journal: Neurocomputing
Volume: 69
Issue number: 16–18
Publication status: Published - Oct 2006

Keywords

  • Bayesian regression
