Abstract
In this paper, the Laplacian distribution, a sparse distribution, is employed in place of the usual Gaussian prior on the weights in the relevance vector machine (RVM), a method for learning sparse regression and classification models. To derive a closed-form expectation–maximization (EM) algorithm for learning the weights, a strict lower bound on the sparse prior is employed. This bound conveniently induces a strict lower bound in Gaussian form on the weight posterior, and thus leads naturally to a closed-form EM algorithm for learning the weights and the hyperparameters.
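The bounding trick the abstract describes can be illustrated with the standard Gaussian-form lower bound on the Laplace prior, exp(-λ|w|) ≥ exp(-λ(w² + η²)/(2η)), which is tight at η = |w|. Under that bound the M-step reduces to a weighted ridge solve. The sketch below is illustrative only and is not taken from the paper; the function name, default parameters, and initialisation are assumptions.

```python
import numpy as np

def em_laplace_regression(Phi, t, lam=1.0, sigma2=0.01, n_iter=50, eps=1e-8):
    """EM-style updates for linear regression with a Laplacian weight prior.

    Illustrative sketch (not the paper's exact algorithm): each E-step uses
    the Gaussian lower bound exp(-lam*|w|) >= exp(-lam*(w**2 + eta**2)/(2*eta)),
    tight at eta = |w|, so the M-step becomes a weighted ridge regression.
    """
    n, d = Phi.shape
    # Initialise from the ordinary least-squares solution (an assumption).
    w = np.linalg.lstsq(Phi, t, rcond=None)[0]
    for _ in range(n_iter):
        eta = np.abs(w) + eps  # E-step: the bound is tight at eta = |w|
        # M-step: mode of the Gaussian-form bounded posterior
        A = Phi.T @ Phi / sigma2 + lam * np.diag(1.0 / eta)
        w = np.linalg.solve(A, Phi.T @ t / sigma2)
    return w

# Small synthetic example: only the first 3 of 10 weights are relevant.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -3.0, 1.5]
t = Phi @ w_true + 0.1 * rng.normal(size=100)
w_hat = em_laplace_regression(Phi, t)
```

As η_i shrinks for irrelevant weights, their effective ridge penalty λ/η_i grows, driving those weights toward zero; this is the mechanism by which the Laplacian prior yields sparser solutions than a Gaussian prior.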
Original language | English |
---|---|
Pages (from-to) | 2351–2355 |
Number of pages | 5 |
Journal | Neurocomputing |
Volume | 69 |
Issue number | 16–18 |
DOIs | |
Publication status | Published - Oct 2006 |
Keywords
- Bayesian regression