Edinburgh Research Explorer

Variational Learning in Graphical Models and Neural Networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Original language: English
Title of host publication: ICANN 98
Subtitle of host publication: Proceedings of the 8th International Conference on Artificial Neural Networks, Skövde, Sweden, 2–4 September 1998
Editors: Lars Niklasson, Mikael Boden, Tom Ziemke
Publisher: Springer London
Pages: 13-22
Number of pages: 10
ISBN (Electronic): 978-1-4471-1599-1
ISBN (Print): 978-3-540-76263-8
DOIs
Publication status: Published - 1998

Publication series

Name: Perspectives in Neural Computing
Publisher: Springer London
ISSN (Print): 1431-6854

Abstract

Variational methods are becoming increasingly popular for inference and learning in probabilistic models. By providing bounds on quantities of interest, they offer a more controlled approximation framework than techniques such as Laplace’s method, while avoiding the mixing and convergence issues of Markov chain Monte Carlo methods, or the possible computational intractability of exact algorithms. In this paper we review the underlying framework of variational methods and discuss example applications involving sigmoid belief networks, Boltzmann machines and feed-forward neural networks.
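The bound mentioned in the abstract is, in its simplest form, Jensen's inequality applied to the log evidence: for any distribution q over the latent variables, E_q[log p(x, z)] + H(q) ≤ log p(x). The following minimal sketch (not taken from the paper; model, parameter values, and function names are illustrative) verifies this numerically for a toy model with one binary latent variable.

```python
import math

# Toy model (illustrative, not from the paper):
#   p(z=1) = pi,  p(x|z) = Normal(mu_z, 1)  for z in {0, 1}.

def log_normal(x, mu):
    # Log density of Normal(mu, 1) at x.
    return -0.5 * math.log(2 * math.pi) - 0.5 * (x - mu) ** 2

def log_evidence(x, pi, mu0, mu1):
    # Exact log p(x), obtained by summing over the latent variable.
    p = (1 - pi) * math.exp(log_normal(x, mu0)) + pi * math.exp(log_normal(x, mu1))
    return math.log(p)

def elbo(x, q1, pi, mu0, mu1):
    # Variational lower bound E_q[log p(x, z)] - E_q[log q(z)] for the
    # variational distribution q(z=1) = q1. By Jensen's inequality this
    # is <= log p(x) for every choice of q1.
    total = 0.0
    for z, qz in ((0, 1 - q1), (1, q1)):
        if qz == 0.0:
            continue  # 0 * log 0 contributes nothing
        log_joint = math.log(pi if z == 1 else 1 - pi) \
                    + log_normal(x, mu1 if z == 1 else mu0)
        total += qz * (log_joint - math.log(qz))
    return total

x, pi, mu0, mu1 = 0.5, 0.3, -1.0, 2.0
lp = log_evidence(x, pi, mu0, mu1)
for q1 in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert elbo(x, q1, pi, mu0, mu1) <= lp + 1e-12
```

The bound is tight exactly when q(z) equals the true posterior p(z|x); maximising it over q is what turns inference into the optimisation problem that variational methods exploit.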
