Variational Learning in Graphical Models and Neural Networks

Christopher Bishop

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

Variational methods are becoming increasingly popular for inference and learning in probabilistic models. By providing bounds on quantities of interest, they offer a more controlled approximation framework than techniques such as Laplace’s method, while avoiding the mixing and convergence issues of Markov chain Monte Carlo methods, or the possible computational intractability of exact algorithms. In this paper we review the underlying framework of variational methods and discuss example applications involving sigmoid belief networks, Boltzmann machines and feed-forward neural networks.
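The bound the abstract refers to is the standard variational lower bound: for any distribution q(z) over the latent variables, ln p(x) ≥ E_q[ln p(x, z)] − E_q[ln q(z)], with equality when q equals the true posterior. As a minimal illustrative sketch (not taken from the paper), the following Python snippet verifies this for a toy two-component Gaussian mixture, where the exact log marginal likelihood is tractable:

```python
import numpy as np

# Toy model (hypothetical example): latent z in {0, 1} with uniform prior,
# observation x | z ~ N(mu_z, 1).
prior = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
x = 0.3

def log_gauss(x, mu):
    # Log density of N(mu, 1) evaluated at x.
    return -0.5 * np.log(2 * np.pi) - 0.5 * (x - mu) ** 2

# Exact log marginal likelihood: ln p(x) = ln sum_z p(z) p(x | z).
log_joint = np.log(prior) + log_gauss(x, mu)
log_px = np.logaddexp(log_joint[0], log_joint[1])

def elbo(q0):
    # Variational lower bound L(q) = E_q[ln p(x, z)] - E_q[ln q(z)]
    # for a distribution q over z parameterised by q(z=0) = q0.
    q = np.array([q0, 1.0 - q0])
    return np.sum(q * (log_joint - np.log(q)))

# The bound holds for an arbitrary q ...
assert elbo(0.3) <= log_px + 1e-12
# ... and is tight when q is the exact posterior p(z | x).
posterior = np.exp(log_joint - log_px)
assert abs(elbo(posterior[0]) - log_px) < 1e-12
```

Maximising this bound over a restricted family of q distributions is what yields the controlled approximations discussed in the paper.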
Original language: English
Title of host publication: ICANN 98
Subtitle of host publication: Proceedings of the 8th International Conference on Artificial Neural Networks, Skövde, Sweden, 2–4 September 1998
Editors: Lars Niklasson, Mikael Boden, Tom Ziemke
Publisher: Springer London
Number of pages: 10
ISBN (Electronic): 978-1-4471-1599-1
ISBN (Print): 978-3-540-76263-8
Publication status: Published - 1998

Publication series

Name: Perspectives in Neural Computing
Publisher: Springer London
ISSN (Print): 1431-6854
