Ensemble learning in Bayesian neural networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Bayesian treatments of learning in neural networks are typically based either on a local Gaussian approximation to a mode of the posterior weight distribution, or on Markov chain Monte Carlo simulations. A third approach, called 'ensemble learning', was introduced by Hinton and van Camp (1993). It aims to approximate the posterior distribution by minimizing the Kullback-Leibler divergence between the true posterior and a parametric approximating distribution. The original derivation of a deterministic algorithm relied on the use of a Gaussian approximating distribution with a diagonal covariance matrix and hence was unable to capture the posterior correlations between parameters. In this chapter we show how the ensemble learning approach can be extended to full-covariance Gaussian distributions while remaining computationally tractable. We also extend the framework to deal with hyperparameters, leading to a simple re-estimation procedure. One of the benefits of our approach is that it yields a strict lower bound on the marginal likelihood, in contrast to other approximate procedures.
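To illustrate the objective that ensemble learning optimizes, the sketch below evaluates the lower bound L(q) = E_q[ln p(D|w) p(w)] + H[q] = ln p(D) - KL(q || posterior) for a full-covariance Gaussian q(w) = N(m, S). This is a minimal toy example, not the chapter's algorithm: it uses a conjugate Bayesian linear-regression model (chosen so that the exact marginal likelihood and posterior are available in closed form), and the hyperparameter values alpha, beta, the data, and all variable names are assumptions made for the sketch. For a neural network the expectations in L(q) are not available exactly and the chapter's tractable treatment is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian linear-regression model (an assumption for this sketch):
#   prior p(w) = N(0, alpha^-1 I), likelihood p(t|w) = N(X w, beta^-1 I)
alpha, beta = 1.0, 25.0
N, D = 30, 3
X = rng.normal(size=(N, D))
w_true = rng.normal(size=D)
t = X @ w_true + rng.normal(scale=beta ** -0.5, size=N)

# Exact log marginal likelihood ln p(t), available in closed form here
C = X @ X.T / alpha + np.eye(N) / beta
_, logdetC = np.linalg.slogdet(C)
log_evidence = -0.5 * (N * np.log(2 * np.pi) + logdetC + t @ np.linalg.solve(C, t))

def lower_bound(m, S):
    """Ensemble-learning bound L(q) = E_q[ln p(t|w) + ln p(w)] + H[q]
    for a full-covariance Gaussian q(w) = N(m, S)."""
    _, logdetS = np.linalg.slogdet(S)
    e_loglik = (-0.5 * beta * (np.sum((t - X @ m) ** 2) + np.trace(X @ S @ X.T))
                - 0.5 * N * np.log(2 * np.pi / beta))
    e_logprior = -0.5 * alpha * (m @ m + np.trace(S)) - 0.5 * D * np.log(2 * np.pi / alpha)
    entropy = 0.5 * (logdetS + D * (1.0 + np.log(2 * np.pi)))
    return e_loglik + e_logprior + entropy

# The exact Gaussian posterior saturates the bound; any other q stays strictly below.
S_post = np.linalg.inv(alpha * np.eye(D) + beta * X.T @ X)
m_post = beta * S_post @ X.T @ t
print("ln p(t)           :", log_evidence)
print("L(q) at posterior :", lower_bound(m_post, S_post))                      # equal
print("L(q), diagonal q  :", lower_bound(m_post, np.diag(np.diag(S_post))))    # lower
```

Running this shows L(q) coinciding with ln p(t) when q is the exact full-covariance posterior and dropping once the covariance is restricted to be diagonal, which mirrors the contrast the abstract draws between the full-covariance extension and the original diagonal-covariance approximation.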

Original language: English
Title of host publication: Generalization in Neural Networks and Machine Learning
Publisher: Springer-Verlag
Pages: 215-237
Number of pages: 20
Publication status: Published - 1 Jan 1998
