An analysis of contrastive divergence learning in Gaussian Boltzmann machines

Research output: Working paper

Abstract

The Boltzmann machine (BM) learning rule for random field models with latent variables can be problematic to use in practice. These problems have (at least partially) been attributed to the negative phase in BM learning, where a Gibbs sampling chain should be run to equilibrium. Hinton (1999, 2000) has introduced an alternative called contrastive divergence (CD) learning, where the chain is run for only 1 step. In this paper we analyse the mean and variance of the parameter update obtained after i steps of Gibbs sampling for a simple Gaussian BM. For this model our analysis shows that CD learning produces (as expected) a biased estimate of the true parameter update. We also show that the variance usually increases with i, and we quantify this behaviour.
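
To make the negative-phase issue and the CD shortcut concrete, here is a minimal sketch of CD-n learning for a toy Gaussian model with one visible unit v and one hidden unit h (zero means, coupling w, precision matrix [[1, -w], [-w, 1]], valid for |w| < 1). This parameterisation, and names such as cd_update, are illustrative assumptions, not the paper's exact setup.

import numpy as np

rng = np.random.default_rng(0)

def cd_update(v_data, w, n_steps=1):
    """One contrastive-divergence estimate of d log p / d w.

    Positive phase: sample h from p(h | v) with v clamped to the data.
    Negative phase: run n_steps of alternating Gibbs sampling starting
    from the data, then re-measure the same statistic <v h>.
    """
    # Conditionals of the assumed model: h|v ~ N(w v, 1) and v|h ~ N(w h, 1).
    h = w * v_data + rng.standard_normal(v_data.shape)
    positive = np.mean(v_data * h)          # <v h> with visibles clamped to data

    v, h_neg = v_data, h
    for _ in range(n_steps):                # n_steps=1 gives CD-1
        v = w * h_neg + rng.standard_normal(h_neg.shape)
        h_neg = w * v + rng.standard_normal(v.shape)
    negative = np.mean(v * h_neg)           # <v h> after n Gibbs steps

    return positive - negative              # biased estimate of the true update

# Toy run: visible data drawn from the model at w_true, then w learned by CD-1.
w_true, w = 0.6, 0.1
cov = np.linalg.inv(np.array([[1.0, -w_true], [-w_true, 1.0]]))
v_data = rng.multivariate_normal([0.0, 0.0], cov, size=5000)[:, 0]

for _ in range(200):
    w += 0.05 * cd_update(v_data, w, n_steps=1)
print(f"learned w = {w:.3f} (true value {w_true})")

Raising n_steps moves the negative-phase statistic closer to the equilibrium value and so reduces the bias of the update, while, as the paper's analysis shows for this class of model, the variance of the update usually grows with the number of steps.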
Original language: English
Pages: 1-14
Number of pages: 14
Publication status: Published - 2002
