On the quantitative analysis of deep belief networks

Ruslan Salakhutdinov, Iain Murray

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution

Abstract

Deep Belief Networks (DBNs) are generative models that contain many layers of hidden variables. Efficient greedy algorithms for learning and approximate inference have allowed these models to be applied successfully in many application domains. The main building block of a DBN is a bipartite undirected graphical model called a restricted Boltzmann machine (RBM). Due to the presence of the partition function, model selection, complexity control, and exact maximum likelihood learning in RBMs are intractable. We show that Annealed Importance Sampling (AIS) can be used to efficiently estimate the partition function of an RBM, and we present a novel AIS scheme for comparing RBMs with different architectures. We further show how an AIS estimator, along with approximate inference, can be used to estimate a lower bound on the log-probability that a DBN model with multiple hidden layers assigns to the test data. This is, to our knowledge, the first step towards obtaining quantitative results that would allow us to directly assess the performance of Deep Belief Networks as generative models of data.
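To illustrate the idea of estimating an RBM's partition function with AIS, here is a minimal, hypothetical sketch in NumPy. It anneals from a uniform base distribution by scaling all RBM parameters by an inverse temperature β (one simple choice of annealing path, not necessarily the exact scheme proposed in the paper), and checks the estimate against a brute-force computation that is only feasible for a toy-sized model. All parameter values and run settings below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny RBM (hypothetical parameters) so the exact partition function
# can be computed by brute force for comparison.
V, H = 5, 4                       # numbers of visible and hidden units
W = 0.5 * rng.standard_normal((V, H))
b = 0.1 * rng.standard_normal(V)  # visible biases
c = 0.1 * rng.standard_normal(H)  # hidden biases

def log_f(v, beta):
    """Log unnormalized marginal p*_beta(v), all parameters scaled by beta."""
    return beta * (v @ b) + np.sum(np.logaddexp(0.0, beta * (v @ W + c)), axis=-1)

def gibbs_step(v, beta):
    """One Gibbs sweep (sample h given v, then v given h) targeting p_beta."""
    ph = 1.0 / (1.0 + np.exp(-beta * (v @ W + c)))
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = 1.0 / (1.0 + np.exp(-beta * (h @ W.T + b)))
    return (rng.random(pv.shape) < pv).astype(float)

def ais_log_z(n_runs=100, n_betas=1000):
    betas = np.linspace(0.0, 1.0, n_betas)
    # At beta = 0 the model is uniform over v with p*_0(v) = 2**H,
    # so log Z_0 = (V + H) * log 2.
    log_z0 = (V + H) * np.log(2.0)
    v = (rng.random((n_runs, V)) < 0.5).astype(float)  # exact samples from p_0
    log_w = np.zeros(n_runs)
    for b_prev, b_next in zip(betas[:-1], betas[1:]):
        # Accumulate the AIS importance weight, then move the chain.
        log_w += log_f(v, b_next) - log_f(v, b_prev)
        v = gibbs_step(v, b_next)
    # log Z_B estimate: log Z_0 + log of the mean importance weight.
    return log_z0 + np.logaddexp.reduce(log_w) - np.log(n_runs)

# Brute-force log Z over all 2**V visible states (feasible only for tiny models).
all_v = np.array([[(i >> j) & 1 for j in range(V)] for i in range(2 ** V)],
                 dtype=float)
exact = np.logaddexp.reduce(log_f(all_v, 1.0))
est = ais_log_z()
```

With enough intermediate temperatures and runs, `est` tracks `exact` closely on a toy model like this; in the regime the paper targets, only the AIS estimate is computable.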
Original language: English
Title of host publication: Proceedings of the 25th International Conference on Machine Learning (ICML '08)
Place of publication: New York, NY, USA
Publisher: ACM
Pages: 872-879
Number of pages: 8
ISBN (Print): 978-1-60558-205-4
Publication status: Published - 2008

