Learning in Markov Random Fields with Contrastive Free Energies

Max Welling, Charles Sutton

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Learning Markov random field (MRF) models is notoriously hard due to the presence of a global normalization factor. In this paper we present a new framework for learning MRF models based on the contrastive free energy (CF) objective function. In this scheme the parameters are updated in an attempt to match the average statistics of the data distribution and a distribution which is (partially or approximately) “relaxed” to the equilibrium distribution. We show that maximum likelihood, mean field, contrastive divergence and pseudo-likelihood objectives can be understood in this paradigm. Moreover, we propose and study a new learning algorithm: the “k-step Kikuchi/Bethe approximation”. This algorithm is then tested on a conditional random field model with “skip-chain” edges to model long-range interactions in text data. It is demonstrated that with no loss in accuracy, the training time is brought down on average from 19 hours (BP-based learning) to 83 minutes, an order of magnitude improvement.
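To make the contrastive idea concrete, here is a minimal sketch of a contrastive-divergence-style update for a small binary MRF, where the intractable model expectation is replaced by statistics of chains "relaxed" for only k Gibbs sweeps from the data. This is a generic illustration of the matching-of-statistics principle the abstract describes, not the paper's k-step Kikuchi/Bethe algorithm; all function names, the fully connected 0/1 parameterization, and the learning-rate choice are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(x, W, b, rng):
    """One Gibbs sweep over a fully connected binary (0/1) MRF with symmetric W."""
    x = x.copy()
    for i in range(x.shape[1]):
        # Conditional of unit i given the rest: sigmoid of its local field.
        field = x @ W[:, i] - x[:, i] * W[i, i] + b[i]
        p = 1.0 / (1.0 + np.exp(-field))
        x[:, i] = (rng.random(x.shape[0]) < p).astype(float)
    return x

def contrastive_update(X, W, b, k=1, lr=0.01, rng=rng):
    """Update parameters to match data statistics against a distribution
    partially relaxed for k Gibbs sweeps starting from the data (CD-k style)."""
    Xk = X.copy()
    for _ in range(k):
        Xk = gibbs_sweep(Xk, W, b, rng)
    n = X.shape[0]
    # Difference of average sufficient statistics: data minus k-step samples.
    dW = (X.T @ X - Xk.T @ Xk) / n
    db = X.mean(axis=0) - Xk.mean(axis=0)
    np.fill_diagonal(dW, 0.0)  # no self-interactions in the pairwise model
    return W + lr * dW, b + lr * db

# Tiny usage example on random binary data (hypothetical, for illustration only).
X = (rng.random((100, 5)) < 0.5).astype(float)
W, b = np.zeros((5, 5)), np.zeros(5)
for _ in range(50):
    W, b = contrastive_update(X, W, b, k=3)
```

In the paper's framework, the k Gibbs sweeps above would be replaced by a partial or approximate relaxation under a mean-field, Bethe, or Kikuchi free energy; the structure of the update (data statistics minus relaxed-distribution statistics) is the shared ingredient.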
Original language: English
Title of host publication: Tenth International Workshop on Artificial Intelligence and Statistics
Number of pages: 8
Publication status: Published - 2005
