Incremental Sigmoid Belief Networks for Grammar Learning

James Henderson, Ivan Titov

Research output: Contribution to journal › Article (peer-reviewed)

Abstract

We propose a class of Bayesian networks appropriate for structured prediction problems where the Bayesian network's model structure is a function of the predicted output structure. These incremental sigmoid belief networks (ISBNs) make decoding possible because inference with partial output structures does not require summing over the unboundedly many compatible model structures, due to their directed edges and incrementally specified model structure. ISBNs are specifically targeted at challenging structured prediction problems such as natural language parsing, where learning the domain's complex statistical dependencies benefits from large numbers of latent variables. While exact inference in ISBNs with large numbers of latent variables is not tractable, we propose two efficient approximations. First, we demonstrate that a previous neural network parsing model can be viewed as a coarse mean-field approximation to inference with ISBNs. We then derive a more accurate but still tractable variational approximation, which proves effective in artificial experiments. We compare the effectiveness of these models on a benchmark natural language parsing task, where they achieve accuracy competitive with the state-of-the-art. The model which is a closer approximation to an ISBN has better parsing accuracy, suggesting that ISBNs are an appropriate abstract model of natural language grammar learning.
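The abstract's observation that a neural network parser can be viewed as a coarse mean-field approximation to ISBN inference can be illustrated with a small sketch (the function names, shapes, and weights below are illustrative, not taken from the paper): replacing each binary latent vector in a sigmoid belief network by its expected value collapses layer-wise inference into an ordinary feed-forward sigmoid network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_forward(layers, mu0):
    """Coarse mean-field pass through a sigmoid belief network.

    Each binary latent vector is replaced by its mean, so the
    stochastic network reduces to a deterministic sigmoid
    (i.e. neural) network -- the approximation the abstract
    attributes to the earlier neural parsing model.
    """
    mu = mu0
    for W, b in layers:
        # Mean-field update: the mean of each binary unit is the
        # sigmoid of the weighted means of its parents.
        mu = sigmoid(W @ mu + b)
    return mu

# Toy two-layer network with random weights (illustrative only).
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
mu = mean_field_forward(layers, np.array([0.5, 0.5, 0.5]))
```

The resulting `mu` holds approximate posterior means in (0, 1); the paper's second, more accurate variational approximation refines this by tracking additional statistics rather than means alone.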
Original language: English
Pages (from-to): 3541-3570
Number of pages: 30
Journal: Journal of Machine Learning Research
Publication status: Published - 3 Jan 2010


