Inducing Tree-Substitution Grammars

Trevor Cohn, Phil Blunsom, Sharon Goldwater

Research output: Contribution to journal › Article › peer-review

Abstract

Inducing a grammar from text has proven to be a notoriously challenging learning task despite decades of research. The primary reason for its difficulty is that, in order to induce plausible grammars, the underlying model must be capable of representing the intricacies of language while also ensuring that it can be readily learned from data. The majority of existing work on grammar induction has favoured model simplicity (and thus learnability) over representational capacity by using context-free grammars and first-order dependency grammars, which are not sufficiently expressive to model many common linguistic constructions. We propose a novel compromise by inferring a probabilistic tree substitution grammar, a formalism which allows for arbitrarily large tree fragments and can thereby better represent complex linguistic structures. To limit the model's complexity we employ a Bayesian non-parametric prior which biases the model towards a sparse grammar with shallow productions. We demonstrate the model's efficacy on supervised phrase-structure parsing, where we induce a latent segmentation of the training treebank, and on unsupervised dependency grammar induction. In both cases the model uncovers interesting latent linguistic structures while producing competitive results.
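The keywords below name the Pitman-Yor process and its Chinese restaurant process representation as the non-parametric prior over tree fragments. As a rough illustration of how such a prior biases a grammar towards a small set of frequently reused productions, the Python sketch below samples from a Pitman-Yor Chinese restaurant process over toy fragment labels. The class name PitmanYorCRP, the one-table-per-fragment-type simplification, and the bracketed toy fragments are illustrative assumptions only; this is not the paper's actual Gibbs sampler or grammar representation.

```python
import random
from collections import Counter

class PitmanYorCRP:
    """Illustrative Pitman-Yor Chinese restaurant process.

    Each draw either reuses a previously sampled fragment (with probability
    proportional to its discounted count) or draws a fresh fragment from the
    base distribution. The rich-get-richer dynamics concentrate probability
    mass on a small set of frequently reused fragments.

    Simplification (assumption): one table per fragment type, so the table
    count equals the number of distinct fragments seen so far.
    """

    def __init__(self, discount, strength, base_sampler):
        self.d = discount                 # discount parameter, 0 <= d < 1
        self.theta = strength             # strength (concentration) parameter
        self.base_sampler = base_sampler  # callable returning a new fragment label
        self.counts = Counter()           # fragment -> number of times drawn

    def sample(self):
        total = sum(self.counts.values())
        num_types = len(self.counts)
        # Probability of drawing a brand-new fragment from the base distribution.
        p_new = 1.0 if total == 0 else (self.theta + self.d * num_types) / (self.theta + total)
        if random.random() < p_new:
            item = self.base_sampler()
        else:
            # Reuse an existing fragment with weight (count - discount).
            frags = list(self.counts)
            weights = [self.counts[f] - self.d for f in frags]
            item = random.choices(frags, weights=weights)[0]
        self.counts[item] += 1
        return item

# Hypothetical base distribution over bracketed "elementary tree" labels.
base = lambda: random.choice(["(S (NP) (VP))", "(NP (DT the) (NN))", "(VP (VBZ) (NP))"])
crp = PitmanYorCRP(discount=0.5, strength=1.0, base_sampler=base)
print(Counter(crp.sample() for _ in range(50)))
```

Running the sketch repeatedly shows the rich-get-richer effect: once a fragment has been drawn a few times it dominates subsequent draws, which is the kind of sparsity bias the abstract attributes to the non-parametric prior.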
Original language: English
Pages (from-to): 3053-3096
Number of pages: 44
Journal: Journal of Machine Learning Research
Volume: 11
Publication status: Published - Dec 2010

Keywords

  • grammar induction
  • tree substitution grammar
  • Bayesian non-parametrics
  • Pitman-Yor process
  • Chinese restaurant process
