Online Learning Mechanisms for Bayesian Models of Word Segmentation

Lisa Pearl, Sharon Goldwater, Mark Steyvers

Research output: Contribution to journal › Article › peer-review

Abstract

In recent years, Bayesian models have become increasingly popular as a way of understanding human cognition. Ideal learner Bayesian models assume that cognition can be usefully understood as optimal behavior under uncertainty, a hypothesis that has been supported by a number of modeling studies across various domains (e.g., Griffiths and Tenenbaum, Cognitive Psychology, 51, 354–384, 2005; Xu and Tenenbaum, Psychological Review, 114, 245–272, 2007). The models in these studies aim to explain why humans behave as they do given the task and data they encounter, but typically avoid some questions addressed by more traditional psychological models, such as how the observed behavior is produced given constraints on memory and processing. Here, we use the task of word segmentation as a case study for investigating these questions within a Bayesian framework. We consider some limitations of the infant learner, and develop several online learning algorithms that take these limitations into account. Each algorithm can be viewed as a different method of approximating the same ideal learner. When tested on corpora of English child-directed speech, we find that the constrained learner's behavior depends non-trivially on how the learner's limitations are implemented. Interestingly, sometimes biases that are helpful to an ideal learner hinder a constrained learner, and in a few cases, constrained learners perform as well as or better than the ideal learner. This suggests that the transition from a computational-level solution for acquisition to an algorithmic-level one is not straightforward.
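To make the idea of an online approximation to an ideal Bayesian segmenter concrete, here is a minimal illustrative sketch (not the paper's actual algorithms): a unigram lexicon model in the spirit of a Chinese-restaurant-process prior, where the learner finds the best segmentation of each unsegmented utterance by dynamic programming and then commits to it, updating its word counts incrementally without revising past decisions. The hyperparameters `alpha`, `p_char`, and `p_stop`, the 10-character word-length cap, and the class name are all assumptions made for this sketch.

```python
import math
from collections import Counter

class OnlineSegmenter:
    """Greedy online word segmenter with a unigram lexicon model.

    Illustrative sketch only: a CRP-style unigram model with a geometric
    spelling prior for novel words, updated incrementally after each
    utterance (no resampling of earlier segmentation decisions).
    """

    def __init__(self, alpha=20.0, p_char=1.0 / 27, p_stop=0.5):
        self.alpha = alpha        # concentration: willingness to posit novel words
        self.p_char = p_char      # uniform probability of each character
        self.p_stop = p_stop      # geometric stopping probability (word length)
        self.counts = Counter()   # word type -> token count seen so far
        self.total = 0            # total word tokens seen so far

    def word_logprob(self, w):
        # P(w) mixes observed relative frequency with a base distribution
        # over spellings (geometric length, uniform characters).
        base = (len(w) * math.log(self.p_char)
                + (len(w) - 1) * math.log(1.0 - self.p_stop)
                + math.log(self.p_stop))
        num = self.counts[w] + self.alpha * math.exp(base)
        return math.log(num) - math.log(self.total + self.alpha)

    def segment(self, utt):
        # Viterbi DP over split points: best[i] = best score for utt[:i].
        n = len(utt)
        best = [0.0] + [-math.inf] * n
        back = [0] * (n + 1)
        for i in range(1, n + 1):
            for j in range(max(0, i - 10), i):   # cap word length at 10 chars
                score = best[j] + self.word_logprob(utt[j:i])
                if score > best[i]:
                    best[i], back[i] = score, j
        words, i = [], n
        while i > 0:
            words.append(utt[back[i]:i])
            i = back[i]
        return words[::-1]

    def observe(self, utt):
        # Online update: commit to the current best segmentation.
        words = self.segment(utt)
        for w in words:
            self.counts[w] += 1
            self.total += 1
        return words
```

Because the learner commits greedily to one segmentation per utterance, earlier errors can propagate; the abstract's finding that different implementations of such limitations yield non-trivially different behavior is exactly about design choices of this kind.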
Original language: English
Pages (from-to): 107-132
Number of pages: 26
Journal: Research on Language and Computation
Volume: 8
DOIs
Publication status: Published - 2010

