Incremental Bayesian Learning of Semantic Categories

Lea Frermann, Mirella Lapata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper we focus on categories acquired from natural language stimuli, that is, words (e.g., chair is a member of the FURNITURE category). We present a Bayesian model which, unlike previous work, learns both categories and their features in a single process. Our model employs particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference in an incremental setting. Comparison against a state-of-the-art graph-based approach reveals that our model learns qualitatively better categories and demonstrates cognitive plausibility during learning.
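
The abstract mentions particle filtering as the incremental inference machinery. As a rough illustration only (the paper defines the actual generative model and proposal distributions), the sketch below shows one generic bootstrap-style particle filter update: propose, reweight by the likelihood of the new observation, and resample when the weights degenerate. The names `likelihood_fn`, `propose_fn`, and `ess_threshold` are hypothetical placeholders and are not taken from the paper.

```python
import numpy as np

def effective_sample_size(weights):
    # Effective number of particles; low values signal weight degeneracy.
    return 1.0 / np.sum(weights ** 2)

def particle_filter_step(particles, weights, observation, likelihood_fn,
                         propose_fn, rng, ess_threshold=0.5):
    """One incremental update over a stream of stimuli (illustrative sketch,
    not the paper's model).

    particles     : list of hypotheses (e.g., category assignments so far)
    weights       : normalised importance weights, one per particle
    observation   : the new stimulus (e.g., a word in context)
    likelihood_fn : returns p(observation | particle); supplied by the model
    propose_fn    : extends a particle with an assignment for the observation
    """
    # Propose an extension of each hypothesis to cover the new observation.
    particles = [propose_fn(p, observation, rng) for p in particles]

    # Reweight by how well each extended hypothesis explains the observation.
    weights = weights * np.array([likelihood_fn(p, observation) for p in particles])
    weights = weights / weights.sum()

    # Resample when the effective sample size drops below a fraction of N.
    n = len(particles)
    if effective_sample_size(weights) < ess_threshold * n:
        idx = rng.choice(n, size=n, p=weights)
        particles = [particles[i] for i in idx]
        weights = np.full(n, 1.0 / n)

    return particles, weights
```

In a category-acquisition setting, each particle would hold one hypothesised clustering of the words seen so far, and the update would be applied once per incoming word, keeping the memory and computation per step bounded.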
Original language: English
Title of host publication: Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics
Place of Publication: Gothenburg, Sweden
Publisher: Association for Computational Linguistics
Pages: 249-258
Number of pages: 10
ISBN (Print): 978-1-937284-78-7
Publication status: Published - 1 Apr 2014
