Grounded Models of Semantic Representation

Carina Silberer, Mirella Lapata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

A popular tradition of studying semantic representation has been driven by the assumption that word meaning can be learned from the linguistic environment, despite ample evidence suggesting that language is grounded in perception and action. In this paper we present a comparative study of models that represent word meaning based on linguistic and perceptual data. Linguistic information is approximated by naturally occurring corpora and sensorimotor experience by feature norms (i.e., attributes native speakers consider important in describing the meaning of a word). The models differ in terms of the mechanisms by which they integrate the two modalities. Experimental results show that a closer correspondence to human data can be obtained by uncovering latent information shared among the textual and perceptual modalities rather than arriving at semantic knowledge by concatenating the two.
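To make the contrast between the two integration mechanisms concrete, here is a minimal, hypothetical sketch in Python with numpy. The toy word set, feature matrices, and the use of a truncated SVD as the latent-fusion step are illustrative assumptions, not the paper's actual models or data; they only show the difference between concatenating modalities and projecting them into a shared latent space.

```python
import numpy as np

# Toy example (hypothetical data): 4 words described by textual
# co-occurrence features and perceptual, feature-norm-like attributes.
words = ["dog", "cat", "car", "truck"]
textual = np.array([
    [5.0, 1.0, 0.0],   # dog
    [4.0, 2.0, 0.0],   # cat
    [0.0, 1.0, 5.0],   # car
    [0.0, 0.5, 4.0],   # truck
])
perceptual = np.array([
    [3.0, 0.0],        # dog:   has_fur, has_wheels
    [3.0, 0.0],        # cat
    [0.0, 4.0],        # car
    [0.0, 3.0],        # truck
])

def normalize(m):
    """L2-normalise rows so each modality contributes on the same scale."""
    return m / np.linalg.norm(m, axis=1, keepdims=True)

# Mechanism 1: simple concatenation of the two modalities.
concat = np.hstack([normalize(textual), normalize(perceptual)])

# Mechanism 2: uncover latent structure shared by the modalities,
# here approximated by a truncated SVD of the concatenated space.
u, s, vt = np.linalg.svd(concat, full_matrices=False)
k = 2
latent = u[:, :k] * s[:k]

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In both spaces, "dog" should be closer to "cat" than to "car".
print(cosine(concat[0], concat[1]) > cosine(concat[0], concat[2]))
print(cosine(latent[0], latent[1]) > cosine(latent[0], latent[2]))
```

Both mechanisms keep the animal/vehicle distinction in this toy setting; the paper's finding is that on real human similarity data the latent-fusion family tracks judgments more closely than plain concatenation.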
Original language: English
Title of host publication: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
Publisher: Association for Computational Linguistics
Pages: 1423-1433
Number of pages: 11
Publication status: Published - 2012
