Visual Information in Semantic Representation

Yansong Feng, Mirella Lapata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The question of how meaning might be acquired by young children and represented by adult speakers of a language is one of the most debated topics in cognitive science. Existing semantic representation models are primarily amodal, based solely on information provided by the linguistic input, despite ample evidence that the cognitive system is also sensitive to perceptual information. In this work we exploit the vast resource of images and associated documents available on the web and develop a model of multimodal meaning representation that integrates linguistic and visual context. Experimental results show that a closer correspondence to human data can be obtained by taking the visual modality into account.
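To make the general idea concrete, the following is a minimal sketch, not the paper's actual model: it fuses a textual feature vector (standing in for co-occurrence counts) and a visual feature vector (standing in for image-derived features) into one multimodal representation, then checks how well the resulting similarities agree with human ratings. All vectors, word pairs, ratings, and the weighting parameter alpha are hypothetical, introduced here purely for illustration.

```python
# Minimal sketch (not the authors' actual model): fuse textual and
# visual feature vectors into a multimodal word representation and
# measure agreement with (hypothetical) human similarity judgments.
import numpy as np
from scipy.stats import spearmanr

def normalize(v):
    """Scale a vector to unit length so both modalities contribute comparably."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def multimodal(text_vec, image_vec, alpha=0.5):
    """Concatenate weighted, normalized textual and visual vectors.
    alpha balances the two modalities (a free parameter in this sketch)."""
    return np.concatenate([alpha * normalize(text_vec),
                           (1 - alpha) * normalize(image_vec)])

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy features: (textual vector, visual vector) per word -- hypothetical data.
words = {
    "dog": (np.array([4.0, 1.0, 0.0]), np.array([2.0, 3.0])),
    "cat": (np.array([3.0, 2.0, 0.0]), np.array([2.0, 2.5])),
    "car": (np.array([0.0, 1.0, 5.0]), np.array([5.0, 0.5])),
}

pairs = [("dog", "cat"), ("dog", "car"), ("cat", "car")]
human = [8.5, 2.0, 2.5]  # hypothetical human similarity ratings

model = [cosine(multimodal(*words[a]), multimodal(*words[b]))
         for a, b in pairs]
rho, _ = spearmanr(human, model)
print(f"Spearman correlation with human ratings: {rho:.2f}")
```

Evaluating against human similarity judgments via rank correlation is the standard way such models are compared to human data; the particular fusion strategy above (weighted concatenation) is only one of several possibilities.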
Original language: English
Title of host publication: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL
Publisher: Association for Computational Linguistics
Pages: 91-99
Number of pages: 9
Publication status: Published - 2010
