The question of how meaning is acquired by young children and represented by adult speakers of a language is one of the most debated topics in cognitive science. Existing models of semantic representation are primarily amodal, relying on information provided by the linguistic input alone, despite ample evidence that the cognitive system is also sensitive to perceptual information. In this work we exploit the vast resource of images and associated documents available on the web to develop a model of multimodal meaning representation grounded in both linguistic and visual context. Experimental results show that taking the visual modality into account yields a closer correspondence to human data.
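The abstract does not specify how the linguistic and visual modalities are combined, so the following is only an illustrative sketch under assumed conventions: each word is given a hypothetical textual vector (e.g., co-occurrence counts) and a hypothetical visual vector (e.g., features extracted from associated images), and a simple multimodal representation concatenates the L2-normalised unimodal vectors before measuring similarity.

```python
import math

def normalize(v):
    """Scale a vector to unit L2 length (returned unchanged if all-zero)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else v

def multimodal(text_vec, visual_vec):
    """Concatenate normalised textual and visual vectors.

    This is one simple fusion strategy, assumed here for illustration;
    the paper's actual model may combine the modalities differently.
    """
    return normalize(text_vec) + normalize(visual_vec)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical toy vectors for two words (not from the paper's data):
text = {"dog": [3.0, 0.0, 1.0], "cat": [2.0, 1.0, 1.0]}
vis = {"dog": [0.9, 0.1], "cat": [0.8, 0.2]}

sim = cosine(multimodal(text["dog"], vis["dog"]),
             multimodal(text["cat"], vis["cat"]))
```

Word similarities computed this way can then be compared against human similarity judgements, which is the kind of evaluation the abstract alludes to.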
Title of host publication: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL
Publisher: Association for Computational Linguistics
Number of pages: 9
Publication status: Published - 2010