Models of Semantic Representation with Visual Attributes

Carina Silberer, Vittorio Ferrari, Mirella Lapata

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We consider the problem of grounding the meaning of words in the physical world and focus on the visual modality which we represent by visual attributes. We create a new large-scale taxonomy of visual attributes covering more than 500 concepts and their corresponding 688K images. We use this dataset to train attribute classifiers and integrate their predictions with text-based distributional models of word meaning. We show that these bimodal models give a better fit to human word association data compared to amodal models and word representations based on hand-crafted norming data.
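As a rough illustration of the bimodal setup the abstract describes, the sketch below concatenates a text-based distributional vector with a vector of visual-attribute classifier scores and compares concepts via cosine similarity, a common proxy for association strength. The toy vectors, the per-modality normalization, and the concatenation scheme are illustrative assumptions, not the paper's exact fusion method.

```python
import numpy as np

def l2_normalize(v):
    """Scale a vector to unit length (zero vectors are returned unchanged)."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def bimodal_vector(text_vec, attr_vec):
    """Combine a distributional text vector with visual-attribute classifier
    scores by normalizing each modality and concatenating (an assumed scheme)."""
    return np.concatenate([l2_normalize(text_vec), l2_normalize(attr_vec)])

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy, made-up vectors for two concepts (purely for demonstration).
text = {"dog": np.array([0.2, 0.7, 0.1]), "cat": np.array([0.25, 0.6, 0.15])}
attrs = {"dog": np.array([0.9, 0.1, 0.8]), "cat": np.array([0.85, 0.2, 0.7])}

dog = bimodal_vector(text["dog"], attrs["dog"])
cat = bimodal_vector(text["cat"], attrs["cat"])
print(f"bimodal similarity(dog, cat) = {cosine(dog, cat):.3f}")
```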
Original language: English
Title of host publication: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Place of Publication: Sofia, Bulgaria
Publisher: Association for Computational Linguistics
Pages: 572-582
Number of pages: 11
Publication status: Published - 1 Aug 2013
