Disambiguating Visual Verbs
Abstract
In this article, we introduce a new task, visual sense disambiguation for verbs: given an image and a verb, assign the correct sense of the verb, i.e., the one that describes the action depicted in the image. Just as textual word sense disambiguation is useful for a wide range of NLP tasks, visual sense disambiguation can be useful for multimodal tasks such as image retrieval, image description, and text illustration. We introduce a new dataset, VerSe (short for Verb Sense), which augments existing multimodal datasets (COCO and TUHOI) with verb and sense labels. We explore supervised and unsupervised models for the sense disambiguation task using textual, visual, and multimodal embeddings. We also consider a scenario in which we must detect the verb depicted in an image prior to predicting its sense (i.e., there is no verbal information associated with the image). We find that textual embeddings perform well when gold-standard annotations (object labels and image descriptions) are available, while multimodal embeddings perform well on unannotated images. VerSe is publicly available at https://github.com/spandanagella/verse.
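The unsupervised setting the abstract describes can be pictured as a nearest-sense lookup: embed the image's context (object labels or descriptions), embed each candidate sense of the verb, and pick the sense whose embedding is most similar. The sketch below is purely illustrative; the function names, sense labels, and toy vectors are assumptions for exposition, not taken from VerSe or the paper's models.

```python
# Minimal sketch of unsupervised verb sense disambiguation by embedding
# similarity. All names and toy vectors below are illustrative; a real
# system would use learned textual/visual/multimodal embeddings.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def disambiguate(context_vec, sense_vecs):
    """Return the sense whose embedding is closest to the image context."""
    return max(sense_vecs, key=lambda s: cosine(context_vec, sense_vecs[s]))

# Toy example: the verb "play" with two senses; the image shows a guitar.
senses = {
    "play (engage in a game)": [1.0, 0.1, 0.0],
    "play (perform music)":    [0.1, 1.0, 0.2],
}
image_context = [0.2, 0.9, 0.1]  # e.g. embedding of labels "person", "guitar"
print(disambiguate(image_context, senses))
```

In this toy run the music sense wins, since the image-context vector is far closer to it than to the game sense; with gold-standard object labels or descriptions, the textual side of the context embedding carries most of this signal.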
| Original language | English |
| --- | --- |
| Pages (from-to) | 311–322 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 41 |
| Issue number | 2 |
| Early online date | 27 Dec 2017 |
| DOIs | |
| Publication status | Published - 1 Feb 2019 |
Projects
- TransModal: Translating from Multiple Modalities into Text
  Lapata, M. (Principal Investigator)
  1/09/16 → 31/08/22
  Project: Research (Finished)
Profiles
- Frank Keller
  - School of Informatics - Personal Chair in Computational Cognitive Science
  - Institute of Language, Cognition and Computation
  - Language, Interaction, and Robotics
  - Person: Academic: Research Active