Disambiguating Visual Verbs

Spandana Gella, Frank Keller, Mirella Lapata

Research output: Contribution to journal › Article › peer-review

Abstract

In this article, we introduce a new task, visual sense disambiguation for verbs: given an image and a verb, assign the correct sense of the verb, i.e., the one that describes the action depicted in the image. Just as textual word sense disambiguation is useful for a wide range of NLP tasks, visual sense disambiguation can be useful for multimodal tasks such as image retrieval, image description, and text illustration. We introduce a new dataset, VerSe (short for Verb Sense), which augments existing multimodal datasets (COCO and TUHOI) with verb and sense labels. We explore supervised and unsupervised models for the sense disambiguation task using textual, visual, and multimodal embeddings. We also consider a scenario in which we must detect the verb depicted in an image prior to predicting its sense (i.e., there is no verbal information associated with the image). We find that textual embeddings perform well when gold-standard annotations (object labels and image descriptions) are available, while multimodal embeddings perform well on unannotated images. VerSe is publicly available at https://github.com/spandanagella/verse.
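To make the unsupervised setting concrete, the sketch below scores each candidate sense of a verb by the cosine similarity between an embedding of its dictionary gloss and an embedding of the image's textual annotations (a description or object labels), returning the highest-scoring sense. This is an illustrative reconstruction only, not the authors' model: the embed stub, the sense inventory, and the glosses are all hypothetical placeholders.

import numpy as np

# Placeholder text encoder: in the paper's setting this would be a real
# textual, visual, or multimodal embedding. Here each token gets a fixed
# hashed random unit vector, so the sketch is self-contained and runnable;
# similarity is then driven by token overlap rather than true semantics.
def embed(text, dim=64):
    vecs = []
    for token in text.lower().split():
        rng = np.random.default_rng(abs(hash(token)) % 2**32)
        vecs.append(rng.standard_normal(dim))
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

def disambiguate(senses, image_text):
    # Pick the sense whose gloss embedding is most similar (cosine,
    # since embed returns unit vectors) to the image's textual side.
    img = embed(image_text)
    return max(senses, key=lambda s: float(embed(senses[s]) @ img))

# Hypothetical sense inventory for "play" (glosses paraphrased, not
# taken from any real dictionary).
senses = {
    "play-sport": "take part in a game with a ball",
    "play-music": "perform music on an instrument",
}
print(disambiguate(senses, "a child kicks a ball on a grass field"))

With real embeddings the same maximum-similarity rule applies unchanged; only the encoder would differ, e.g., swapping embed for a multimodal representation when no textual annotations are available.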
Original language: English
Pages (from-to): 311-322
Number of pages: 12
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 41
Issue number: 2
Early online date: 27 Dec 2017
Publication status: Published - 1 Feb 2019
