iVQA: Inverse Visual Question Answering

Feng Liu, Tao Xiang, Timothy Hospedales, Wankou Yang, Changyin Sun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose the inverse problem of Visual Question Answering (iVQA), and explore its suitability as a benchmark for visuo-linguistic understanding. The iVQA task is to generate a question that corresponds to a given image and answer pair. Since the answers are less informative than the questions, and the questions have less learnable bias, an iVQA model must understand the image better than a VQA model to be successful. We pose question generation as a multi-modal dynamic inference process and propose an iVQA model that can gradually adjust its focus of attention guided by both a partially generated question and the answer. For evaluation, apart from existing linguistic metrics, we propose a new ranking metric. This metric compares the ground truth question's rank among a list of distractors, which allows the drawbacks of different algorithms and sources of error to be studied. Experimental results show that our model can generate diverse, grammatically correct and content-correlated questions that match the given answer.
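The ranking metric described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the scoring function, candidate set, and numbers below are hypothetical stand-ins for any model that assigns a score to a candidate question given the image and answer.

```python
def rank_of_ground_truth(gt_score: float, distractor_scores: list[float]) -> int:
    """Return the 1-based rank of the ground-truth question when all
    candidates (ground truth + distractors) are sorted by model score
    in descending order; rank 1 means the model prefers the true question."""
    # Count how many distractors the model scores above the ground truth.
    return 1 + sum(s > gt_score for s in distractor_scores)

# Hypothetical example: the ground truth outscores two of three distractors.
print(rank_of_ground_truth(0.8, [0.9, 0.5, 0.3]))  # prints 2
```

Aggregating this rank over a test set (e.g. mean rank, or the fraction of examples ranked first) yields a single comparable number, which is what lets different algorithms' failure modes be contrasted.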
Original language: English
Title of host publication: Computer Vision and Pattern Recognition 2018
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 8611-8619
Number of pages: 9
ISBN (Electronic): 978-1-5386-6420-9
DOIs
Publication status: Published - 17 Dec 2018
Event: Computer Vision and Pattern Recognition 2018 - Salt Lake City, United States
Duration: 18 Jun 2018 - 22 Jun 2018
http://cvpr2018.thecvf.com/

Publication series

Name
ISSN (Electronic): 2575-7075

Conference

Conference: Computer Vision and Pattern Recognition 2018
Abbreviated title: CVPR 2018
Country/Territory: United States
City: Salt Lake City
Period: 18/06/18 - 22/06/18
Internet address: http://cvpr2018.thecvf.com/

