We present FLIPDIAL, a generative model for Visual Dialogue that simultaneously plays the role of both participants in a visually-grounded dialogue. Given context in the form of an image and an associated caption summarising the contents of the image, FLIPDIAL learns both to answer questions and to pose them, and is capable of generating entire sequences of dialogue (question-answer pairs) which are diverse and relevant to the image. To do this, FLIPDIAL relies on a simple but surprisingly powerful idea: it uses convolutional neural networks (CNNs) to encode entire dialogues directly, implicitly capturing dialogue context, and conditional VAEs to learn the generative model. FLIPDIAL outperforms the state-of-the-art model on the sequential answering task (1VD) on the VisDial dataset by 5 points in Mean Rank using the generated answers. We are the first to extend this paradigm to full two-way visual dialogue (2VD), where our model is capable of generating both questions and answers in sequence based on a visual input, and for which we propose a set of novel evaluation measures and metrics.
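The conditional-VAE formulation the abstract describes can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: a dialogue encoding x is generated conditioned on a context vector c (standing in for the CNN features of the image and caption), and all dimensions, layer sizes, and names are assumptions made here for illustration.

```python
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    """Minimal sketch of a conditional VAE in the spirit of FLIPDIAL.

    x: a fixed-size encoding of a dialogue (hypothetical 128-dim vector)
    c: the conditioning context, e.g. image + caption features (64-dim here)
    All sizes are illustrative, not taken from the paper.
    """
    def __init__(self, x_dim=128, c_dim=64, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))

    def forward(self, x, c):
        # Recognition model q(z | x, c)
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Generative model p(x | z, c)
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

def cvae_loss(x, recon, mu, logvar):
    # Negative evidence lower bound: reconstruction term plus
    # KL divergence between q(z | x, c) and the unit-Gaussian prior.
    rec = nn.functional.mse_loss(recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

At generation time one would sample z from the prior and decode it together with the context c, so the same model can produce diverse dialogue sequences for a single image.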
|Title of host publication||2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition|
|Publisher||Institute of Electrical and Electronics Engineers (IEEE)|
|Number of pages||9|
|Publication status||Published - 17 Dec 2018|
|Event||2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition - Salt Lake City, United States|
Duration: 18 Jun 2018 → 22 Jun 2018
|Conference||2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition|
|Abbreviated title||CVPR 2018|
|City||Salt Lake City|
|Period||18/06/18 → 22/06/18|
Fingerprint: Dive into the research topics of 'FLIPDIAL: A Generative Model for Two-Way Visual Dialogue'. Together they form a unique fingerprint.
- School of Informatics - Reader in Explainable Artificial Intelligence
- Artificial Intelligence and its Applications Institute
- Data Science and Artificial Intelligence
Person: Academic: Research Active