Cross-domain Generative Learning for Fine-Grained Sketch-Based Image Retrieval

Kaiyue Pang, Yi-Zhe Song, Tao Xiang, Timothy Hospedales

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The key challenge for learning a fine-grained sketch-based image retrieval (FG-SBIR) model is to bridge the domain gap between photo and sketch. Existing models learn a deep joint embedding space with discriminative losses in which a photo and a sketch can be compared. In this paper, we propose a novel discriminative-generative hybrid model by introducing a generative task of cross-domain image synthesis. This task forces the learned embedding space to preserve all the domain-invariant information that is useful for cross-domain reconstruction, thus explicitly reducing the domain gap, in contrast to existing models. Extensive experiments on the largest FG-SBIR dataset, Sketchy [19], show that the proposed model significantly outperforms state-of-the-art discriminative FG-SBIR models.
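
As a rough illustration of the hybrid objective described in the abstract, the PyTorch sketch below pairs a discriminative triplet loss over a shared sketch/photo embedding with a generative cross-domain reconstruction loss. Every name here (Encoder, Decoder, hybrid_loss, the layer sizes, and the weighting lam) is an illustrative assumption, not the paper's actual architecture or training recipe.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Maps a 3x32x32 image (sketch or photo) into a shared embedding space.
    # Hypothetical toy architecture, not the paper's network.
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class Decoder(nn.Module):
    # Reconstructs a 3x32x32 photo from an embedding (the generative branch).
    def __init__(self, dim=256):
        super().__init__()
        self.fc = nn.Linear(dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 16 -> 32
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 8, 8))

def hybrid_loss(sketch, photo_pos, photo_neg, enc_s, enc_p, dec,
                margin=0.2, lam=1.0):
    # Discriminative branch: triplet loss pulls the matching photo closer
    # to the sketch anchor than a non-matching photo.
    z_s = enc_s(sketch)
    z_p = enc_p(photo_pos)
    z_n = enc_p(photo_neg)
    l_tri = F.triplet_margin_loss(z_s, z_p, z_n, margin=margin)

    # Generative branch: reconstruct the photo from the *sketch* embedding,
    # so the shared space must retain domain-invariant appearance information.
    l_rec = F.l1_loss(dec(z_s), photo_pos)

    return l_tri + lam * l_rec

# Toy usage with random tensors standing in for sketch/photo triplets:
enc_s, enc_p, dec = Encoder(), Encoder(), Decoder()
sketch = torch.rand(4, 3, 32, 32)
photo_pos = torch.rand(4, 3, 32, 32)
photo_neg = torch.rand(4, 3, 32, 32)
loss = hybrid_loss(sketch, photo_pos, photo_neg, enc_s, enc_p, dec)
loss.backward()

Minimizing l_rec from the sketch embedding alone is what forces domain-invariant information into the shared space; lam trades this generative term off against the retrieval loss.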
Original language: English
Title of host publication: The British Machine Vision Conference (BMVC 2017)
Number of pages: 12
ISBN (Electronic): 1-901725-60-X
Publication status: E-pub ahead of print, 7 Sept 2017
Event: The 28th British Machine Vision Conference, Imperial College London, London, United Kingdom
Duration: 4 Sept 2017 – 7 Sept 2017
https://bmvc2017.london/

Conference

Conference: The 28th British Machine Vision Conference
Abbreviated title: BMVC 2017
Country/Territory: United Kingdom
City: London
Period: 4/09/17 – 7/09/17
