Cross-domain Generative Learning for Fine-Grained Sketch-Based Image Retrieval

Kaiyue Pang, Yi-Zhe Song, Tao Xiang, Timothy Hospedales

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

The key challenge in learning a fine-grained sketch-based image retrieval (FG-SBIR) model is bridging the domain gap between photo and sketch. Existing models learn a deep joint embedding space with discriminative losses in which a photo and a sketch can be compared. In this paper, we propose a novel discriminative-generative hybrid model by introducing a generative task of cross-domain image synthesis. This task forces the learned embedding space to preserve all the domain-invariant information useful for cross-domain reconstruction, thus explicitly reducing the domain gap, in contrast to existing models. Extensive experiments on the largest FG-SBIR dataset, Sketchy [19], show that the proposed model significantly outperforms state-of-the-art discriminative FG-SBIR models.
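The hybrid objective described in the abstract — a discriminative retrieval loss combined with a generative cross-domain reconstruction loss — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the triplet formulation, the linear decoder `decoder_W`, and the weighting `lam` are expository choices, not the paper's actual architecture.

```python
import numpy as np

def triplet_loss(s, p_pos, p_neg, margin=0.1):
    """Discriminative term: the sketch embedding s should lie closer to its
    matching photo embedding p_pos than to a non-matching one p_neg."""
    d_pos = np.sum((s - p_pos) ** 2)
    d_neg = np.sum((s - p_neg) ** 2)
    return max(0.0, margin + d_pos - d_neg)

def reconstruction_loss(decoder_W, s, photo):
    """Generative term: a decoder (here a hypothetical linear map) must
    synthesise the photo from the sketch embedding, so the embedding is
    forced to keep the domain-invariant content needed for cross-domain
    reconstruction."""
    recon = decoder_W @ s
    return np.mean((recon - photo) ** 2)

def hybrid_loss(s, p_pos, p_neg, decoder_W, photo, lam=0.5):
    """Hybrid objective: discriminative (retrieval) + generative (synthesis),
    balanced by an assumed weight lam."""
    return triplet_loss(s, p_pos, p_neg) + lam * reconstruction_loss(decoder_W, s, photo)
```

In a full model, both losses would be backpropagated jointly through a shared sketch/photo encoder, so the generative term regularises the same embedding space the retrieval loss discriminates in.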
Original language: English
Title of host publication: The British Machine Vision Conference (BMVC 2017)
Number of pages: 12
ISBN (Electronic): 1-901725-60-X
Publication status: E-pub ahead of print - 7 Sept 2017
Event: The 28th British Machine Vision Conference - Imperial College London, London, United Kingdom
Duration: 4 Sept 2017 – 7 Sept 2017


Conference: The 28th British Machine Vision Conference
Abbreviated title: BMVC 2017
Country/Territory: United Kingdom


