Fast visual word quantization via spatial neighborhood boosting

Ruixin Xu, Miaojing Shi, Bo Geng, Chao Xu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

With the rapid development of the bag-of-visual-words model and its widespread application to computer vision problems such as visual recognition and image retrieval, fast visual word assignment becomes increasingly important, especially for online services and large-scale settings. Conventional approximate nearest neighbor mapping techniques consider only the distribution of image local descriptors in the visual feature space and perform the mapping independently for each descriptor. In this paper, we propose to exploit spatial correlation information to boost the efficiency of feature quantization. Visual words that frequently co-occur in the same local region across a large number of images are treated as spatial neighborhoods, which can be leveraged to speed up the approximate mapping of neighboring local descriptors. Experimental results on a well-known image retrieval dataset demonstrate that the proposed method improves both the efficiency and the precision of visual word assignment.
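The abstract's idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: all names, the toy co-occurrence statistics, and the `tol` threshold are assumptions. Visual words that frequently co-occur in the same local region are treated as a spatial neighborhood; when quantizing a descriptor whose spatial neighbor has already been assigned a word, that word's neighborhood is tried first, and a full nearest-neighbor search runs only as a fallback.

```python
# Hypothetical sketch of spatial-neighborhood-boosted quantization.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
K, D = 50, 8                      # vocabulary size, descriptor dimension
vocab = rng.normal(size=(K, D))   # toy visual vocabulary (cluster centers)

# Toy spatial co-occurrence statistics: count how often pairs of words
# are assigned within the same local region over a training set.
cooc = Counter()
for _ in range(200):
    region_words = rng.integers(0, K, size=4)   # words seen in one region
    for a in region_words:
        for b in region_words:
            if a != b:
                cooc[(int(a), int(b))] += 1

def spatial_neighbors(word, top=5):
    """Most frequent co-occurrers of `word` (its spatial neighborhood)."""
    pairs = [(cooc[(word, b)], b) for b in range(K) if cooc[(word, b)] > 0]
    return [b for _, b in sorted(pairs, reverse=True)[:top]]

def quantize(desc, neighbor_word=None, tol=1.0):
    """Assign `desc` to a visual word; if a spatially neighboring
    descriptor's word is known, try its neighborhood first."""
    if neighbor_word is not None:
        cand = spatial_neighbors(neighbor_word) + [neighbor_word]
        dists = np.linalg.norm(vocab[cand] - desc, axis=1)
        if dists.min() < tol:     # good-enough match: skip the full search
            return cand[int(dists.argmin())]
    dists = np.linalg.norm(vocab - desc, axis=1)  # fallback: exhaustive NN
    return int(dists.argmin())
```

The speed-up comes from the candidate list in the boosted path being a handful of words rather than the whole vocabulary; a real system would replace the exhaustive fallback with an approximate nearest neighbor index.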
Original language: English
Title of host publication: Multimedia and Expo (ICME), 2011 IEEE International Conference on
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 1-6
Number of pages: 6
ISBN (Print): 978-1-61284-348-3
DOIs
Publication status: Published - 1 Jul 2011

Keywords

  • Image Retrieval
  • Spatial Correlation
  • Visual Words
