Simultaneous Object Recognition and Segmentation from Single or Multiple Model Views

Vittorio Ferrari, Tinne Tuytelaars, Luc Van Gool

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

We present a novel object recognition approach based on affine invariant regions. It actively counters the problems related to the limited repeatability of the region detectors and the difficulty of matching in the presence of large amounts of background clutter and particularly challenging viewing conditions. After producing an initial set of matches, the method gradually explores the surrounding image areas, recursively constructing more and more matching regions, increasingly farther from the initial ones. This process covers the object with matches and simultaneously separates the correct matches from the wrong ones. Hence, recognition and segmentation are achieved at the same time. The approach includes a mechanism for capturing the relationships between multiple model views and exploiting these for integrating the contributions of the views at recognition time. This is based on an efficient algorithm for partitioning a set of region matches into groups lying on smooth surfaces. Integration is achieved by measuring the consistency of configurations of groups arising from different model views. Experimental results demonstrate the power of the approach in dealing with extensive clutter, dominant occlusion, and large scale and viewpoint changes. Non-rigid deformations are explicitly taken into account, and the approximate contours of the object are produced. All presented techniques can extend any viewpoint-invariant feature extractor.
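To illustrate the expand-and-filter idea described in the abstract, the following is a minimal toy sketch, not the authors' implementation: starting from a few seed region matches, new candidate matches are propagated into neighbouring image areas and kept only if they are geometrically consistent with existing matches. All helper names (propagate, consistent, expand_matches), the simple offset-based propagation, and the distance threshold are hypothetical placeholders standing in for the paper's affine region construction and filtering steps.

```python
import numpy as np

def propagate(match, offset):
    """Hypothetical propagation: create a candidate match in a nearby
    area by shifting the model point and mapping the shift through the
    local affine approximation A (stand-in for region construction)."""
    model_pt, image_pt, A = match
    return (model_pt + offset, image_pt + A @ offset, A)

def consistent(candidate, support, tol=5.0):
    """Crude consistency test: accept a candidate if some existing match
    predicts its image location to within tol pixels."""
    m_pt, i_pt, _ = candidate
    for sm_pt, si_pt, sA in support:
        if np.linalg.norm(si_pt + sA @ (m_pt - sm_pt) - i_pt) < tol:
            return True
    return False

def expand_matches(initial, offsets, rounds=3):
    """Grow the match set outwards from the initial seeds, round by round,
    keeping only candidates that pass the consistency filter."""
    matches = list(initial)
    for _ in range(rounds):
        new = []
        for m in matches:
            for off in offsets:
                cand = propagate(m, off)
                if consistent(cand, matches):
                    new.append(cand)
        matches.extend(new)
    return matches

if __name__ == "__main__":
    A = np.array([[1.1, 0.0], [0.0, 0.9]])            # toy affine map
    seed = [(np.zeros(2), np.zeros(2), A)]             # one seed match
    offsets = [np.array([10.0, 0.0]), np.array([0.0, 10.0])]
    print(len(expand_matches(seed, offsets)))
```

In the actual method this loop operates on affine invariant regions and alternates expansion with contraction phases that discard wrong matches; the sketch only conveys the overall grow-outward-and-verify structure.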
Original language: English
Pages (from-to): 159-188
Number of pages: 30
Journal: International Journal of Computer Vision
Volume: 67
Issue number: 2
DOIs
Publication status: Published - 1 Apr 2006
