Behavior Discovery and Alignment of Articulated Object Classes from Unstructured Video

Luca Del Pero, Susanna Ricco, Rahul Sukthankar, Vittorio Ferrari

Research output: Contribution to journal › Article › peer-review


We propose an automatic system for organizing the content of a collection of unstructured videos of an articulated object class (e.g., tiger, horse). By exploiting the recurring motion patterns of the class across videos, our system: (1) identifies its characteristic behaviors, and (2) recovers pixel-to-pixel alignments across different instances. Our system can be useful for organizing video collections for indexing and retrieval. Moreover, it can serve as a platform for learning the appearance or behaviors of object classes from Internet video. Traditional supervised techniques cannot exploit this wealth of data directly, as they require large amounts of time-consuming manual annotation. The behavior discovery stage generates temporal video intervals, each automatically trimmed to one instance of the discovered behavior, clustered by type. It relies on our novel representation for articulated motion based on the displacement of ordered pairs of trajectories. The alignment stage aligns hundreds of instances of the class with high accuracy despite considerable appearance variations (e.g., an adult tiger and a cub). It uses a flexible thin-plate spline deformation model that can vary through time. We carefully evaluate each step of our system on a new, fully annotated dataset. On behavior discovery, we outperform the state-of-the-art Improved Dense Trajectories feature descriptor. On spatial alignment, we outperform the popular SIFT Flow algorithm.
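The abstract describes a motion representation built from the displacement of ordered pairs of trajectories. The following is a minimal illustrative sketch of that general idea, not the paper's exact formulation: given point trajectories tracked over a clip, it records, for every ordered pair of tracks, how their relative displacement changes from frame to frame. All array shapes and the function name are assumptions for illustration.

```python
import numpy as np

def pairwise_displacement_descriptor(trajectories):
    """Toy sketch of a pairwise-trajectory motion feature.

    trajectories: array of shape (N, T, 2) -- N point trajectories
    tracked over T frames, each row an (x, y) position per frame.
    For every ordered pair (i, j), i != j, we record the frame-to-frame
    change of the relative vector between the two tracks. This captures
    how body parts move with respect to each other, which is the
    intuition behind pairwise representations of articulated motion.
    (Illustrative only; the paper's descriptor differs in detail.)
    """
    n = trajectories.shape[0]
    feats = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rel = trajectories[j] - trajectories[i]      # (T, 2) relative vector
            feats.append((rel[1:] - rel[:-1]).ravel())   # change over time, flattened
    return np.stack(feats)                               # (N*(N-1), (T-1)*2)
```

A representation of this form is invariant to a common translation of all tracks, since only relative displacements enter the feature; that invariance is one reason pairwise encodings suit articulated motion better than per-trajectory descriptors.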
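The alignment stage uses a thin-plate spline (TPS) deformation model. As a hedged sketch of how a TPS warp maps points of one instance onto another given landmark correspondences, one can use SciPy's `RBFInterpolator` with the `thin_plate_spline` kernel; the landmark coordinates below are made up for illustration, and the paper's time-varying model is more elaborate.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmark correspondences between two instances
# (e.g., matched body points on a cub and an adult tiger).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = src * 1.5 + 0.2  # here the target happens to be an affine map of the source

# Fit a 2D thin-plate spline warp interpolating the correspondences.
tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# Warp an arbitrary query point from the source frame into the target frame.
query = np.array([[0.25, 0.75]])
warped = tps(query)
# An affine map is reproduced exactly by the TPS polynomial term,
# so 'warped' equals query * 1.5 + 0.2 in this constructed case.
```

The TPS model combines a global affine component with smooth local bending, which is why it can absorb the considerable shape variation between instances (an adult versus a cub) while still producing dense, smooth pixel-to-pixel alignments.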
Original language: English
Pages (from-to): 303-325
Number of pages: 23
Journal: International Journal of Computer Vision
Issue number: 2
Early online date: 10 Aug 2016
Publication status: Published - Jan 2017

