Abstract
Manual annotations of temporal bounds for object interactions (i.e. start and end times) are typical training input to recognition, localization and detection algorithms. For three publicly available egocentric datasets, we uncover inconsistencies in ground truth temporal bounds within and across annotators and datasets. We systematically assess the robustness of state-of-the-art approaches to changes in labeled temporal bounds for object interaction recognition. As boundaries are trespassed, a drop of up to 10% is observed for both Improved Dense Trajectories and Two-Stream Convolutional Neural Networks. We demonstrate that such disagreement stems from a limited understanding of the distinct phases of an action, and propose annotating based on the Rubicon Boundaries, inspired by a similarly named cognitive model, for consistent temporal bounds of object interactions. Evaluated on a public dataset, we report a 4% increase in overall accuracy, and an increase in accuracy for 55% of classes when Rubicon Boundaries are used for temporal annotations.
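The robustness study described above perturbs the labeled start and end times of each clip and re-measures recognition accuracy. The Python sketch below illustrates one plausible way to generate such perturbed bounds and compare accuracy before and after; the uniform-shift scheme, function names, and example segments are assumptions for illustration, not the authors' exact protocol.

```python
import numpy as np


def perturb_bounds(start, end, shift_frac, rng):
    """Shift an annotated [start, end] segment by up to shift_frac of its length.

    Illustrative only: the paper tests how recognition accuracy degrades when
    labeled temporal bounds change; this uniform-shift scheme is an assumption,
    not the authors' exact perturbation procedure.
    """
    max_shift = shift_frac * (end - start)
    offset = rng.uniform(-max_shift, max_shift)
    return start + offset, end + offset


def accuracy(predictions, labels):
    """Fraction of clips whose predicted class matches the ground-truth class."""
    return float(np.mean(np.asarray(predictions) == np.asarray(labels)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical annotations: (start_sec, end_sec) for three interaction clips.
    segments = [(3.2, 5.0), (10.1, 12.4), (20.0, 21.5)]
    for s, e in segments:
        print((s, e), "->", perturb_bounds(s, e, shift_frac=0.5, rng=rng))

    # Hypothetical per-clip predictions before and after perturbing the bounds.
    labels = [0, 1, 2]
    print("original accuracy :", accuracy([0, 1, 2], labels))
    print("perturbed accuracy:", accuracy([0, 1, 1], labels))
```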
Original language | English |
---|---|
Title of host publication | 2017 IEEE International Conference on Computer Vision (ICCV) |
Publisher | Institute of Electrical and Electronics Engineers |
Pages | 2905-2913 |
Number of pages | 9 |
ISBN (Electronic) | 978-1-5386-1032-9 |
ISBN (Print) | 978-1-5386-1033-6 |
DOIs | |
Publication status | Published - 1 Oct 2017 |
Event | 2017 IEEE International Conference on Computer Vision - Venice, Italy |
Duration | 22 Oct 2017 → 29 Oct 2017 |
Internet address | http://iccv2017.thecvf.com/ |
Publication series
Name | International Conference on Computer Vision (ICCV) |
---|---|
Publisher | IEEE |
ISSN (Electronic) | 2380-7504 |
Conference
Conference | 2017 IEEE International Conference on Computer Vision |
---|---|
Abbreviated title | ICCV 2017 |
Country/Territory | Italy |
City | Venice |
Period | 22/10/17 → 29/10/17 |
Internet address | http://iccv2017.thecvf.com/ |