SEMBED: Semantic Embedding of Egocentric Action Videos

Michael Wray, Davide Moltisanti, Walterio Mayol-Cuevas, Dima Damen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present SEMBED, an approach for embedding an egocentric object interaction video in a semantic-visual graph to estimate the probability distribution over its potential semantic labels. When object interactions are annotated using an unbounded choice of verbs, we embrace the wealth and ambiguity of these labels by capturing the semantic relationships as well as the visual similarities over motion and appearance features. We show how SEMBED can interpret a challenging dataset of 1225 freely annotated egocentric videos, outperforming SVM classification by more than 5%.
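The abstract's core idea, pooling the (possibly ambiguous) verb labels of visually similar videos while letting semantic similarity between verbs share probability mass, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the k-nearest-neighbour voting scheme, and the dict-of-dicts semantic similarity matrix are all assumptions made for the example.

```python
import numpy as np

def sembed_label_distribution(query_feat, train_feats, train_labels,
                              semantic_sim, k=5):
    """Hypothetical sketch of a SEMBED-style estimate: return a
    probability distribution over verb labels for a query video,
    combining visual similarity with semantic label similarity."""
    # Visual similarity: cosine similarity between the query's
    # motion/appearance feature and each annotated training video.
    norms = np.linalg.norm(train_feats, axis=1) * np.linalg.norm(query_feat)
    vis_sim = train_feats @ query_feat / np.maximum(norms, 1e-8)

    # The k visually nearest annotated videos form the neighbourhood.
    nn = np.argsort(vis_sim)[-k:]

    labels = sorted(semantic_sim)            # the verb vocabulary
    scores = {v: 0.0 for v in labels}
    for i in nn:
        for v in labels:
            # A neighbour votes for its own verb and, more weakly, for
            # semantically related verbs (e.g. "turn" vs. "rotate").
            scores[v] += vis_sim[i] * semantic_sim[train_labels[i]][v]

    total = sum(scores.values())
    return {v: s / total for v, s in scores.items()}
```

Because related verbs reinforce each other, a query whose neighbours carry different but synonymous annotations still yields a peaked distribution rather than a split vote.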
Original language: English
Title of host publication: Computer Vision -- ECCV 2016 Workshops
Subtitle of host publication: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part I
Editors: Gang Hua, Hervé Jégou
Place of Publication: Cham
Publisher: Springer International Publishing AG
Pages: 532-545
Number of pages: 14
ISBN (Electronic): 978-3-319-46604-0
ISBN (Print): 978-3-319-46603-3
Publication status: Published - 18 Sep 2016
Event: European Conference on Computer Vision 2016 Workshops - Amsterdam, Netherlands
Duration: 8 Oct 2016 - 16 Oct 2016
Internet address: http://www.eccv2016.org/workshops/

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer, Cham
Volume: 9913
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: European Conference on Computer Vision 2016 Workshops
Abbreviated title: ECCV 2016
Country/Territory: Netherlands
City: Amsterdam
Period: 8/10/16 - 16/10/16

Keywords

  • Egocentric action recognition
  • Semantic ambiguity
  • Semantic embedding
