SEMBED: Semantic Embedding of Egocentric Action Videos

Michael Wray, Davide Moltisanti, Walterio Mayol-Cuevas, Dima Damen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We present SEMBED, an approach for embedding an egocentric object interaction video in a semantic-visual graph to estimate the probability distribution over its potential semantic labels. When object interactions are annotated using an unbounded choice of verbs, we embrace the wealth and ambiguity of these labels by capturing the semantic relationships as well as the visual similarities over motion and appearance features. We show how SEMBED can interpret a challenging dataset of 1225 freely annotated egocentric videos, outperforming SVM classification by more than 5%.
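The abstract describes estimating a label distribution for a query video from both visual similarity and semantic label relationships. The toy sketch below illustrates one ingredient of that idea, label-distribution estimation from visually similar annotated neighbours; the feature vectors, labels, function names, and the cosine similarity measure are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (NOT the SEMBED implementation): estimate a
# probability distribution over free-form verb labels for a query video
# by aggregating the labels of its most visually similar annotated videos.
# All features, labels, and the k-NN scheme here are toy assumptions.
import math
from collections import defaultdict

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def label_distribution(query, videos, k=3):
    """Return P(label | query) from the k most visually similar videos."""
    neighbours = sorted(videos, key=lambda v: cosine(query, v["feat"]),
                        reverse=True)[:k]
    scores = defaultdict(float)
    for v in neighbours:
        scores[v["label"]] += cosine(query, v["feat"])
    total = sum(scores.values())
    return {lab: s / total for lab, s in scores.items()}

# Toy annotated videos with free-form (and hence ambiguous) verb labels.
videos = [
    {"feat": [1.0, 0.1], "label": "open"},
    {"feat": [0.9, 0.2], "label": "pull"},  # visually close to "open"
    {"feat": [0.1, 1.0], "label": "pour"},
]
dist = label_distribution([1.0, 0.0], videos, k=2)
print(dist)  # distribution over candidate labels, summing to 1
```

A fuller treatment along the lines of the paper would also weight candidate labels by their semantic relatedness (e.g. verb similarity), rather than by visual similarity alone.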
Original language: English
Title of host publication: Computer Vision -- ECCV 2016 Workshops
Subtitle of host publication: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part I
Editors: Gang Hua, Hervé Jégou
Place of Publication: Cham
Publisher: Springer International Publishing AG
Number of pages: 14
ISBN (Electronic): 978-3-319-46604-0
ISBN (Print): 978-3-319-46603-3
Publication status: Published - 18 Sep 2016
Event: European Conference on Computer Vision 2016 Workshops - Amsterdam, Netherlands
Duration: 8 Oct 2016 - 16 Oct 2016

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer, Cham
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: European Conference on Computer Vision 2016 Workshops
Abbreviated title: ECCV 2016


Keywords:

  • Egocentric action recognition
  • Semantic ambiguity
  • Semantic embedding


