Emerging Topics in Learning from Noisy and Missing Data

Xavier Alameda-Pineda, Timothy M. Hospedales, Elisa Ricci, Nicu Sebe, Xiaogang Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

While vital for handling most multimedia and computer vision problems, collecting large-scale, fully annotated datasets is a resource-consuming, often unaffordable task. Indeed, on the one hand, datasets need to be large and varied enough for learning strategies to successfully exploit the variability inherently present in real data; on the other hand, they should be small enough that they can be fully annotated at a reasonable cost. With the overwhelming success of (deep) learning methods, the traditional problem of balancing dataset size against annotation resources has become a full-fledged dilemma. In this context, methodological approaches able to deal with partially described datasets represent a unique opportunity to strike the right balance between data variability and annotation cost. These include methods able to cope with noisy, weak or partial annotations. In this tutorial we present several recent methodologies addressing different visual tasks under the assumption of noisy, weakly annotated datasets.
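
As a concrete illustration of the flavour of technique covered (the abstract itself names no specific method), below is a minimal sketch of one well-known approach to learning from noisy labels: the soft "bootstrapping" loss of Reed et al. (2015), in which the training target blends the possibly corrupted annotation with the model's own prediction. The PyTorch framing, the function name, and the value of beta are illustrative assumptions, not part of the tutorial.

    # Illustrative sketch (not from the tutorial): soft bootstrapping loss.
    # Target = beta * noisy_one_hot_label + (1 - beta) * model_prediction,
    # so confidently mislabeled examples exert a weaker gradient pull.
    import torch
    import torch.nn.functional as F

    def soft_bootstrap_loss(logits, noisy_labels, beta=0.95):
        """Cross-entropy against a blend of the given label and the prediction.

        logits:       (batch, num_classes) raw model outputs.
        noisy_labels: (batch,) integer class indices, possibly corrupted.
        beta:         trust in the provided labels (beta = 1.0 is plain CE).
        """
        probs = F.softmax(logits, dim=1).detach()          # model's current belief
        one_hot = F.one_hot(noisy_labels, logits.size(1)).float()
        target = beta * one_hot + (1.0 - beta) * probs     # blended soft target
        log_probs = F.log_softmax(logits, dim=1)
        return -(target * log_probs).sum(dim=1).mean()

    # Usage: drop-in replacement for F.cross_entropy(logits, labels)
    # inside an otherwise standard training loop.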
Original language: English
Title of host publication: Proceedings of the 2016 ACM on Multimedia Conference
Place of Publication: New York, NY, USA
Publisher: ACM
Pages: 1469-1470
Number of pages: 2
ISBN (Print): 978-1-4503-3603-1
DOIs
Publication status: Published - 1 Oct 2016
Event: ACM MULTIMEDIA CONFERENCE 2016 - Amsterdam, Netherlands
Duration: 15 Oct 2016 - 19 Oct 2016
http://www.acmmm.org/2016/

Publication series

Name: MM '16
Publisher: ACM

Conference

Conference: ACM MULTIMEDIA CONFERENCE 2016
Country/Territory: Netherlands
City: Amsterdam
Period: 15/10/16 - 19/10/16
Internet address: http://www.acmmm.org/2016/
