Towards Multimodal Deep Learning for Activity Recognition on Mobile Devices

Valentin Radu, Nicholas D. Lane, Sourav Bhattacharya, Cecilia Mascolo, Mahesh K. Marina, Fahim Kawsar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Current smartphones and smartwatches come equipped with a variety of sensors, from light and inertial sensors to radio interfaces, enabling applications running on these devices to make sense of their surrounding environment. Rather than using sensors independently, combining their sensing capabilities enables more interesting and complex applications to emerge (e.g., user activity recognition). But differences between sensors, ranging from sampling rate to data generation model (event-triggered or continuous sampling), make the integration of sensor streams challenging. Here we investigate the opportunity to use deep learning to perform this integration of data from multiple sensors. The intuition is that neural networks can identify non-intuitive features, largely from cross-sensor correlations, which can result in more accurate estimation. Initial results with a variant of a Restricted Boltzmann Machine (RBM) show better performance with this new approach compared to classic solutions.
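The paper itself does not include code, and its exact architecture is not described here. As a rough illustration of the general idea of learning features per sensor modality with an RBM and then fusing them, the sketch below trains a small Bernoulli RBM per modality with one-step contrastive divergence (CD-1) and concatenates the hidden activations as a fused multimodal representation. The modality names, array shapes, and hyperparameters are assumptions for the sketch, not the authors' implementation.

```python
# Illustrative sketch only: per-modality RBM feature learning with CD-1,
# then concatenation of hidden activations as fused multimodal features.
# Shapes and hyperparameters are assumptions, not the paper's configuration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli-Bernoulli RBM trained with CD-1."""

    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        p_h0 = self.hidden_probs(v0)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one Gibbs step to obtain a reconstruction.
        p_v1 = self.visible_probs(h0)
        p_h1 = self.hidden_probs(p_v1)
        # Contrastive-divergence parameter updates.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)

    def transform(self, v):
        return self.hidden_probs(v)

# Toy data: binarised feature windows per modality (hypothetical shapes).
n_windows = 256
modalities = {
    "accelerometer": rng.random((n_windows, 30)) > 0.5,
    "gyroscope":     rng.random((n_windows, 30)) > 0.5,
    "audio":         rng.random((n_windows, 40)) > 0.5,
}

# Train one RBM per modality and concatenate hidden activations.
fused = []
for name, data in modalities.items():
    rbm = RBM(n_visible=data.shape[1], n_hidden=16)
    for _ in range(20):                      # a few CD-1 passes
        rbm.cd1_step(data.astype(float))
    fused.append(rbm.transform(data.astype(float)))

features = np.hstack(fused)   # input to any downstream activity classifier
print(features.shape)         # (256, 48)
```

The fused feature matrix would then feed a conventional classifier (or further network layers), which is where cross-sensor correlations can be exploited for activity recognition.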
Original language: English
Title of host publication: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct
Place of Publication: New York, NY, USA
Publisher: ACM
Pages: 185-188
Number of pages: 4
ISBN (Print): 978-1-4503-4462-3
Publication status: Published - 12 Sept 2016
Event: The 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2016), colocated with ISWC 2016 - Heidelberg, Germany
Duration: 12 Sept 2016 - 16 Sept 2016

Publication series

Name: UbiComp '16
Publisher: ACM

Conference

Conference: The 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2016), colocated with ISWC 2016
Country/Territory: Germany
City: Heidelberg
Period: 12/09/16 - 16/09/16

Keywords / Materials (for Non-textual outputs)

  • activity recognition, context detection, deep learning, mobile sensing, multimodal sensing
