TY - GEN
T1 - Towards Multimodal Deep Learning for Activity Recognition on Mobile Devices
AU - Radu, Valentin
AU - Lane, Nicholas D.
AU - Bhattacharya, Sourav
AU - Mascolo, Cecilia
AU - Marina, Mahesh K.
AU - Kawsar, Fahim
PY - 2016/9/12
Y1 - 2016/9/12
N2 - Current smartphones and smartwatches come equipped with a variety of sensors, from light and inertial sensors to radio interfaces, enabling applications running on these devices to make sense of their surrounding environment. Rather than using sensors independently, combining their sensing capabilities enables more interesting and complex applications to emerge (e.g., user activity recognition). But differences between sensors, ranging from sampling rate to data generation model (event-triggered or continuous sampling), make the integration of sensor streams challenging. Here we investigate the opportunity to use deep learning to perform this integration of data from multiple sensors. The intuition is that neural networks can identify non-intuitive features, largely from cross-sensor correlations, which can result in a more accurate estimation. Initial results with a variant of a Restricted Boltzmann Machine (RBM) show better performance with this new approach compared to classic solutions.
AB - Current smartphones and smartwatches come equipped with a variety of sensors, from light and inertial sensors to radio interfaces, enabling applications running on these devices to make sense of their surrounding environment. Rather than using sensors independently, combining their sensing capabilities enables more interesting and complex applications to emerge (e.g., user activity recognition). But differences between sensors, ranging from sampling rate to data generation model (event-triggered or continuous sampling), make the integration of sensor streams challenging. Here we investigate the opportunity to use deep learning to perform this integration of data from multiple sensors. The intuition is that neural networks can identify non-intuitive features, largely from cross-sensor correlations, which can result in a more accurate estimation. Initial results with a variant of a Restricted Boltzmann Machine (RBM) show better performance with this new approach compared to classic solutions.
KW - activity recognition, context detection, deep learning, mobile sensing, multimodal sensing
U2 - 10.1145/2968219.2971461
DO - 10.1145/2968219.2971461
M3 - Conference contribution
SN - 978-1-4503-4462-3
T3 - UbiComp '16
SP - 185
EP - 188
BT - Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct
PB - ACM
CY - New York, NY, USA
T2 - The 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2016), colocated with ISWC 2016
Y2 - 12 September 2016 through 16 September 2016
ER -