Using wearable inertial sensors for posture and position tracking in unconstrained environments through learned translation manifolds

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Despite recent advances in 3-D motion capture, the problem of simultaneously tracking human posture and position in an unconstrained environment remains open. Optical systems provide both types of information, but are confined to a restricted area of capture. Inertial sensing alleviates this restriction, but at the expense of capturing only relative (postural) and not absolute (positional) information. In this paper, we propose an algorithm combining the relative merits of these systems to track both position and posture in challenging environments. Offline, we combine an optical (Kinect) and an inertial sensing (Orient-4) platform to learn a mapping from posture variations to translations, which we encode as a translation manifold. Online, the optical source is removed, and the learned mapping is used to infer positions using the postures computed by the inertial sensors. We first evaluate our approach in simulation, on motion sequences with ground-truth positions for error estimation. Then, the method is deployed on physical sensing platforms to track human subjects. The proposed algorithm is shown to yield a lower average cumulative error than comparable position tracking methods, such as double integration of accelerometer data, on both simulated and real sensory data, and in a variety of motions and capture settings.
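The following is a minimal illustrative sketch, not the authors' implementation: offline, posture feature vectors (as would come from the inertial sensors) are paired with per-frame translations (as would come from the optical source); online, translations are predicted from postures alone and accumulated into a position estimate. A k-nearest-neighbour regressor stands in here for the learned translation manifold, and all function and variable names are hypothetical.

```python
import numpy as np


def fit_translation_model(postures, translations):
    """Store paired training data: postures (N x D) and per-frame translations (N x 3)."""
    return {"postures": np.asarray(postures, dtype=float),
            "translations": np.asarray(translations, dtype=float)}


def predict_translation(model, posture, k=5):
    """Predict a translation by averaging the k nearest training postures' translations."""
    dists = np.linalg.norm(model["postures"] - posture, axis=1)
    nearest = np.argsort(dists)[:k]
    return model["translations"][nearest].mean(axis=0)


def track_position(model, posture_sequence, start=(0.0, 0.0, 0.0)):
    """Accumulate predicted per-frame translations into an absolute trajectory."""
    position = np.array(start, dtype=float)
    trajectory = [position.copy()]
    for posture in posture_sequence:
        position += predict_translation(model, posture)
        trajectory.append(position.copy())
    return np.array(trajectory)
```

In this toy version, online position tracking reduces to summing predicted translations from posture alone, which is the role the learned translation manifold plays once the optical source is removed; the actual paper learns a manifold-based mapping rather than a k-NN regressor.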
Original language: English
Title of host publication: Proceedings of the 12th International Conference on Information Processing in Sensor Networks
Place of publication: New York, NY, USA
Publisher: ACM
Pages: 241-252
Number of pages: 12
ISBN (Print): 978-1-4503-1959-1
DOIs
Publication status: Published - 2013

Keywords

  • manifold learning
  • optical motion capture
  • translation models
  • wearable inertial sensors
