Abstract
We propose a novel generative model of human motion that can be trained on a large motion capture dataset and allows users to produce animations from high-level control signals. As previous architectures struggle to predict motion far into the future due to its inherent ambiguity, we argue that a user-provided control signal is desirable for animators and greatly reduces the predictive error over long sequences. Thus, we formulate a framework that explicitly introduces an encoding of control signals into a variational inference framework trained to learn the manifold of human motion. As part of this framework, we formulate a prior on the latent space, which allows us to generate high-quality motion without providing frames from an existing sequence. We further model the sequential nature of the task by combining samples from a variational approximation to the intractable posterior with the control signal through a recurrent neural network (RNN) that synthesizes the motion. We show that our system can predict the movements of the human body over long horizons more accurately than state-of-the-art methods. Finally, the design of our system considers practical use cases and thus provides a competitive approach to motion synthesis.
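The generative path the abstract describes (sample a latent code, then roll an RNN decoder forward while conditioning each frame on the user's control signal) can be sketched in a toy NumPy form. All dimensions, weight matrices, the standard-normal prior, and the "walk forward" control vector below are illustrative assumptions for this sketch, not the paper's actual parameterization or a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration
POSE_DIM, CTRL_DIM, LATENT_DIM, HIDDEN_DIM = 6, 2, 3, 8

# Encoder: maps a pose to the mean and log-variance of the
# variational approximation q(z | x) to the intractable posterior
W_mu = rng.normal(scale=0.1, size=(LATENT_DIM, POSE_DIM))
W_logvar = rng.normal(scale=0.1, size=(LATENT_DIM, POSE_DIM))

def encode(pose):
    return W_mu @ pose, W_logvar @ pose

def sample_latent(mu, logvar):
    # Reparameterization trick: z = mu + sigma * eps
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Recurrent decoder: combines the latent sample with the
# control signal at every time step to synthesize the motion
W_h = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))
W_in = rng.normal(scale=0.1, size=(HIDDEN_DIM, LATENT_DIM + CTRL_DIM))
W_out = rng.normal(scale=0.1, size=(POSE_DIM, HIDDEN_DIM))

def synthesize(z, controls):
    """Roll the RNN forward, conditioning each step on [z, control_t]."""
    h = np.zeros(HIDDEN_DIM)
    poses = []
    for c in controls:
        h = np.tanh(W_h @ h + W_in @ np.concatenate([z, c]))
        poses.append(W_out @ h)
    return np.array(poses)

# Training-time path: encode an observed pose and sample z from q(z | x)
mu, logvar = encode(rng.normal(size=POSE_DIM))
z_posterior = sample_latent(mu, logvar)

# Generation-time path: thanks to the prior on the latent space, sample z
# directly without any frames from an existing sequence (standard normal here)
z_prior = rng.normal(size=LATENT_DIM)
controls = [np.array([1.0, 0.0])] * 10  # hypothetical "walk forward" signal
motion = synthesize(z_prior, controls)
print(motion.shape)  # one pose vector per requested frame
```

The key design point the sketch mirrors is that the control signal enters the decoder at every step, so long-horizon prediction is steered by the user rather than left to the model's ambiguity alone.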
Original language | English |
---|---|
Title of host publication | The 28th British Machine Vision Conference (BMVC 2017) |
Number of pages | 13 |
ISBN (Electronic) | 1-901725-60-X |
Publication status | E-pub ahead of print - 7 Sept 2017 |
Event | The 28th British Machine Vision Conference, Imperial College London, London, United Kingdom. Duration: 4 Sept 2017 → 7 Sept 2017. https://bmvc2017.london/ |
Conference

Conference | The 28th British Machine Vision Conference |
---|---|
Abbreviated title | BMVC 2017 |
Country/Territory | United Kingdom |
City | London |
Period | 4/09/17 → 7/09/17 |
Internet address | https://bmvc2017.london/ |