A Deep Learning Framework for Character Motion Synthesis and Editing

Daniel Holden, Jun Saito, Taku Komura

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

We present a framework to synthesize character movements based on high-level parameters, such that the produced movements respect the manifold of human motion, trained on a large motion capture dataset. The learned motion manifold, which is represented by the hidden units of a convolutional autoencoder, represents motion data in sparse components which can be combined to produce a wide range of complex movements. To map from high-level parameters to the motion manifold, we stack a deep feedforward neural network on top of the trained autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. The feedforward control network and the motion manifold are trained independently, allowing the user to easily switch between feedforward networks according to the desired interface, without re-training the motion manifold. Once motion is generated, it can be edited by performing optimization in the space of the motion manifold. This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural. As a result, the system can produce smooth, high-quality motion sequences without any manual pre-processing of the training data.
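As a rough illustration of the architecture described in the abstract, the sketch below pairs a 1D convolutional autoencoder, whose hidden units play the role of the motion manifold, with a separately trained feedforward control network that maps high-level parameters into that hidden space. This is not the authors' implementation: the use of PyTorch, the layer sizes, kernel widths, and channel counts are all assumptions made for the sake of a minimal, runnable example.

```python
# Minimal sketch of the manifold + control-network idea (assumptions, not the paper's code).
import torch
import torch.nn as nn

class MotionAutoencoder(nn.Module):
    """1D convolutional autoencoder over time; its hidden units act as the motion manifold."""
    def __init__(self, channels=73, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=25, padding=12),
            nn.ReLU(),
            nn.MaxPool1d(2),                      # halve the temporal resolution
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),          # restore the temporal resolution
            nn.Conv1d(hidden, channels, kernel_size=25, padding=12),
        )

    def forward(self, x):                         # x: (batch, channels, frames)
        return self.decoder(self.encoder(x))

class ControlNetwork(nn.Module):
    """Feedforward network mapping high-level parameters (e.g. a trajectory curve)
    to manifold hidden units; trained with the autoencoder held fixed."""
    def __init__(self, param_channels=7, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(param_channels, hidden, kernel_size=45, padding=22),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=25, padding=12),
        )

    def forward(self, params):                    # params: (batch, param_channels, frames/2)
        return self.net(params)

if __name__ == "__main__":
    ae, ctrl = MotionAutoencoder(), ControlNetwork()
    clip  = torch.randn(1, 73, 240)               # one 240-frame motion clip (channel count assumed)
    curve = torch.randn(1, 7, 120)                # control parameters at the manifold's temporal rate
    reconstructed = ae(clip)                      # round-trip through the manifold
    synthesized   = ae.decoder(ctrl(curve))       # parameters -> manifold -> motion
```

Keeping the two modules separate reflects the decoupling described above: the decoder defines the manifold once, and different control networks (or gradient-based optimization of the hidden units, for editing) can target it without retraining.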
Original language: English
Article number: 138
Number of pages: 11
Journal: ACM Transactions on Graphics
Volume: 35
Issue number: 4
DOIs
Publication status: Published - 11 Jul 2016
