Learning an Inverse Rig Mapping for Character Animation

Daniel Holden, Jun Saito, Taku Komura

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We propose a general, real-time solution to the inversion of the rig function — the function that maps animation data from a character’s rig to its skeleton. Animators design character movements in the space of an animation rig, and the lack of a general method for mapping motions from the skeleton space back to the rig space keeps animators from using state-of-the-art character animation techniques, such as those found in motion editing and synthesis. Our solution is to learn such a mapping offline, using non-linear regression on sparse example animation sequences constructed by the animators. When new motions are provided in the skeleton space, the learned mapping estimates the rig-space values that reproduce them. To further improve precision, we also learn the derivative of the mapping, so that the movements can be fine-tuned to follow the given motion exactly. We test and present our system through examples including full-body character models, facial models, and deformable surfaces. With our system, animators are free to attach any motion synthesis algorithm to an arbitrary rigging and animation pipeline for immediate editing. This greatly improves the productivity of 3D animation, while retaining the flexibility and creativity of artistic input.
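The two-stage idea described in the abstract — a learned regression giving an initial rig-space estimate, then derivative-based fine-tuning in skeleton space — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy `rig_function`, the choice of thin-plate-spline RBF regression, and the use of a finite-difference Jacobian (in place of a learned derivative) are all assumptions made for the example.

```python
# Sketch of inverse rig mapping: learn q ~ g(y) by non-linear regression on
# sparse (rig, skeleton) example pairs, then refine q with derivative-based
# Gauss-Newton steps so the rig function reproduces the target skeleton pose.
# rig_function is a toy stand-in for a real character rig (an assumption).
import numpy as np
from scipy.interpolate import RBFInterpolator

def rig_function(q):
    """Toy rig: maps 2-D rig parameters to 2-D skeleton coordinates."""
    return np.stack([np.sin(q[..., 0]) + q[..., 1],
                     np.cos(q[..., 1]) - 0.5 * q[..., 0]], axis=-1)

# Sparse example animation: rig-space poses and their skeleton-space results.
rng = np.random.default_rng(0)
Q = rng.uniform(-1.0, 1.0, size=(200, 2))   # rig-space examples
Y = rig_function(Q)                          # skeleton-space examples

# Offline: learn the inverse mapping g: skeleton -> rig by RBF regression.
g = RBFInterpolator(Y, Q, kernel='thin_plate_spline')

def invert(y, iters=20, eps=1e-4):
    """Estimate rig params for skeleton pose y: regression initial guess,
    then fine-tuning with finite-difference Jacobian (Gauss-Newton) steps."""
    q = g(y[None])[0]
    for _ in range(iters):
        r = rig_function(q) - y              # skeleton-space residual
        # Column j of J is the finite-difference derivative w.r.t. q_j.
        J = np.stack([(rig_function(q + eps * e) - rig_function(q)) / eps
                      for e in np.eye(2)], axis=1)
        q = q - np.linalg.lstsq(J, r, rcond=None)[0]
    return q

y_target = rig_function(np.array([0.3, -0.4]))
q_est = invert(y_target)
print(np.abs(rig_function(q_est) - y_target).max())  # near zero after refinement
```

The regression alone already lands near the right rig-space pose; the derivative-based refinement then drives the skeleton-space residual essentially to zero, mirroring the fine-tuning step the abstract describes.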
Original language: English
Title of host publication: SCA '15 Proceedings of the 14th ACM SIGGRAPH / Eurographics Symposium on Computer Animation
Number of pages: 9
ISBN (Print): 978-1-4503-3496-9
Publication status: Published - 2015


