Abstract / Description of output
We present a fast, efficient technique for performing neural style transfer of human motion data using a feedforward neural network. Typically, feedforward neural networks are trained in a supervised fashion, with both the input and the desired output specified simultaneously. For tasks such as style transfer, such data may not always be available, so a different training method is required. We describe a method of training a feedforward neural network using a loss network, in this case a convolutional autoencoder trained on a large motion database. This loss network evaluates several separate error terms used to train the feedforward network. We compute a loss function in the space of the hidden units of the loss network that is based on style difference and motion-specific constraints such as foot sliding, joint lengths, and the trajectory of the character. By back-propagating these errors into the feedforward network, we can train it to perform a transformation equivalent to neural style transfer. Using our framework, we can transform the style of motion thousands of times faster than previous approaches that use optimization. We demonstrate our system by transforming locomotion into various styles.
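The setup described above, a feedforward transfer network trained against a frozen convolutional autoencoder acting as a loss network, can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the layer sizes, channel counts, loss weights, and the use of Gram matrices for the style term are all assumptions, and the motion-specific constraint terms (foot sliding, joint lengths, trajectory) are only noted in comments.

```python
# Minimal sketch of training a feedforward style-transfer network with a
# frozen loss network. All architectural details here are assumptions.
import torch
import torch.nn as nn

class LossNetworkEncoder(nn.Module):
    """Encoder half of a convolutional autoencoder pre-trained on a motion
    database; frozen and used only to provide hidden features."""
    def __init__(self, channels=73, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size=25, padding=12),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )

    def forward(self, x):          # x: (batch, channels, frames)
        return self.conv(x)        # hidden features

class TransferNetwork(nn.Module):
    """Feedforward network trained to re-style a motion clip in one pass."""
    def __init__(self, channels=73):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 128, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.Conv1d(128, channels, kernel_size=15, padding=7),
        )

    def forward(self, x):
        return self.net(x)

def gram(features):
    """Channel-wise Gram matrix, a common proxy for style in hidden space."""
    b, c, t = features.shape
    return torch.bmm(features, features.transpose(1, 2)) / (c * t)

def style_transfer_loss(encoder, output, content, style, style_weight=1.0):
    h_out, h_content, h_style = encoder(output), encoder(content), encoder(style)
    content_loss = torch.mean((h_out - h_content) ** 2)
    style_loss = torch.mean((gram(h_out) - gram(h_style)) ** 2)
    # The paper also uses motion-specific constraint terms (foot sliding,
    # joint lengths, character trajectory); omitted here for brevity.
    return content_loss + style_weight * style_loss

# Errors from the frozen loss network are back-propagated into the
# feedforward transfer network only.
encoder = LossNetworkEncoder().eval()
for p in encoder.parameters():
    p.requires_grad_(False)

transfer = TransferNetwork()
optimizer = torch.optim.Adam(transfer.parameters(), lr=1e-4)

content_clip = torch.randn(4, 73, 240)   # placeholder motion batch
style_clip = torch.randn(4, 73, 240)     # placeholder style exemplar

loss = style_transfer_loss(encoder, transfer(content_clip), content_clip, style_clip)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Once trained in this fashion, applying a style is a single forward pass through the transfer network, which is what makes the approach orders of magnitude faster than per-clip optimization.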
| Original language | English |
| --- | --- |
| Pages (from-to) | 42-49 |
| Number of pages | 8 |
| Journal | IEEE Computer Graphics and Applications |
| Volume | 37 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 21 Aug 2017 |
Keywords
- motion capture
- deep learning
- style transfer
- machine learning