Abstract
This paper presents an approach to body-motion estimation from an audio speech waveform, in which context information in both the input and output streams is taken into account without using recurrent models. Previous works commonly use multiple frames of input to estimate a single frame of motion data, giving little consideration to the temporal structure of the generated motion. To resolve these problems, we extend our previous work and propose a system, the double deep canonical-correlation-constrained autoencoder (D-DCCCAE), which encodes speech and motion segments into fixed-length embedded features that are well correlated with the segments of the other modality. The learnt motion embedding is estimated from the learnt speech embedding through a simple neural network and then decoded back into sequential motion. The proposed pair of embedded features showed higher correlation with the motion data than spectral features did, and our model was preferred over the baseline model (BA) in terms of human-likeness and was comparable in terms of appropriateness.
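The abstract describes a two-stage pipeline: two segment autoencoders (one for speech, one for motion) trained with a correlation constraint between their embeddings, and a simple feed-forward network mapping the speech embedding to the motion embedding, whose output is decoded back to a motion segment. The following is a minimal illustrative sketch of that structure, not the authors' implementation: all module names, layer sizes, segment dimensions, and the simplified per-dimension correlation penalty (a stand-in for the full DCCA objective) are assumptions.

```python
# Minimal sketch of a D-DCCCAE-style pipeline. Dimensions, architectures,
# and the correlation penalty are illustrative assumptions only.
import torch
import torch.nn as nn

class SegmentAutoencoder(nn.Module):
    """Encodes a fixed-length segment into an embedding and reconstructs it."""
    def __init__(self, seg_dim: int, emb_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(seg_dim, 256), nn.ReLU(),
                                     nn.Linear(256, emb_dim))
        self.decoder = nn.Sequential(nn.Linear(emb_dim, 256), nn.ReLU(),
                                     nn.Linear(256, seg_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def correlation_penalty(z_a, z_b, eps=1e-8):
    """Simplified stand-in for the DCCA constraint: negative mean
    per-dimension Pearson correlation between the two embedding batches."""
    a = z_a - z_a.mean(0)
    b = z_b - z_b.mean(0)
    corr = (a * b).mean(0) / (a.std(0) * b.std(0) + eps)
    return -corr.mean()

# "Double" autoencoder: one for speech segments, one for motion segments,
# plus a simple regressor from speech embedding to motion embedding.
speech_ae = SegmentAutoencoder(seg_dim=40 * 30, emb_dim=64)  # e.g. flattened spectral segment
motion_ae = SegmentAutoencoder(seg_dim=45 * 30, emb_dim=64)  # e.g. flattened joint-angle segment
regressor = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))

mse = nn.MSELoss()
speech_seg = torch.randn(32, 40 * 30)  # dummy batch of speech segments
motion_seg = torch.randn(32, 45 * 30)  # dummy batch of motion segments

# Stage 1: train both autoencoders with reconstruction losses plus the
# correlation constraint between the two embedding streams.
z_s, rec_s = speech_ae(speech_seg)
z_m, rec_m = motion_ae(motion_seg)
loss_ae = (mse(rec_s, speech_seg) + mse(rec_m, motion_seg)
           + correlation_penalty(z_s, z_m))

# Stage 2: learn the speech-to-motion mapping in embedding space; at test
# time the predicted motion embedding is decoded back to a motion segment.
z_m_hat = regressor(z_s.detach())
loss_map = mse(z_m_hat, z_m.detach())
motion_out = motion_ae.decoder(z_m_hat)
```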
Original language | English |
---|---|
Title of host publication | ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
Publisher | Institute of Electrical and Electronics Engineers |
Pages | 900-904 |
Number of pages | 5 |
ISBN (Electronic) | 978-1-7281-7605-5 |
ISBN (Print) | 978-1-7281-7606-2 |
DOIs | |
Publication status | Published - 13 May 2021 |
Event | 46th IEEE International Conference on Acoustics, Speech and Signal Processing, Toronto, Canada, 6 Jun 2021 → 11 Jun 2021 (https://2021.ieeeicassp.org/) |
Publication series
Name | |
---|---|
ISSN (Print) | 1520-6149 |
ISSN (Electronic) | 2379-190X |
Conference
Conference | 46th IEEE International Conference on Acoustics, Speech and Signal Processing |
---|---|
Abbreviated title | ICASSP 2021 |
Country/Territory | Canada |
City | Toronto |
Period | 6/06/21 → 11/06/21 |
Internet address | https://2021.ieeeicassp.org/ |
Keywords
- neural networks
- speech
- body motion
- conversational virtual agent