Emotion transplantation through adaptation in HMM-based speech synthesis

Jaime Lorenzo-Trueba, Roberto Barra-Chicote, Rubén San-Segundo, Javier Ferreiros, Junichi Yamagishi, Juan M. Montero

Research output: Contribution to journal › Article › peer-review

Abstract

This paper proposes an emotion transplantation method that modifies a synthetic speech model through CSMAPLR adaptation, incorporating emotional information learned from a different speaker's model while preserving the identity of the original speaker as much as possible. The method learns both the emotion and the speaker identity through their adaptation functions from an average voice model, then combines them into a single cascaded transform capable of imbuing the desired emotion into the target speaker. The method is applied to the task of transplanting four emotions (anger, happiness, sadness, and surprise) into three male and three female speakers and is evaluated in a series of perceptual tests. The results show that perceived naturalness for emotional text significantly favors the proposed transplanted emotional speech synthesis over traditional neutral speech synthesis, with a large increase in the perceived emotional strength of the synthesized utterances at a slight cost in speech quality. A final evaluation with a robotic laboratory assistant application shows that emotional speech significantly increases students' satisfaction with the dialog system, demonstrating that the proposed emotion transplantation system provides benefits in real applications.
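The core idea of combining a speaker transform and an emotion transform into a single cascaded transform can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: CSMAPLR estimates structured, regression-class-specific transforms, whereas here each adaptation is reduced to one plain affine map (A, b) applied to an average-voice mean vector, and the function names are invented for the sketch.

```python
import numpy as np

def cascade(A_spk, b_spk, A_emo, b_emo):
    """Compose two affine adaptation transforms into one.

    Applying the speaker transform first and the emotion transform second,
        mu -> A_emo @ (A_spk @ mu + b_spk) + b_emo,
    is equivalent to a single affine transform with:
        A = A_emo @ A_spk
        b = A_emo @ b_spk + b_emo
    """
    A = A_emo @ A_spk
    b = A_emo @ b_spk + b_emo
    return A, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 4  # toy feature dimension for the sketch
    A_spk, A_emo = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
    b_spk, b_emo = rng.normal(size=dim), rng.normal(size=dim)
    mu = rng.normal(size=dim)  # stand-in for an average-voice mean vector

    A, b = cascade(A_spk, b_spk, A_emo, b_emo)
    # The single cascaded transform matches sequential application.
    print(np.allclose(A @ mu + b, A_emo @ (A_spk @ mu + b_spk) + b_emo))
```

The design point mirrored here is that because both adaptations are affine, they collapse into one transform that can be applied in a single pass at synthesis time.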
Original language: English
Pages (from-to): 292-307
Number of pages: 16
Journal: Computer Speech and Language
Volume: 34
Issue number: 1
DOIs
Publication status: Published - Nov 2015

Keywords

  • Emotion transplantation
