Investigating different representations for modeling multiple emotions in DNN-based speech synthesis

Jaime Lorenzo-Trueba, Gustav Eje Henter, Shinji Takaki, Junichi Yamagishi, Yosuke Morino, Yuta Ochiai

Research output: Contribution to conference › Paper › peer-review

Abstract

This paper investigates simultaneous modeling of multiple emotions in DNN-based expressive speech synthesis, and how to represent the emotional labels, such as emotional class and strength, for this task. Our goal is to answer two questions: First, what is the best way to annotate speech data with multiple emotions – should we use the labels that the speaker intended to express, or labels based on listener perception of the resulting speech signals? Second, how should the emotional information be represented as labels for supervised DNN training, e.g., should emotional class and emotional strength be factorized into separate inputs or not? We evaluate on a large-scale corpus of emotional speech from a professional actress, additionally annotated with perceived emotional labels from crowdsourced listeners. By comparing DNN-based speech synthesizers that utilize different emotional representations, we assess the impact of these representations and design decisions on human emotion recognition rates and perceived emotional strength.
Index Terms: Emotional speech synthesis, deep neural network, recurrent neural networks
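To make the label-representation question from the abstract concrete, the sketch below contrasts two ways of encoding an (emotion class, strength) pair as an auxiliary network input: a single joint one-hot code over all class-strength combinations, versus a factorized code with a one-hot class vector plus a scalar strength value. This is a minimal illustration only; the emotion and strength inventories, function names, and normalization are hypothetical and not taken from the paper.

import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry"]   # hypothetical class inventory
STRENGTHS = ["weak", "normal", "strong"]          # hypothetical strength levels

def joint_one_hot(emotion: str, strength: str) -> np.ndarray:
    """Joint scheme: one code per (class, strength) pair, not factorized."""
    idx = EMOTIONS.index(emotion) * len(STRENGTHS) + STRENGTHS.index(strength)
    vec = np.zeros(len(EMOTIONS) * len(STRENGTHS))
    vec[idx] = 1.0
    return vec

def factorized(emotion: str, strength: str) -> np.ndarray:
    """Factorized scheme: one-hot class concatenated with a scalar strength."""
    cls = np.zeros(len(EMOTIONS))
    cls[EMOTIONS.index(emotion)] = 1.0
    # Map the ordinal strength level to [0, 1] (an assumed normalization).
    s = np.array([STRENGTHS.index(strength) / (len(STRENGTHS) - 1)])
    return np.concatenate([cls, s])

# Either vector would typically be appended to the frame-level linguistic
# features fed to the DNN/RNN acoustic model.
print(joint_one_hot("happy", "strong"))  # 12-dim one-hot
print(factorized("happy", "strong"))     # 5-dim: 4 class dims + 1 strength dim

One practical difference between the two schemes is that the factorized code exposes strength as a continuous control that can be varied at synthesis time, whereas the joint code treats every class-strength combination as an unrelated category.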
Original language: English
Number of pages: 6
Publication status: Accepted/In press - 18 Jul 2017
Event: The 3rd International Workshop on The Affective Social Multimedia Computing 2017 - Stockholm, Sweden
Duration: 25 Aug 2017 - 25 Aug 2017
http://www.nwpu-aslp.org/asmmc2017/

Conference

Conference: The 3rd International Workshop on The Affective Social Multimedia Computing 2017
Abbreviated title: ASMMC2017
Country/Territory: Sweden
City: Stockholm
Period: 25/08/17 - 25/08/17
Internet address: http://www.nwpu-aslp.org/asmmc2017/
