Abstract / Description of output
This paper introduces a continuous system capable of automatically producing the most appropriate speaking style with which to synthesize a desired target text. This is achieved through joint modeling of the acoustic and lexical parameters of the speaker models, adapting the CVSM projection of the training texts using MR-HMM techniques. We consider that, as long as sufficient variety is available in the training data, a continuous lexical space can be mapped into a continuous acoustic space. The proposed continuous text-to-speech system was evaluated by means of a perceptual evaluation comparing it with traditional approaches to the task. The system proved capable of conveying the correct expressiveness (average adequacy of 3.6), with an expressive strength comparable to oracle traditional expressive speech synthesis (average of 3.6), although with a drop in speech quality mainly due to the semi-continuous nature of the data (average quality of 2.9). This means that the proposed system can improve on traditional neutral systems without requiring any additional user interaction.
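The record does not include code; purely as an illustration of the general idea in the abstract, the following is a minimal toy sketch of projecting texts into a continuous lexical vector space and regressing from that space to acoustic style parameters. The bag-of-words projection here merely stands in for the paper's CVSM, and the least-squares mapping stands in for MR-HMM adaptation; the names (`lexical_vector`, `train_texts`, `train_styles`) and all numeric values are hypothetical.

```python
import numpy as np

# Toy illustration (not the paper's implementation):
# 1) project training texts into a continuous lexical space with a
#    simple normalized bag-of-words model (stand-in for CVSM),
# 2) fit a least-squares regression mapping lexical vectors to
#    per-utterance acoustic style parameters (stand-in for MR-HMM
#    adaptation of the speaker models).

train_texts = [
    "breaking news a storm hits the coast tonight",
    "once upon a time in a quiet little village",
    "and the winner of this years grand prize is",
]
# Assumed acoustic style parameters per utterance: [mean F0 (Hz), rate]
train_styles = np.array([
    [180.0, 5.2],   # newscaster-like
    [140.0, 3.8],   # storytelling-like
    [210.0, 4.5],   # announcer-like
])

vocab = sorted({w for t in train_texts for w in t.split()})
word_index = {w: i for i, w in enumerate(vocab)}

def lexical_vector(text):
    """Project a text into the (toy) continuous lexical space."""
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in word_index:
            v[word_index[w]] += 1.0
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

X = np.vstack([lexical_vector(t) for t in train_texts])
# Least-squares mapping from the lexical space to the acoustic style space.
W, *_ = np.linalg.lstsq(X, train_styles, rcond=None)

# A new target text is projected and mapped to a predicted speaking style.
target = "breaking news from the village tonight"
predicted_style = lexical_vector(target) @ W
print("predicted [mean F0, rate]:", predicted_style)
```

In the paper itself the adaptation operates on the parameters of the speaker's HMM models rather than on a two-dimensional style vector; the sketch only illustrates, at a conceptual level, how a continuous lexical representation can drive a continuous acoustic one.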
Original language | English |
---|---|
Title of host publication | Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 369-376 |
Number of pages | 8 |
ISBN (Print) | 978-4-87974-702-0 |
Publication status | Published - 16 Dec 2016 |
Event | 26th International Conference on Computational Linguistics, Osaka, Japan, 11 Dec 2016 → 16 Dec 2016, http://coling2016.anlp.jp/ |
Conference
Conference | 26th International Conference on Computational Linguistics |
---|---|
Abbreviated title | COLING 2016 |
Country/Territory | Japan |
City | Osaka |
Period | 11/12/16 → 16/12/16 |
Internet address | http://coling2016.anlp.jp/ |
Projects
- Simple4All: Speech synthesis that improves through adaptive learning, 1/11/11 → 31/10/14 (Project: Research, Finished)