Normal-to-Lombard Adaptation of Speech Synthesis Using Long Short-Term Memory Recurrent Neural Networks

Bajibabu Bollepalli, Lauri Juvela, Manu Airaksinen, Cassia Valentini Botinhao, Paavo Alku

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

In this article, three adaptation methods are compared based on how well they change the speaking style of a neural network-based text-to-speech (TTS) voice. The speaking style conversion adopted here is from normal to Lombard speech. The selected adaptation methods are: auxiliary features (AF), learning hidden unit contributions (LHUC), and fine-tuning (FT). Furthermore, four state-of-the-art TTS vocoders are compared in the same context. The evaluated vocoders are: GlottHMM, GlottDNN, STRAIGHT, and the pulse model in log-domain (PML). Objective and subjective evaluations were conducted to study the performance of both the adaptation methods and the vocoders. In the subjective evaluations, speaking style similarity and speech intelligibility were assessed. In addition to acoustic model adaptation, phoneme durations were also adapted from normal to Lombard with the FT adaptation method. In the objective evaluations and speaking style similarity tests, we found that the FT method outperformed the other two adaptation methods. In the speech intelligibility tests, we found no significant differences between the vocoders, although the PML vocoder performed slightly better than the other three.
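To make the adaptation methods named in the abstract more concrete, the following is a minimal, hypothetical PyTorch sketch of LHUC adaptation applied to an LSTM acoustic model. The class name LHUCLSTM, the feature dimensions, and the optimiser settings are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch of LHUC (learning hidden unit contributions)
    # adaptation for an LSTM-based acoustic model; assumes PyTorch.
    import torch
    import torch.nn as nn

    class LHUCLSTM(nn.Module):
        """LSTM layer whose hidden activations are scaled per unit by LHUC parameters."""

        def __init__(self, input_dim: int, hidden_dim: int, output_dim: int):
            super().__init__()
            self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, output_dim)
            # One LHUC scaler per hidden unit, initialised to 0 so that the
            # initial scaling 2*sigmoid(0) = 1 leaves the base model unchanged.
            self.lhuc = nn.Parameter(torch.zeros(hidden_dim))

        def forward(self, x):
            h, _ = self.lstm(x)                       # (batch, time, hidden_dim)
            h = h * (2.0 * torch.sigmoid(self.lhuc))  # element-wise scaling in (0, 2)
            return self.out(h)

    # Adaptation step (sketch): freeze the normal-speech model and update only
    # the LHUC scalers on the Lombard data; dimensions and learning rate are placeholders.
    model = LHUCLSTM(input_dim=80, hidden_dim=256, output_dim=60)
    for name, p in model.named_parameters():
        p.requires_grad = (name == "lhuc")
    optimiser = torch.optim.Adam([model.lhuc], lr=1e-3)

Roughly speaking, the FT method would instead update all of the network weights on the Lombard data, while the AF approach would append style-indicating features to the network input; the details in the article may differ from this sketch.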
Original language: English
Pages (from-to): 64-75
Number of pages: 12
Journal: Speech Communication
Volume: 110
Early online date: 18 Apr 2019
DOIs
Publication status: Published - 1 Jul 2019

Keywords

  • Lombard
  • Auxiliary features
  • LHUC
  • Fine-tuning
  • LSTM
  • Adaptation
  • TTS
