Impacts of machine translation and speech synthesis on speech-to-speech translation

Kei Hashimoto, Junichi Yamagishi, William Byrne, Simon King, Keiichi Tokuda

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

This paper analyzes the impacts of machine translation and speech synthesis on speech-to-speech translation systems. A typical speech-to-speech translation system consists of three components: speech recognition, machine translation and speech synthesis. Many techniques have been proposed for integration of speech recognition and machine translation. However, corresponding techniques have not yet been considered for speech synthesis. The focus of the current work is machine translation and speech synthesis, and we present a subjective evaluation designed to analyze their impact on speech-to-speech translation. The results of these analyses show that the naturalness and intelligibility of the synthesized speech are strongly affected by the fluency of the translated sentences. In addition, several features were found to correlate well with the average fluency of the translated sentences and the average naturalness of the synthesized speech.
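To make the cascade structure and the correlation analysis described above concrete, the sketch below shows a minimal pipeline with pluggable recognition, translation, and synthesis components, plus a Pearson correlation between per-sentence fluency ratings of the translations and naturalness ratings of the synthesized speech. This is an illustrative sketch, not the authors' implementation; all component names and the example scores are hypothetical.

```python
# Hypothetical sketch (not the paper's code): a cascaded speech-to-speech
# translation pipeline and a correlation check between sentence-level
# fluency ratings (translation) and naturalness ratings (synthesis).

from dataclasses import dataclass
from statistics import mean
from typing import Callable, List


@dataclass
class S2STPipeline:
    """Cascade of the three components named in the abstract."""
    recognize: Callable[[bytes], str]    # speech recognition: audio -> source text
    translate: Callable[[str], str]      # machine translation: source -> target text
    synthesize: Callable[[str], bytes]   # speech synthesis: target text -> audio

    def run(self, source_audio: bytes) -> bytes:
        source_text = self.recognize(source_audio)
        target_text = self.translate(source_text)
        return self.synthesize(target_text)


def pearson(xs: List[float], ys: List[float]) -> float:
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5


# Illustrative (made-up) 5-point subjective ratings per sentence.
fluency_scores = [4.2, 3.1, 2.5, 4.8, 3.6]      # fluency of translated sentences
naturalness_scores = [4.0, 3.3, 2.8, 4.6, 3.4]  # naturalness of synthesized speech
print(f"correlation: {pearson(fluency_scores, naturalness_scores):.3f}")
```

A strong positive coefficient in such an analysis would reflect the paper's finding that the naturalness and intelligibility of synthesized speech track the fluency of the translated sentences.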
Original language: English
Pages (from-to): 857-866
Number of pages: 10
Journal: Speech Communication
Volume: 54
Issue number: 7
DOIs
Publication status: Published - 2012
