Edinburgh Research Explorer

Dr Junichi Yamagishi

Senior Research Fellow

Willingness to take Ph.D. students: Yes

Research Interests

Dr. Junichi Yamagishi’s research interests lie in speech information processing, where he has concentrated mainly on statistical speech synthesis. His research bridges the gaps between speech processing and other fields, including machine learning, linguistics and speech production. He ultimately aims to place speech synthesis on a more scientific basis and to create commercially and socially useful speech technologies.

Qualifications

2003-2006

PhD, Tokyo Institute of Technology

Thesis title: “Average-voice speech synthesis”

Awarded the Japan Tejima PhD Thesis Award

2002-2003

MEng, Tokyo Institute of Technology, Information Processing

1998-2002

BEng, Tokyo Institute of Technology, Computer Science

Biography

Dr. Junichi Yamagishi received his MEng in Information Processing from the Tokyo Institute of Technology. His PhD, also from the Tokyo Institute of Technology, was awarded the Japan Tejima PhD Thesis Award, given each year to the best PhD thesis at the institute. He was a Research Fellow of the Japan Society for the Promotion of Science from 2004 to 2007, and then a Research Fellow at the School of Informatics, University of Edinburgh, from 2007 to 2013. He is currently an EPSRC Career Acceleration Fellow at the School of Informatics, University of Edinburgh. He is also an Associate Professor in the Digital Content and Media Science Research Division at the National Institute of Informatics, Japan, and a Visiting Associate Professor at the Nagoya Institute of Technology, Japan.
 
Dr. Yamagishi has been principal investigator on various high-profile projects: the EMC donation-funded “Voice reconstruction”; the Royal Society of Edinburgh RSE-NSFC (China) jointly funded “Unified articulatory-acoustic modelling for flexible and controllable speech synthesis”; and the MRC Confidence in Concept Edinburgh Partners Devices and Diagnostics award “Delivering personalised voices for patients with MND”.
 
He is currently principal investigator on: the EPSRC CAF-funded “Deep architectures for statistical speech synthesis”; the JST CREST-funded “User-generated dialogue systems: uDialogue”; the Swiss National Science Foundation Sinergia programme award “SIWIS: Spoken Interaction with Interpretation in Switzerland”; and the Sparkling Science BMWF (Austria) award “Speech synthesis for the blind” (Sprachsynthese von auditiven Lehrbüchern für blinde SchülerInnen, SALB).
 
Through these projects:
· Dr. Yamagishi developed a new algorithm enabling a large number of different synthetic voices to be constructed easily from small amounts of speech data, resulting in HTS, an open-source toolkit for statistical speech synthesis. HTS is used worldwide by both academic and commercial organisations such as Microsoft, Nuance, Toshiba, Pentax and Google (the Android speech synthesiser is based on the HTS toolkit).
· Dr. Yamagishi’s synthesised speech was found to be as intelligible as human speech in the Blizzard Challenge, the first time synthesised speech achieved this landmark result.
· Dr. Yamagishi’s adaptive speech synthesis has been used to create dialectal and child speech from very small amounts of data. It has been successfully applied to clinical voice banking and personalised voice reconstruction for patients who have lost, or are losing, their voices due to conditions such as motor neurone disease or Parkinson’s disease.
