A lecture transcription system combining neural network acoustic and language models

P Bell, H Yamamoto, P Swietojanski, Y Wu, F McInnes, C Hori, S Renals

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents a new system for automatic transcription of lectures. The system combines a number of novel features, including deep neural network acoustic models using multi-level adaptive networks to incorporate out-of-domain information, and factored recurrent neural network language models. We demonstrate that the system achieves large improvements on the TED lecture transcription task from the 2012 IWSLT evaluation. Our results are currently the best reported on this task, showing a relative WER reduction of more than 16% compared to the closest competing system from the evaluation.
Original language: English
Title of host publication: Proc. Interspeech 2013
Publication status: Published - 2013

