Segmental Recurrent Neural Networks for End-to-end Speech Recognition

Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith, Steve Renals

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We study the segmental recurrent neural network for end-to-end acoustic modelling. This model connects the segmental conditional random field (CRF) with a recurrent neural network (RNN) used for feature extraction. Compared to most previous CRF-based acoustic models, it does not rely on an external system to provide features or segmentation boundaries. Instead, this model marginalises out all the possible segmentations, and features are extracted from the RNN trained jointly with the segmental CRF. In essence, this model is self-contained and can be trained end-to-end. In this paper, we discuss practical training and decoding issues as well as methods to speed up training in the context of speech recognition. We performed experiments on the TIMIT dataset. We achieved a 17.3% phone error rate (PER) from first-pass decoding --- the best reported result using CRFs --- despite using only a zeroth-order CRF and no language model.
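The marginalisation over all possible segmentations mentioned in the abstract is typically computed with a forward-style dynamic program. The sketch below is only an illustration of that idea, not the paper's implementation; the `segment_score` callback (standing in for the RNN-derived, label-marginalised segment score of a zeroth-order segmental CRF) and the function name are hypothetical.

```python
import math

def log_partition(frames, segment_score, max_len):
    """Marginalise over all segmentations of a length-T frame sequence.

    alpha[t] = logsumexp over start positions s of
               alpha[s] + segment_score(frames, s, t),
    where segment_score scores the segment covering frames[s:t].
    With a zeroth-order CRF the segment score does not depend on
    the labels of neighbouring segments, so this single pass suffices.
    """
    T = len(frames)
    alpha = [float("-inf")] * (T + 1)
    alpha[0] = 0.0  # empty prefix
    for t in range(1, T + 1):
        # consider every segment ending at t, up to max_len frames long
        terms = [alpha[s] + segment_score(frames, s, t)
                 for s in range(max(0, t - max_len), t)]
        m = max(terms)  # stabilised log-sum-exp
        alpha[t] = m + math.log(sum(math.exp(x - m) for x in terms))
    return alpha[T]  # log partition function over segmentations
```

As a sanity check, with a constant score of 0 and no length limit, the result is the log of the number of segmentations of T frames, i.e. log(2^(T-1)): for T = 3, `log_partition([0, 0, 0], lambda f, s, t: 0.0, 3)` returns log 4.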
Original language: English
Title of host publication: Proceedings of Interspeech 2016
Place of publication: San Francisco, United States
Number of pages: 5
Publication status: Published - 12 Sep 2016
Event: Interspeech 2016 - San Francisco, United States
Duration: 8 Sep 2016 - 12 Sep 2016

Publication series

Publisher: International Speech Communication Association
ISSN (Print): 1990-9772


Conference: Interspeech 2016
Country/Territory: United States
City: San Francisco


