Blending LSTMs into CNNs

Krzysztof J. Geras, Abdel-rahman Mohamed, Rich Caruana, Gregor Urban, Shengjie Wang, Özlem Aslan, Matthai Philipose, Matthew Richardson, Charles Sutton

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We consider whether deep convolutional networks (CNNs) can represent decision functions with accuracy similar to recurrent networks such as LSTMs. First, we show that a deep CNN with an architecture inspired by models recently introduced for image recognition can yield better accuracy than previous convolutional and LSTM networks on the standard 309-hour Switchboard automatic speech recognition task. We then show that even more accurate CNNs can be trained under the guidance of LSTMs using a variant of model compression, which we call model blending because the teacher and student models are similar in complexity but differ in inductive bias. Blending further improves the accuracy of our CNN, yielding a computationally efficient model with accuracy higher than any of the individual models. Examining the effect of “dark knowledge” in this model compression task, we find that less than 1% of the highest-probability labels are needed for accurate model compression.
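The model-compression setup the abstract describes trains a student on the teacher's soft output distribution, and its “dark knowledge” finding is that only a small fraction of the highest-probability labels are needed. The sketch below is a minimal, hedged illustration of that soft-target side only (not the authors' actual pipeline): the function names, the temperature value, and the random teacher logits are all assumptions made here for illustration.

```python
import numpy as np

def soften(logits, T=2.0):
    """Turn teacher logits into a soft target distribution at temperature T."""
    z = logits / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def top_k_targets(probs, k):
    """Keep only the k highest-probability labels per frame and renormalise,
    loosely mimicking the paper's finding that a small fraction of the
    highest-probability labels suffices for accurate compression."""
    out = np.zeros_like(probs)
    idx = np.argsort(probs, axis=-1)[:, -k:]          # top-k label indices
    rows = np.arange(probs.shape[0])[:, None]
    out[rows, idx] = probs[rows, idx]
    return out / out.sum(axis=-1, keepdims=True)

# Hypothetical teacher logits over 10 classes for 4 input frames.
rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(4, 10))
soft = soften(teacher_logits, T=2.0)
sparse = top_k_targets(soft, k=3)  # truncated soft targets for the student
```

A student CNN would then be trained with cross-entropy against `sparse` (or `soft`) instead of, or in addition to, the one-hot labels; the truncation step is what makes storing teacher targets for a large speech corpus cheap.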
Original language: English
Title of host publication: International Conference on Learning Representations (ICLR Workshop)
Number of pages: 13
Publication status: Accepted/In press - 4 Feb 2016
Event: 4th International Conference on Learning Representations - San Juan, Puerto Rico
Duration: 2 May 2016 – 4 May 2016
https://iclr.cc/archive/www/doku.php%3Fid=iclr2016:main.html

Conference

Conference: 4th International Conference on Learning Representations
Abbreviated title: ICLR 2016
Country/Territory: Puerto Rico
City: San Juan
Period: 2/05/16 – 4/05/16
Internet address
