Abstract
Although the lower layers of a deep neural network learn features which are transferable across datasets, these layers are not transferable within the same dataset. That is, in general, freezing the trained feature extractor (the lower layers) and retraining the classifier (the upper layers) on the same dataset leads to worse performance. In this paper, for the first time, we show that the frozen classifier is transferable within the same dataset. We develop a novel top-down training method which can be viewed as an algorithm for searching for high-quality classifiers. We tested this method on automatic speech recognition (ASR) tasks and language modelling tasks. The proposed method consistently improves recurrent neural network ASR models on Wall Street Journal, self-attention ASR models on Switchboard, and AWD-LSTM language models on WikiText-2.
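As a concrete illustration of the setup the abstract contrasts, the sketch below shows the reverse of the usual transfer recipe: freezing the trained classifier (the upper layers) and retraining only the feature extractor (the lower layers) on the same dataset. This is a minimal sketch assuming PyTorch; the module names, layer sizes, and optimiser are illustrative placeholders, not the paper's cascade training algorithm or its exact configuration.

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    """Toy network split into lower layers (feature extractor) and upper layers (classifier)."""
    def __init__(self, input_dim=80, hidden_dim=256, num_classes=1000):
        super().__init__()
        # Lower layers: feature extractor.
        self.feature_extractor = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Upper layers: classifier.
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        return self.classifier(self.feature_extractor(x))

model = Model()
# ... assume `model` has already been trained once on the dataset ...

# Freeze the classifier (upper layers) and retrain only the lower layers
# on the same dataset -- the direction the paper shows to be transferable.
for p in model.classifier.parameters():
    p.requires_grad = False

optimiser = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```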
Original language | English |
---|---|
Title of host publication | 2021 IEEE International Conference on Acoustics, Speech and Signal Processing |
Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
Number of pages | 5 |
Publication status | Accepted/In press - 30 Jan 2021 |
Event | 46th IEEE International Conference on Acoustics, Speech and Signal Processing - Toronto, Canada. Duration: 6 Jun 2021 → 11 Jun 2021. https://2021.ieeeicassp.org/ |
Conference
Conference | 46th IEEE International Conference on Acoustics, Speech and Signal Processing |
---|---|
Abbreviated title | ICASSP 2021 |
Country | Canada |
City | Toronto |
Period | 6/06/21 → 11/06/21 |
Internet address | https://2021.ieeeicassp.org/ |
Keywords
- top-down training
- layer-wise training
- general classifier
- speech recognition
- language model