Edinburgh Research Explorer

Multi-Scale Octave Convolutions for Robust Speech Recognition

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Original language: English
Title of host publication: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 7019-7023
Number of pages: 5
ISBN (Electronic): 978-1-5090-6631-5
ISBN (Print): 978-1-5090-6632-2
DOIs
Publication status: Published - 14 May 2020
Event: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing - Barcelona, Spain
Duration: 4 May 2020 - 8 May 2020
Conference number: 45

Publication series

Name
Publisher: IEEE
ISSN (Print): 1520-6149
ISSN (Electronic): 2379-190X

Conference

Conference: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing
Abbreviated title: ICASSP 2020
Country: Spain
City: Barcelona
Period: 4/05/20 - 8/05/20

Abstract

We propose a multi-scale octave convolution layer to learn robust speech representations efficiently. Octave convolutions were introduced by Chen et al. [1] in the computer vision field to reduce the spatial redundancy of the feature maps by decomposing the output of a convolutional layer into feature maps at two different spatial resolutions, one octave apart. This approach improved the efficiency as well as the accuracy of CNN models. The accuracy gain was attributed to the enlargement of the receptive field in the original input space. We argue that octave convolutions likewise improve the robustness of learned representations due to the use of average pooling in the lower-resolution group, which acts as a low-pass filter. We test this hypothesis by evaluating on two noisy speech corpora: Aurora-4 and AMI. We extend the octave convolution concept to multiple resolution groups and multiple octaves. To evaluate the robustness of the inferred representations, we report the similarity between clean and noisy encodings using an affine projection loss as a proxy robustness measure. The results show that the proposed method reduces the WER by up to 6.6% relative for Aurora-4 and 3.6% for AMI, while improving the computational efficiency of the CNN acoustic models.
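The core decomposition described in the abstract can be illustrated with a minimal sketch (not the authors' code): a fraction of the feature-map channels, controlled by a hypothetical ratio `alpha`, is assigned to a low-frequency group that is average-pooled by a factor of two, i.e. one octave down, while the rest stay at full resolution. The average pooling is the low-pass filtering step the paper argues contributes to noise robustness.

```python
import numpy as np

def avg_pool2x2(x):
    # x: (channels, H, W) with even H and W.
    # 2x2 average pooling = low-pass filter + downsample by one octave.
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def octave_split(feature_map, alpha=0.25):
    """Split a feature map into high- and low-frequency channel groups
    one octave apart. `alpha` (an illustrative parameter, following the
    octave-convolution formulation) is the fraction of channels assigned
    to the low-resolution group."""
    c = feature_map.shape[0]
    c_low = int(alpha * c)
    high = feature_map[c_low:]               # full spatial resolution
    low = avg_pool2x2(feature_map[:c_low])   # half resolution (one octave down)
    return high, low

# Toy input shaped like a speech feature map: (channels, freq bins, frames).
x = np.random.randn(8, 40, 100)
high, low = octave_split(x, alpha=0.25)
print(high.shape, low.shape)  # (6, 40, 100) (2, 20, 50)
```

A full octave convolution layer would additionally apply separate convolutions within and between the two groups (with upsampling/pooling to match resolutions); the multi-scale extension in the paper generalises this to more than two groups and octaves.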



ID: 137076919