Analyzing deep CNN-based utterance embeddings for acoustic model adaptation

Joanna Równicka, Peter Bell, Steve Renals

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


We explore why deep convolutional neural networks (CNNs) with small two-dimensional kernels, primarily used for modeling spatial relations in images, are also effective in speech recognition. We analyze the representations learned by deep CNNs and compare them with deep neural network (DNN) representations and i-vectors, in the context of acoustic model adaptation. To explore whether interpretable information can be decoded from the learned representations, we evaluate their ability to discriminate between speakers, acoustic conditions, noise type, and gender using the Aurora-4 dataset. We extract both whole-model embeddings (to capture the information learned across the whole network) and layer-specific embeddings, which enable understanding of the flow of information across the network. We also use the learned representations as additional input to a time-delay neural network (TDNN) for the Aurora-4 and MGB-3 English datasets. We find that deep CNN embeddings outperform DNN embeddings for acoustic model adaptation, and that auxiliary features based on deep CNN embeddings result in word error rates similar to those of i-vectors.
Original language: English
Title of host publication: 2018 IEEE Spoken Language Technology Workshop (SLT)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 7
ISBN (Electronic): 978-1-5386-4334-1
ISBN (Print): 978-1-5386-4335-8
Publication status: Published - 14 Feb 2019
Event: 2018 IEEE Workshop on Spoken Language Technology (SLT) - Athens, Greece
Duration: 18 Dec 2018 - 21 Dec 2018


Conference: 2018 IEEE Workshop on Spoken Language Technology (SLT)
Abbreviated title: IEEE SLT 2018


Keywords:
  • CNN embeddings
  • adaptation
  • utterance summary
  • i-vectors

