Speaker adaptive training using model agnostic meta-learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Speaker adaptive training (SAT) of neural network acoustic models learns models in a way that makes them more suitable for adaptation to test conditions. Conventionally, model-based speaker adaptive training is performed with a set of speaker-dependent parameters that are jointly optimised with the speaker-independent parameters in order to remove speaker variation. However, this does not scale well if all neural network weights are to be adapted to the speaker. In this paper we formulate speaker adaptive training as a meta-learning task, in which an adaptation process using gradient descent is encoded directly into the training of the model. We compare our approach with test-only adaptation of a standard baseline model and with a SAT-LHUC model using a learned speaker adaptation schedule, and demonstrate that the meta-learning approach achieves comparable results.
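The meta-learning formulation described in the abstract can be illustrated with a toy first-order MAML loop. This is a minimal sketch under illustrative assumptions, not the paper's actual recipe: each "speaker" is a synthetic linear-regression task, the inner loop performs one gradient step of speaker adaptation, and the outer loop updates the shared initialisation so that this adaptation step works well across speakers.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    # Mean squared error and its gradient for a linear model y ~ X @ w.
    err = X @ w - y
    return float(np.mean(err ** 2)), 2 * X.T @ err / len(y)

def make_speaker_task(base_w):
    # Hypothetical "speaker": same input space, speaker-specific target
    # mapping (a perturbed weight vector stands in for speaker variation).
    w_s = base_w + 0.3 * rng.standard_normal(base_w.shape)
    X = rng.standard_normal((20, base_w.size))
    y = X @ w_s
    return X[:10], y[:10], X[10:], y[10:]  # adaptation / evaluation splits

true_w = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)                  # meta-learned initialisation
inner_lr, outer_lr = 0.05, 0.05

for step in range(200):
    meta_grad = np.zeros_like(w)
    for _ in range(4):           # batch of speakers per meta-update
        Xa, ya, Xe, ye = make_speaker_task(true_w)
        _, g = loss_and_grad(w, Xa, ya)
        w_adapted = w - inner_lr * g           # inner (speaker) adaptation
        _, g_outer = loss_and_grad(w_adapted, Xe, ye)
        meta_grad += g_outer                   # first-order approximation
    w -= outer_lr * meta_grad / 4              # outer (meta) update

# After meta-training, one adaptation step on a new speaker's data
# should reduce the loss on that speaker's held-out evaluation data.
Xa, ya, Xe, ye = make_speaker_task(true_w)
loss_before, _ = loss_and_grad(w, Xe, ye)
w_adapted = w - inner_lr * loss_and_grad(w, Xa, ya)[1]
loss_after, _ = loss_and_grad(w_adapted, Xe, ye)
print(loss_before, loss_after)
```

The first-order approximation drops the second-order term of the full MAML meta-gradient; the paper's actual models, adaptation schedules, and losses differ, but the structure (gradient-descent adaptation encoded into training) is the same.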
Original language: English
Title of host publication: 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 8
ISBN (Electronic): 978-1-7281-0306-8
ISBN (Print): 978-1-7281-0307-5
Publication status: Published - 20 Feb 2020
Event: IEEE Automatic Speech Recognition and Understanding Workshop 2019 - Sentosa, Singapore
Duration: 14 Dec 2019 - 18 Dec 2019


Conference: IEEE Automatic Speech Recognition and Understanding Workshop 2019
Abbreviated title: ASRU 2019


  • speaker adaptation
  • speaker adaptive training
  • model-agnostic meta-learning


