MetaAudio: A Few-Shot Audio Classification Benchmark

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Currently available benchmarks for few-shot learning (machine learning with few training examples) are limited in the domains they cover, primarily focusing on image classification. This work aims to alleviate this reliance on image-based benchmarks by offering the first comprehensive, public and fully reproducible audio-based alternative, covering a variety of sound domains and experimental settings. We compare the few-shot classification performance of a variety of techniques on seven audio datasets (spanning environmental sounds to human speech). Extending this, we carry out in-depth analyses of joint training (where all datasets are used during training) and cross-dataset adaptation protocols, establishing the possibility of a generalised audio few-shot classification algorithm. Our experimentation shows that gradient-based meta-learning methods such as MAML and Meta-Curvature consistently outperform both metric and baseline methods. We also demonstrate that the joint training routine helps overall generalisation for the environmental sound datasets included, as well as being a somewhat effective method of tackling the cross-dataset/domain setting.
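
For context, gradient-based meta-learners of the kind evaluated here (e.g. MAML) adapt a model to each episode's small support set with a few inner gradient steps and then optimise the shared initialisation against the query-set loss. The sketch below illustrates one such N-way K-shot episode; it assumes PyTorch (2.0+, for torch.func) and log-mel spectrogram inputs, and the AudioCNN architecture and all hyper-parameters are illustrative placeholders rather than the benchmark's actual configuration.

```python
# Minimal MAML-style episode sketch (illustrative only, not the paper's setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

N_WAY, K_SHOT, N_QUERY = 5, 1, 5   # hypothetical 5-way 1-shot episode
INNER_LR, INNER_STEPS = 0.4, 1     # illustrative inner-loop hyper-parameters

class AudioCNN(nn.Module):
    """Small CNN over (1, mel_bins, frames) log-mel spectrogram inputs."""
    def __init__(self, n_classes=N_WAY):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1),
            nn.BatchNorm2d(32, track_running_stats=False), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 3, padding=1),
            nn.BatchNorm2d(32, track_running_stats=False), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def maml_episode(model, support_x, support_y, query_x, query_y):
    """Adapt a copy of the weights on the support set, then return the
    query-set loss computed through the adapted weights."""
    params = dict(model.named_parameters())
    for _ in range(INNER_STEPS):
        logits = torch.func.functional_call(model, params, (support_x,))
        inner_loss = F.cross_entropy(logits, support_y)
        grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
        params = {name: p - INNER_LR * g
                  for (name, p), g in zip(params.items(), grads)}
    query_logits = torch.func.functional_call(model, params, (query_x,))
    return F.cross_entropy(query_logits, query_y)

if __name__ == "__main__":
    model = AudioCNN()
    meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Random tensors stand in for one sampled episode of spectrograms.
    sx = torch.randn(N_WAY * K_SHOT, 1, 64, 128)
    sy = torch.arange(N_WAY).repeat_interleave(K_SHOT)
    qx = torch.randn(N_WAY * N_QUERY, 1, 64, 128)
    qy = torch.arange(N_WAY).repeat_interleave(N_QUERY)

    meta_opt.zero_grad()
    outer_loss = maml_episode(model, sx, sy, qx, qy)
    outer_loss.backward()   # meta-gradient w.r.t. the shared initialisation
    meta_opt.step()
    print(f"query loss after adaptation: {outer_loss.item():.3f}")
```

In this pattern the inner loop never mutates the model's stored parameters; the adapted weights live in a temporary dictionary, so the outer optimiser updates only the initialisation shared across episodes.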
Original language: English
Title of host publication: Artificial Neural Networks and Machine Learning – ICANN 2022
Subtitle of host publication: 31st International Conference on Artificial Neural Networks, Bristol, UK, September 6–9, 2022, Proceedings, Part I
Publisher: Springer
Pages: 219–230
Number of pages: 12
Volume: 13529
ISBN (Electronic): 978-3-031-15919-0
ISBN (Print): 978-3-031-15918-3
DOIs
Publication status: Published - 7 Sept 2022

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 13529
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
