A neurally-inspired musical instrument classification system based upon the sound onset

Michael Newton, Leslie Smith

Research output: Contribution to journal › Article › peer-review


Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset, drawn from five instrument categories. A neurally-inspired tone descriptor is created using a model of the auditory system's response to the sound onset: a gammatone filterbank, followed by spiking onset detectors built from dynamic synapses and leaky integrate-and-fire neurons, produces parallel spike trains that emphasize the sound onset. These spike trains are encoded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide a baseline for comparison. Classification success rates for the neurally-inspired method are around 75%, while the cepstral methods achieve between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally-inspired method is considerably more robust when tested with data from an unrelated dataset.
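The core mechanism described above, a leaky integrate-and-fire neuron with a depressing synapse that fires preferentially at the rising edge of a filterbank channel's envelope, can be illustrated with a toy sketch. This is not the published model: the synapse and neuron equations, parameter values, and the synthetic input below are all illustrative assumptions, standing in for one channel of the paper's gammatone/onset-detector front end.

```python
import numpy as np

def lif_onset_spikes(band_env, fs, tau=0.01, threshold=0.05, depress=0.5):
    """Leaky integrate-and-fire neuron driven by one band envelope.

    A depressing 'synapse' (efficacy w) scales down sustained input, so
    the neuron fires densely when the envelope first rises (the onset)
    and only sparsely afterwards. All values are illustrative.
    """
    dt = 1.0 / fs
    decay = np.exp(-dt / tau)   # per-sample membrane leak
    v = 0.0                     # membrane potential
    w = 1.0                     # synaptic efficacy (depresses with activity)
    spikes = []
    for n, x in enumerate(band_env):
        v = v * decay + w * max(x, 0.0) * dt / tau   # leaky integration
        w += (1.0 - w) * dt / 0.2                    # slow recovery (~200 ms)
        w -= depress * w * max(x, 0.0) * dt / tau    # fast depression on input
        if v >= threshold:
            spikes.append(n)                         # record spike time (samples)
            v = 0.0                                  # reset after firing
    return np.array(spikes)

# Synthetic 'tone' in one frequency band: silence, then a sharp onset
# to a sustained level at sample 800.
fs = 8000
env = np.concatenate([np.zeros(800), np.ones(4000) * 0.5])
spk = lif_onset_spikes(env, fs)
```

Running this, the spike train is densest just after the onset at sample 800 and thins out as the synapse depresses, which is the behaviour the onset fingerprint exploits: across many gammatone channels, these parallel spike trains carry a frequency-by-time picture of the attack transient.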
Original language: English
Pages (from-to): 4785-4798
Journal: The Journal of the Acoustical Society of America
Issue number: 6
Publication status: Published - 1 Jun 2012


  • acoustic signal processing
  • time-domain analysis
  • neural nets
  • musical instruments
  • cepstral analysis
