Classification of vocalizations by recordings from the auditory midbrain

Dominika Lyzwa, Michael Herrmann

Research output: Contribution to journal › Article › peer-review

Abstract

The temporal and spatial properties of responses to complex stimuli in the central nucleus of the inferior colliculus (ICC), the main converging station of the auditory midbrain, can provide evidence for coding principles in the auditory system and are relevant to the design of neuroprostheses. We study responses of guinea pigs to a set of eleven species-specific vocalizations that span a wide range of spectral contents, envelope types, and frequency and amplitude modulations. The envelopes of the acoustically presented stimuli are characterized as complex or periodic impulses and have various degrees of periodicity; the frequency content ranges from harmonic structures to broad spectral distributions. The data analyzed were multi-unit recordings taken simultaneously from 32 positions in the ICC of guinea pigs using a double-shank electrode. Peristimulus time histograms (PSTHs) of the high-dimensional recordings were classified by linear discriminant analysis in order to evaluate the spatial and temporal distribution of stimulus-related information without assuming a specific coding scheme.

Neighboring neural populations respond in a similar manner and have highly correlated PSTHs. Combining responses from different positions improves classification performance for distant positions, whose responses are uncorrelated, but not for adjacent positions. The separability of responses to vocalizations along the tonotopic gradient of the ICC varies greatly with their spectral properties. Low-frequency stimuli are better separable in low characteristic frequency (CF) laminae than stimuli with complex envelopes or broad spectra. Some stimuli of the latter type (squeal, whistle) are discriminated nearly perfectly in multi-units with high CF, whereas discrimination of low-frequency stimuli is poorer (60-70%) in this region (see Figure 1). For multi-units positioned in the mid-frequency range along the tonotopic axis, all stimuli can be separated at a level similar to that obtained in the low/high CF regions for their respective preferred stimuli.
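The classification scheme described above — treating each trial's PSTH as a high-dimensional vector and assigning it to one of the vocalization classes — can be sketched in simplified form. The snippet below is a minimal illustration, not the study's pipeline: it substitutes a nearest-centroid rule for linear discriminant analysis, and the stimulus labels, bin count, trial count, and noise model are all illustrative assumptions.

```python
import random

random.seed(0)

N_BINS = 50       # time bins per PSTH (assumed value, for illustration)
N_TRIALS = 20     # trials per vocalization class (assumed)
CLASSES = ["purr", "squeal", "whistle"]  # hypothetical stimulus labels

def make_psth(template, noise=0.3):
    # Synthetic single-trial PSTH: class template plus Gaussian noise,
    # clipped at zero because firing rates are non-negative.
    return [max(0.0, t + random.gauss(0, noise)) for t in template]

# One firing-rate template per class (purely synthetic data).
templates = {c: [random.random() for _ in range(N_BINS)] for c in CLASSES}

# Labelled "training" set of noisy trials.
data = [(c, make_psth(templates[c])) for c in CLASSES for _ in range(N_TRIALS)]

def centroid(vectors):
    # Mean PSTH across trials, bin by bin.
    return [sum(col) / len(vectors) for col in zip(*vectors)]

centroids = {c: centroid([v for lab, v in data if lab == c]) for c in CLASSES}

def classify(psth):
    # Assign a trial to the class with the nearest centroid
    # (squared Euclidean distance over all time bins).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CLASSES, key=lambda c: dist(psth, centroids[c]))

# Evaluate on fresh noisy trials drawn from the same templates.
correct = sum(classify(make_psth(templates[c])) == c
              for c in CLASSES for _ in range(N_TRIALS))
accuracy = correct / (len(CLASSES) * N_TRIALS)
```

Because each synthetic class template is well separated relative to the trial noise, the nearest-centroid rule recovers the labels with high accuracy here; in the actual recordings, separability depends on the CF region and the stimulus spectrum, as the abstract notes.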
Original language: English
Pages (from-to): 1-2
Number of pages: 2
Journal: BMC Neuroscience
Issue number: 1
Publication status: Published - 2012
