Deep multimodal autoencoders for identifying latent representations of spike counts and local field potentials

Arno Onken, Josue Yague, Shuzo Sakata

Research output: Contribution to conference › Poster › peer-review

Abstract / Description of output

Advances in recording techniques lead to datasets of neural activity with ever increasing dimensionality and complexity. This trend calls for better analysis techniques to identify low-dimensional latent structure that can provide better insights into neural processing. Recently, various forms of deep autoencoders have achieved remarkable success in the identification of latent representations from complex high-dimensional data [1-3]. Here, we apply deep autoencoders to model spontaneous neural activity simultaneously recorded from the basal forebrain (BF) and the auditory cortex (AC) of mice [4]. In particular, we explore the benefits of multimodality for reconstruction performance and for latent space representations. We train deep multimodal autoencoders with early modality fusion to find low-dimensional representations of spike counts and local field potentials on a timescale of seconds. For the different modalities, we use distinct loss functions implicitly modeling particular statistical distributions. We compare the held-out reconstruction performance after training on the original data and after training on data where we shuffled the samples of one modality but not the other, thereby destroying information that one modality carries about the other. We refer to the latter as unimodal reconstruction performance. Furthermore, we compare the latent representations of BF and AC activities. We find that for the AC, multimodal reconstruction performance is significantly greater than the unimodal one, whereas for the BF, the reconstruction performance does not benefit from a joint multimodal representation. Our results suggest differences in the BF and AC latent spaces that give rise to the observed spike counts and local field potentials. They further demonstrate that deep autoencoders are useful and versatile tools for identifying low-dimensional multimodal neural representations.
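The core ideas of the abstract (early modality fusion, modality-specific losses that implicitly model particular statistical distributions, and the shuffle control that destroys cross-modal information) can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the authors' implementation: all layer sizes and dimensionalities are made up, and the choice of Poisson negative log-likelihood for spike counts and mean squared error (a Gaussian model) for LFPs is one plausible reading of "distinct loss functions implicitly modeling particular statistical distributions".

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(x, 0.0)


class MultimodalAE:
    """Early-fusion multimodal autoencoder sketch (forward pass only).

    Spike counts and LFP samples are concatenated before encoding into a
    shared low-dimensional latent space; separate decoder heads allow each
    modality to use its own loss. All sizes here are illustrative.
    """

    def __init__(self, n_spk=30, n_lfp=64, n_hid=32, n_lat=5):
        d_in = n_spk + n_lfp
        self.W_enc1 = rng.normal(0.0, 0.1, (d_in, n_hid))
        self.W_enc2 = rng.normal(0.0, 0.1, (n_hid, n_lat))
        self.W_spk = rng.normal(0.0, 0.1, (n_lat, n_spk))
        self.W_lfp = rng.normal(0.0, 0.1, (n_lat, n_lfp))

    def forward(self, spk, lfp):
        # Early fusion: concatenate both modalities before encoding.
        fused = np.concatenate([spk, lfp], axis=-1)
        z = relu(fused @ self.W_enc1) @ self.W_enc2
        log_rate = z @ self.W_spk   # log firing rate (Poisson model of counts)
        lfp_hat = z @ self.W_lfp    # reconstructed LFP (Gaussian mean)
        return log_rate, lfp_hat, z


def multimodal_loss(log_rate, lfp_hat, spk, lfp):
    # Poisson NLL (up to an additive constant) for counts,
    # MSE (implicitly a Gaussian model) for the LFP.
    poisson = np.mean(np.exp(log_rate) - spk * log_rate)
    mse = np.mean((lfp_hat - lfp) ** 2)
    return poisson + mse


def shuffle_modality(x, rng):
    # Permute the samples (time bins) of one modality independently of the
    # other, destroying the information it carries about that modality;
    # reconstruction after training on such data gives the "unimodal"
    # baseline described in the abstract.
    return x[rng.permutation(len(x))]
```

The separate decoder heads are what make modality-specific losses possible: the shared latent code must jointly explain both output streams, so comparing reconstruction error with and without the shuffle control quantifies how much one modality's representation benefits from the other.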
Original language: English
DOIs
Publication status: Published - 27 Sept 2018
Event: Bernstein Conference 2018 - Berlin, Germany
Duration: 25 Sept 2018 – 28 Sept 2018
https://abstracts.g-node.org/conference/BC18

Conference

Conference: Bernstein Conference 2018
Abbreviated title: BS 2018
Country/Territory: Germany
City: Berlin
Period: 25/09/18 – 28/09/18

