Edinburgh Research Explorer

Deep multimodal autoencoders for identifying latent representations of spike counts and local field potentials

Research output: Contribution to conference › Poster

Open Access permissions: Open

Original language: English
DOIs
Publication status: Published - 27 Sep 2018
Event: Bernstein Conference 2018 - Berlin, Germany
Duration: 25 Sep 2018 - 28 Sep 2018
https://abstracts.g-node.org/conference/BC18

Conference

Conference: Bernstein Conference 2018
Abbreviated title: BS 2018
Country: Germany
City: Berlin
Period: 25/09/18 - 28/09/18

Abstract

Advances in recording techniques are producing datasets of neural activity with ever-increasing dimensionality and complexity. This trend calls for better analysis techniques to identify low-dimensional latent structure that can provide better insights into neural processing. Recently, various forms of deep autoencoders have achieved remarkable success in identifying latent representations from complex high-dimensional data [1-3]. Here, we apply deep autoencoders to model spontaneous neural activity simultaneously recorded from the basal forebrain (BF) and the auditory cortex (AC) of mice [4]. In particular, we explore the benefits of multimodality for reconstruction performance and for latent space representations. We train deep multimodal autoencoders with early modality fusion to find low-dimensional representations of spike counts and local field potentials on a timescale of seconds. For the different modalities, we use distinct loss functions that implicitly model particular statistical distributions. We compare the held-out reconstruction performance after training on the original data with that after training on data in which we shuffled the samples of one modality but not the other, thereby destroying the information that one modality carries about the other. We refer to the latter as unimodal reconstruction performance. Furthermore, we compare the latent representations of BF and AC activities. We find that for the AC, multimodal reconstruction performance is significantly greater than the unimodal one, whereas for the BF, the reconstruction performance does not benefit from a joint multimodal representation. Our results suggest differences in the BF and AC latent spaces that give rise to the observed spike counts and local field potentials. They further demonstrate that deep autoencoders are useful and versatile tools for identifying low-dimensional multimodal neural representations.
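The core ingredients of the approach — early fusion of two modalities into a shared latent code, modality-specific losses (a Poisson-style loss for spike counts, a Gaussian/MSE loss for LFP features), and the shuffle control that destroys cross-modal information — can be sketched in a few lines of NumPy. This is a minimal illustrative sketch only: the dimensions, the simulated data, the linear single-layer encoder and decoders, and the exact loss forms are all assumptions for exposition, not the architecture used on the poster.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the poster does not state sizes).
n_samples, n_units, n_lfp, n_latent = 200, 30, 16, 5

# Simulated stand-ins for binned spike counts and LFP features.
spikes = rng.poisson(3.0, size=(n_samples, n_units)).astype(float)
lfp = rng.normal(0.0, 1.0, size=(n_samples, n_lfp))

def init_params(n_in, n_latent, n_spk, n_lfp):
    """Random linear weights for a shared encoder and two decoders."""
    s = 0.1
    return {
        "enc": rng.normal(0, s, (n_in, n_latent)),
        "dec_spk": rng.normal(0, s, (n_latent, n_spk)),
        "dec_lfp": rng.normal(0, s, (n_latent, n_lfp)),
    }

def forward(params, spikes, lfp):
    # Early fusion: concatenate modalities BEFORE the shared encoder,
    # so the latent code jointly represents both signals.
    x = np.concatenate([spikes, lfp], axis=1)
    z = np.tanh(x @ params["enc"])           # shared latent code
    rate = np.exp(z @ params["dec_spk"])     # Poisson rate for counts
    lfp_hat = z @ params["dec_lfp"]          # Gaussian mean for LFP
    return z, rate, lfp_hat

def losses(spikes, lfp, rate, lfp_hat):
    # Modality-specific losses implicitly modelling the distributions:
    # Poisson negative log-likelihood (up to a constant) for counts,
    # mean squared error (Gaussian likelihood) for LFP features.
    poisson_nll = np.mean(rate - spikes * np.log(rate))
    mse = np.mean((lfp - lfp_hat) ** 2)
    return poisson_nll, mse

params = init_params(n_units + n_lfp, n_latent, n_units, n_lfp)
z, rate, lfp_hat = forward(params, spikes, lfp)
nll, mse = losses(spikes, lfp, rate, lfp_hat)

# Shuffle control: permuting the samples of one modality destroys
# the information it carries about the other while preserving its
# marginal statistics; training on such data yields the "unimodal"
# reconstruction performance baseline.
spikes_shuffled = spikes[rng.permutation(n_samples)]
z_s, rate_s, lfp_hat_s = forward(params, spikes_shuffled, lfp)
```

In practice the encoder and decoders would be deep networks trained by gradient descent on the summed modality losses; the sketch above only shows how the fused forward pass, the per-modality losses, and the shuffle control fit together.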


ID: 76047311