Edinburgh Research Explorer

Coding and decoding from neural populations representing multiple stimuli

Research output: Contribution to conference › Poster

Original language: English
Publication status: Published - 2014
Event: AREADNE 2014 Research in Encoding And Decoding of Neural Ensembles - Nomikos Conference Centre, Santorini, Greece
Duration: 25 Jun 2014 – 29 Jun 2014


Conference: AREADNE 2014 Research in Encoding And Decoding of Neural Ensembles


Most theoretical neural decoding studies assume populations of neurons tuned to a single stimulus dimension, such as grating orientation or motion direction. For such stimuli, maximum likelihood decoding in particular has been shown to be a robust and in many cases optimal decoder. In reality, however, neurons are rarely driven by a single stimulus alone: they experience a wide range of contextual effects when more complex stimuli are presented, and thus code for more than one stimulus at a time. Here we consider the influence of this on the maximum likelihood decoder.
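The single-stimulus baseline can be sketched as follows. This is a minimal illustration, not the poster's actual model: all parameters (von Mises tuning curves, 64 neurons, independent Poisson noise, grid-search ML) are illustrative assumptions.

```python
import numpy as np

def tuning(theta, prefs, r_max=20.0, kappa=2.0, r_base=1.0):
    """Von Mises tuning curves, 180-degree periodic as for orientation.
    All parameter values are illustrative."""
    return r_base + r_max * np.exp(kappa * (np.cos(2 * (theta - prefs)) - 1))

def ml_decode(counts, prefs, grid):
    """Maximum likelihood estimate under independent Poisson noise,
    found by exhaustive search over a grid of candidate orientations."""
    rates = tuning(grid[:, None], prefs[None, :])      # (n_grid, n_neurons)
    log_lik = (counts * np.log(rates) - rates).sum(1)  # Poisson log-likelihood
    return grid[np.argmax(log_lik)]

rng = np.random.default_rng(0)
prefs = np.linspace(0, np.pi, 64, endpoint=False)      # preferred orientations
grid = np.linspace(0, np.pi, 360, endpoint=False)
theta_true = np.pi / 3
counts = rng.poisson(tuning(theta_true, prefs))        # one noisy population response
theta_hat = ml_decode(counts, prefs, grid)
```

With a single stimulus dimension and independent Poisson noise, this estimator is asymptotically unbiased and efficient; the biases described below only appear once a second stimulus enters the encoding model.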
First, we consider how a center and surround grating pair are encoded in a primary visual cortex population. It is well known that surround gratings strongly modulate neuronal responses to the center. Furthermore, a recent study showed that the modulation is center-dependent for many neurons, being strongest when the center and surround are co-aligned, regardless of
a neuron’s preferred orientation (Shushruth et al., 2012). We set up two simple phenomenological population models implementing these features, one center dependent and one center independent. We then decode the center and surround orientations using a maximum likelihood decoder. Although the decoder has no bias in the absence of noise, in the presence of noise we find that (i) there is always a strong bias in the surround estimation, (ii) there is a strong bias in the center estimation for the center independent model, and (iii) there is no bias in the center estimation for the center dependent model.
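The distinction between the two surround-modulation rules can be sketched as below. The multiplicative-suppression form and all constants are illustrative assumptions, not the poster's actual models; the only structural commitment is where the suppression is anchored.

```python
import numpy as np

def f_center(theta_c, pref, r_max=20.0, kappa=2.0):
    """Classical orientation tuning to the center grating (illustrative)."""
    return r_max * np.exp(kappa * (np.cos(2 * (theta_c - pref)) - 1))

def gain_center_dependent(theta_c, theta_s, s_max=0.6, kappa_s=2.0):
    """Multiplicative gain under the center-dependent rule: suppression is
    strongest when the surround is co-aligned with the CENTER, regardless
    of the neuron's preference (cf. Shushruth et al., 2012)."""
    return 1.0 - s_max * np.exp(kappa_s * (np.cos(2 * (theta_s - theta_c)) - 1))

def gain_center_independent(theta_s, pref, s_max=0.6, kappa_s=2.0):
    """Gain under the center-independent rule: suppression is anchored to
    the neuron's PREFERRED orientation instead of the center stimulus."""
    return 1.0 - s_max * np.exp(kappa_s * (np.cos(2 * (theta_s - pref)) - 1))

pref = 0.0
theta_c = theta_s = np.pi / 4        # co-aligned center-surround pair
r_dep = f_center(theta_c, pref) * gain_center_dependent(theta_c, theta_s)
r_ind = f_center(theta_c, pref) * gain_center_independent(theta_s, pref)
```

Under the center-dependent rule a co-aligned pair is maximally suppressed for every neuron in the population; under the center-independent rule the suppression pattern is fixed per neuron, which is what allows surround noise to leak into the center estimate.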
Second, we study how to decode the orientations of two superimposed gratings. Although not fully understood, the responses to two superimposed stimuli do not seem to combine additively. Rather, the response to simultaneous presentation of both stimuli equals either the maximum of the two individual stimulus responses (e.g., in V1, Lampl et al., 2004) or a mean response (e.g., two motion directions in MT, van Wezel et al., 1996). We again set up two simple phenomenological models implementing these features, and decode both presented orientations. When the neurons respond up to a maximum, the decoder is mostly unbiased. However, for the mean response the decoder is not able to accurately estimate either stimulus, experiencing a bias dependent on the orientation difference.
In conclusion, we show that in the presence of noise a maximum likelihood decoder exhibits significant biases for both center-surround and plaid stimuli. Thus decoding slightly more complicated stimuli from even relatively simple models is non-trivial. In the case of V1 coding, center dependent surround modulation allows for unbiased decoding of the center stimulus, which suggests a functional reason for the observed surround tuning.
1. Shushruth et al., 2012, J. Neurosci., 32(1):308–321
2. Lampl et al., 2004, J. Neurophysiol., 92:2704–2713
3. van Wezel et al., 1996, Vision Res., 36:2805–2813

