Abstract
Recent studies have pointed out that the neural code may use multiplexing to encode unique information at different temporal scales of spike trains [1,2]. However, how to mathematically separate the components of a neural code and identify the unique contribution of each time scale to sensory coding and behaviour has remained an open challenge.
Here we investigated this problem by analytically deriving the information loss caused by adding to each spike time a random jitter drawn uniformly from a time window whose size is varied parametrically. Increasing the jitter window gradually, and in a controlled manner, destroys the contribution of the different temporal scales, from fine to coarse. The derivative of the information with respect to the jitter size therefore quantifies the non-redundant information that each temporal scale conveys about the stimulus, and summarizes in a single curve the strategy each individual neuron uses to temporally encode a particular set of stimuli.
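The jittering operation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `jitter_spikes` and the toy spike train are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter_spikes(spike_times, window):
    """Add independent uniform jitter in [-window/2, window/2) to each spike.

    Increasing `window` progressively destroys temporal structure at scales
    finer than `window` while leaving coarser structure intact.
    """
    spike_times = np.asarray(spike_times, dtype=float)
    return spike_times + rng.uniform(-window / 2, window / 2,
                                     size=spike_times.shape)

# Toy spike train (ms); a 5 ms jitter window washes out
# millisecond-scale patterning but preserves the ~40 ms gap.
spikes = np.array([10.0, 12.0, 50.0, 51.5])
jittered = jitter_spikes(spikes, window=5.0)
```

In this sketch, sweeping `window` over a range of values and measuring how much stimulus information survives at each value yields the information-versus-jitter curve whose derivative the abstract refers to.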
To implement this approach, which we termed the Information Derivative (IDE) method, we approximated the information derivative with respect to the jitter as follows: 1) we analytically derived the average firing rates associated with an arbitrary jitter size; 2) we used these analytical firing rates to compute the information contained in the jittered neural response, assuming no noise correlations. We tested the IDE method on simulated data in which several temporal scales are relevant for discriminating the presented stimuli, and confirmed that the method detects the contributing temporal scales and assigns the correct relative contribution to each.
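The two-step pipeline above can be sketched under simplifying assumptions. Uniform jitter acts on the trial-averaged rate as convolution with a boxcar of the window's width, and with no noise correlations the information can be approximated bin by bin; here each bin is treated as a Bernoulli spike/no-spike variable over two equiprobable toy stimuli. All function names, the 250 Hz toy PSTHs, and the Bernoulli-bin approximation are illustrative assumptions, not the authors' exact derivation.

```python
import numpy as np

def jittered_rates(psths, dt, window):
    """Analytical effect of uniform jitter on trial-averaged rates:
    convolve each stimulus PSTH with a boxcar of width `window`."""
    n = max(1, int(round(window / dt)))
    box = np.ones(n) / n
    return np.array([np.convolve(r, box, mode="same") for r in psths])

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def info_bits(rates, dt):
    """Stimulus information summed over time bins assumed independent
    (no noise correlations), each bin Bernoulli spike/no-spike."""
    p = np.clip(rates * dt, 0.0, 1.0)  # spike prob per bin, per stimulus
    return np.sum(binary_entropy(p.mean(axis=0))
                  - binary_entropy(p).mean(axis=0))

# Two toy stimuli whose PSTHs differ only at a fine (~4 ms period) scale
dt, T = 0.001, 0.2                       # 1 ms bins, 200 ms
t = np.arange(0.0, T, dt)
psths = np.array([40 + 30 * np.sin(2 * np.pi * 250 * t),
                  40 - 30 * np.sin(2 * np.pi * 250 * t)])  # spikes/s

windows = np.arange(0.001, 0.021, 0.001)  # jitter windows: 1-20 ms
info = np.array([info_bits(jittered_rates(psths, dt, w), dt)
                 for w in windows])
ide = -np.gradient(info, windows)         # information lost per unit jitter
```

In this toy example the information drops steeply once the jitter window approaches the ~4 ms period of the rate modulation, so the derivative peaks at the temporal scale that actually carries the stimulus information.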
The IDE method we propose allows inferring the contribution of different temporal scales to the information contained in the neural responses about a given set of stimuli. The method thus provides a way of studying in detail the information processing capabilities of a multiplexed neural code.
| Original language | English |
|---|---|
| Number of pages | 1 |
| Publication status | Published - 23 Sept 2016 |
| Event | Bernstein Conference 2016, 21 Sept 2016 → 23 Sept 2016 |