## Abstract

Optimal multisensory integration requires that each input be weighted by the inverse of its uncertainty, so as to favor the more reliable inputs. In most models, this uncertainty comes from Gaussian noise corrupting the sensory evidence. In real life, however, uncertainty often arises from the difficulty of segmenting sensory signals from their background. For instance, the high level of uncertainty we experience when listening to a person speaking at a loud party comes mostly from the complexity of extracting the speech from the background chatter. The neural mechanisms that allow us to represent segmentation uncertainty and use it for optimal multisensory integration are currently unknown.
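The inverse-uncertainty weighting rule described above can be sketched numerically. For independent Gaussian cues, the statistically optimal combination weights each estimate by its inverse variance; the function name and the example numbers below are illustrative, not taken from the abstract:

```python
import numpy as np

def optimal_combination(estimates, variances):
    """Combine independent cues, weighting each by the inverse of its variance.

    Returns the combined estimate and its variance, which is never larger
    than the variance of the most reliable single cue.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    inv_var = 1.0 / variances
    weights = inv_var / inv_var.sum()      # normalized inverse-variance weights
    combined = np.sum(weights * estimates)
    combined_var = 1.0 / inv_var.sum()
    return combined, combined_var

# Hypothetical heading estimates (degrees): a noisy visual cue and a
# more reliable vestibular cue. The vestibular cue dominates the result.
est, var = optimal_combination([10.0, 2.0], [4.0, 1.0])
# est = 3.6, var = 0.8
```

Note that the combined variance (0.8) is below the best single-cue variance (1.0), which is the signature benefit of optimal integration.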

In laboratory settings, segmentation uncertainty can be induced with a stimulus composed of two sets of dots: one set moving coherently to mimic the spatial pattern of motion generated by self-motion, and the other moving randomly. Previous studies have shown that humans and monkeys can extract their direction of self-motion from such stimuli and can combine this information with vestibular signals near-optimally. Here we show that, at the neural level, this near-optimal integration can be implemented by combining the visual and vestibular neural patterns of activity with a nonlinear function known as divisive normalization. This nonlinear function can be linearized for a fixed level of coherence (the percentage of dots moving coherently), yielding a linear combination rule in which the weights assigned to the modalities are coherence-dependent. This is consistent with single-cell recordings in area MSTd, which have revealed that neurons use such near-optimal coherence-dependent weights (Fetsch et al., Nat Neurosci 2011).
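A minimal sketch of the divisive-normalization combination rule follows, in the general spirit of the Ohshiro et al. (2011) model: each unit's drive is a weighted sum of its visual and vestibular inputs, raised to a power and divided by the pooled activity of the population. The parameter values, exponent, and function signature here are illustrative assumptions, not the fitted model from the study:

```python
import numpy as np

def divisive_normalization(visual, vestibular, d_vis=1.0, d_ves=1.0,
                           n=2.0, sigma=0.5):
    """Combine two population responses via divisive normalization.

    d_vis, d_ves : modality dominance weights (assumed values)
    n            : expansive exponent applied to each unit's drive
    sigma        : semi-saturation constant controlling the normalization
    """
    drive = d_vis * np.asarray(visual, dtype=float) \
          + d_ves * np.asarray(vestibular, dtype=float)
    numerator = np.power(drive, n)
    # Each unit is divided by the population-pooled activity.
    return numerator / (sigma ** n + numerator.mean())

# Toy population responses: visual drive varies across units,
# vestibular drive is flat.
vis = np.array([1.0, 2.0, 3.0])
vest = np.array([0.5, 0.5, 0.5])
out = divisive_normalization(vis, vest)
```

Because the normalization pool scales with overall input strength, raising visual input strength (as higher coherence would) increases both the numerator and the denominator, so for a fixed coherence the rule behaves approximately like a linear combination whose modality weights depend on coherence.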

Divisive normalization has been reported in several cortical and subcortical areas, and has been shown to account for a variety of multisensory interactions (Ohshiro et al., Nat Neurosci 2011). Our results indicate that a variation of this nonlinearity can be tuned to be statistically near-optimal.


| Original language | English |
|---|---|
| Number of pages | 1 |
| Publication status | Published - 16 Oct 2012 |
| Event | Society for Neuroscience Annual Meeting 2012 - New Orleans. Duration: 13 Oct 2012 → 17 Oct 2012 |

### Conference

| Conference | Society for Neuroscience Annual Meeting 2012 |
|---|---|
| City | New Orleans |
| Period | 13/10/12 → 17/10/12 |