Strictly positive-definite spike train kernels for point-process divergences

Il Memming Park, Sohan Seth, Murali Rao, José C Principe

Research output: Contribution to journal › Article › peer-review


Exploratory tools that are sensitive to arbitrary statistical variations in spike train observations open up the possibility of novel neuroscientific discoveries. Developing such tools, however, is difficult because the space of spike trains lacks Euclidean structure, so experimenters usually fall back on simpler tools that capture only limited statistical features, such as mean spike count or mean firing rate. We explore strictly positive-definite kernels on the space of spike trains to provide both a structural representation of this space and a platform for developing statistical measures that capture features beyond count or rate. We apply these kernels to construct measures of divergence between two point processes and use them for hypothesis testing, that is, to test whether two sets of spike trains originate from the same underlying probability law. Although positive-definite spike train kernels exist in the literature, we establish that these kernels are not strictly positive definite and thus do not induce measures of divergence. We discuss the properties of both the existing nonstrict kernels and the novel strict kernels in terms of their computational complexity, choice of free parameters, and performance on both synthetic and real data through kernel principal component analysis and hypothesis testing.
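The construction described in the abstract can be illustrated with a minimal sketch. The code below is not the paper's exact kernel; it assumes a common recipe in this family: smooth each spike train with a Gaussian of width `sigma`, take the induced inner product between all spike pairs, form the resulting squared distance, and exponentiate it to obtain a kernel that can separate point-process distributions. An empirical MMD-style divergence between two sets of spike trains is then built from this kernel. All function names and parameter values are illustrative.

```python
import math

def cross_intensity(x, y, sigma=0.05):
    """Inner product between two spike trains (lists of spike times),
    obtained by summing Gaussian interactions over all spike pairs."""
    return sum(math.exp(-((xi - yj) ** 2) / (2.0 * sigma ** 2))
               for xi in x for yj in y)

def strict_kernel(x, y, sigma=0.05, tau=1.0):
    """Exponentiated-distance kernel on spike trains.

    d2 is the squared distance induced by cross_intensity; applying
    exp(-d2 / tau) is the standard way to turn a positive-definite
    inner product into a strictly positive-definite (Gaussian-type)
    kernel on the underlying space."""
    d2 = (cross_intensity(x, x, sigma) + cross_intensity(y, y, sigma)
          - 2.0 * cross_intensity(x, y, sigma))
    return math.exp(-d2 / tau)

def mmd2(A, B, **kw):
    """Biased empirical squared-MMD divergence between two samples of
    spike trains; it vanishes when A and B share the same law."""
    kaa = sum(strict_kernel(a1, a2, **kw) for a1 in A for a2 in A) / len(A) ** 2
    kbb = sum(strict_kernel(b1, b2, **kw) for b1 in B for b2 in B) / len(B) ** 2
    kab = sum(strict_kernel(a, b, **kw) for a in A for b in B) / (len(A) * len(B))
    return kaa + kbb - 2.0 * kab
```

In a permutation test of the kind the abstract mentions, `mmd2` would be recomputed over shuffled assignments of spike trains to the two groups to obtain a null distribution for the observed divergence.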
Original language: English
Pages (from-to): 2223-2250
Number of pages: 28
Journal: Neural Computation
Issue number: 8
Publication status: Published - Aug 2012


