The effect of the input density distribution on kernel-based classifiers

Christopher Williams, Matthias Seeger

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract / Description of output

The eigenfunction expansion of a kernel function K(x, y), as used in support vector machines or Gaussian process predictors, is studied when the input data are drawn from a distribution p(x). In this case it is shown that the eigenfunctions {φ_i} obey the equation ∫ K(x, y) p(x) φ_i(x) dx = λ_i φ_i(y). This has a number of consequences, including (i) the eigenvalues/eigenvectors of the n × n Gram matrix K, obtained by evaluating the kernel at all pairs of training points K(x_i, x_j), converge to the eigenvalues and eigenfunctions of the integral equation above as n → ∞, and (ii) the dependence of the eigenfunctions on p(x) may be useful for the class-discrimination task. We show that on a number of datasets using the RBF kernel the eigenvalue spectrum of the Gram matrix decays rapidly, and discuss how this property might be used to speed up kernel-based predictors.
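
Consequence (i) lends itself to a simple numerical check, and the rapid spectral decay the abstract reports is easy to observe. The following is a minimal sketch, not taken from the paper: it draws inputs from an assumed standard Gaussian density p(x), builds the RBF Gram matrix, and inspects the eigenvalues of K/n, whose leading values approximate the λ_i of the integral equation. The density, kernel width l, and sample size are illustrative assumptions.

    # Illustrative sketch (not from the paper): the eigenvalues of the
    # RBF Gram matrix, scaled by 1/n, approximate the integral-equation
    # eigenvalues lambda_i, and the spectrum decays rapidly.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(0.0, 1.0, size=n)   # assumed input density p(x) = N(0, 1)

    # RBF kernel Gram matrix: K_ij = exp(-(x_i - x_j)^2 / (2 l^2))
    l = 1.0                            # assumed kernel width
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * l ** 2))

    # Eigenvalues of (1/n) K approximate the lambda_i; sort descending.
    eigvals = np.linalg.eigvalsh(K / n)[::-1]

    print(eigvals[:10])                # the first few eigenvalues dominate
    print("fraction of trace in top 10:", eigvals[:10].sum() / eigvals.sum())

If the top few eigenvalues capture most of the trace, a low-rank approximation of the Gram matrix is accurate, which is the kind of property that can be exploited to speed up kernel-based predictors.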
Original language: English
Title of host publication: ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Publisher: Morgan Kaufmann Publishers Inc.
Pages: 1159-1166
Number of pages: 8
Publication status: Published - 2000
