Missing Data in Kernel PCA

Guido Sanguinetti, Neil D. Lawrence

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Kernel Principal Component Analysis (KPCA) is a widely used technique for visualisation and feature extraction. Despite its success and flexibility, the lack of a probabilistic interpretation means that some problems, such as handling missing or corrupted data, are very hard to deal with. In this paper we exploit the probabilistic interpretation of linear PCA together with recent results on latent variable models in Gaussian Processes in order to introduce an objective function for KPCA. This in turn allows a principled approach to the missing data problem. Furthermore, this new approach can be extended to reconstruct corrupted test data using fixed kernel feature extractors. The experimental results show strong improvements over widely used heuristics.
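To make the idea of an objective-driven treatment of missing data more concrete, here is a minimal sketch in the linear-PCA setting that the abstract builds on; it is not the authors' kernelised method, and the function name pca_impute, the rank, the iteration count, and the synthetic data are all assumptions for illustration. It alternates between a low-rank PCA reconstruction and refilling the missing entries, starting from the mean-imputation heuristic the paper improves upon.

# A minimal sketch (assumption: linear PCA, NumPy only), not the paper's KPCA algorithm.
import numpy as np

def pca_impute(X, n_components=2, n_iters=50):
    """Fill missing entries (NaN) by alternating a low-rank PCA fit and reconstruction."""
    X = X.copy()
    missing = np.isnan(X)
    # Initialise missing entries with column means (the common heuristic).
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.nonzero(missing)[1])
    for _ in range(n_iters):
        mu = X.mean(axis=0)
        Xc = X - mu
        # Reconstruct from the leading principal components via the SVD.
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        recon = U[:, :n_components] @ np.diag(S[:n_components]) @ Vt[:n_components] + mu
        # Only the missing entries are updated; observed values stay fixed.
        X[missing] = recon[missing]
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(100, 2))
    W = rng.normal(size=(2, 5))
    X_full = latent @ W + 0.1 * rng.normal(size=(100, 5))
    X_obs = X_full.copy()
    X_obs[rng.random(X_obs.shape) < 0.2] = np.nan  # corrupt 20% of entries
    X_hat = pca_impute(X_obs)
    print(f"RMSE after imputation: {np.sqrt(np.mean((X_hat - X_full) ** 2)):.3f}")

The paper's contribution is to give KPCA an objective function of this reconstructive kind, so that missing or corrupted entries can be handled in feature space rather than by ad hoc imputation in the input space.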
Original language: English
Title of host publication: Machine Learning: ECML 2006
Subtitle of host publication: 17th European Conference on Machine Learning, Berlin, Germany, September 18-22, 2006, Proceedings
Editors: Johannes Fürnkranz, Tobias Scheffer, Myra Spiliopoulou
Publisher: Springer
Pages: 751-758
Number of pages: 8
ISBN (Electronic): 978-3-540-46056-5
ISBN (Print): 978-3-540-45375-8
DOIs
Publication status: Published - 2006

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Berlin Heidelberg
Volume: 4212
ISSN (Print): 0302-9743
