Abstract
Fast, approximate nearest-neighbor search methods have recently become popular for scaling nonparametric regression to complex, high-dimensional applications. As an alternative to fast nearest-neighbor search, training data can also be incorporated online into appropriate sufficient statistics and adaptive data structures, so that approximate nearest-neighbor predictions can be accelerated by orders of magnitude by exploiting the compact representations of these sufficient statistics. This chapter describes such an approach for locally weighted regression with locally linear models. We first focus on local dimensionality reduction techniques in order to scale locally weighted learning to domains with very high-dimensional input data. The key issue here is obtaining a statistically robust and computationally inexpensive estimate of local linear models in such large spaces, despite potentially irrelevant and redundant inputs. We develop a local version of partial least squares regression that fulfills these requirements, and embed it in an incremental nonlinear regression algorithm that works efficiently in a number of complex applications. In the second part of the chapter, we introduce a novel Bayesian formulation of partial least squares regression that turns our nonparametric regression approach into a probabilistic one. This new algorithm eliminates some of the heuristic components inherent in partial least squares by means of efficient Bayesian regularization techniques. Evaluations are provided for all algorithms on various synthetic data sets and on real-time learning examples with anthropomorphic robots and complex simulations.
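The chapter itself details the incremental, sufficient-statistics algorithms; purely as a rough illustration of the core idea, the following is a minimal batch sketch of locally weighted regression with a local partial least squares (PLS) projection. This is not the chapter's algorithm, and all function, variable, and parameter names here are our own.

```python
# A minimal batch sketch of locally weighted PLS regression: Gaussian
# weights centred on a query point, followed by weighted PLS1 on the
# weighted, mean-centred data. Illustrative only; the chapter's method
# is incremental and maintains these quantities as sufficient statistics.
import numpy as np

def lwpls_predict(X, y, query, bandwidth=1.0, n_components=2):
    """Predict y at `query` with a locally weighted PLS linear model.

    X : (N, d) training inputs, y : (N,) training targets.
    """
    # Gaussian kernel weights centred on the query point.
    d2 = np.sum((X - query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)

    # Weighted mean-centring of inputs, targets, and the query.
    x_mean = (w @ X) / w.sum()
    y_mean = (w @ y) / w.sum()
    Xc, yc = X - x_mean, y - y_mean
    z = query - x_mean

    pred = y_mean
    for _ in range(n_components):
        # Project onto the direction of maximal weighted input-output
        # covariance, regress along it, then deflate and repeat.
        u = Xc.T @ (w * yc)
        u /= np.linalg.norm(u) + 1e-12
        s = Xc @ u                         # 1-D latent scores
        sw = w * s
        beta = (sw @ yc) / (sw @ s + 1e-12)  # weighted univariate regression
        p = (sw @ Xc) / (sw @ s + 1e-12)     # input loading for deflation
        s_q = z @ u
        pred += beta * s_q
        Xc = Xc - np.outer(s, p)           # deflate inputs
        yc = yc - beta * s                 # deflate residual targets
        z = z - s_q * p                    # deflate the query consistently
    return pred

# Example: y depends only on x0; the other 4 dimensions are distractors,
# so the PLS projection must ignore irrelevant inputs.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
q = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
print(lwpls_predict(X, y, q, bandwidth=0.5))  # roughly sin(1.0) = 0.84
```

The deflation steps mirror standard PLS1; one-dimensional regressions along successive covariance-maximizing directions are what keep the local model estimation statistically robust and cheap even when many inputs are redundant or irrelevant.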
| Original language | English |
|---|---|
| Title of host publication | Nearest-Neighbor Methods in Learning and Vision |
| Publisher | MIT Press |
| Pages | 103-142 |
| Number of pages | 40 |
| ISBN (Print) | 9780262195478 |
| Publication status | Published - 2006 |