Bayesian Kernel Shaping for Learning Control

Jo-Anne Ting, Mrinal Kalakrishnan, Sethu Vijayakumar, Stefan Schaal

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In kernel-based regression learning, optimizing each kernel individually is useful when the data density, the curvature of regression surfaces (or decision boundaries), or the magnitude of output noise varies spatially. Previous work has suggested gradient descent techniques or complex statistical hypothesis methods for local kernel shaping, typically requiring some amount of manual tuning of meta-parameters. We introduce a Bayesian formulation of nonparametric regression that, with the help of variational approximations, results in an EM-like algorithm for simultaneous estimation of regression and kernel parameters. The algorithm is computationally efficient, requires no sampling, automatically rejects outliers, and has only one prior to be specified. It can be used for nonparametric regression with local polynomials or as a novel method to achieve nonstationary regression with Gaussian processes. Our methods are particularly useful for learning control, where reliable estimation of local tangent planes is essential for adaptive controllers and reinforcement learning. We evaluate our methods on several synthetic data sets and on an actual robot that learns a task-level control law.
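To make the flavor of the approach concrete, below is a minimal Python sketch of locally weighted linear regression with an EM-style robustness loop at a single query point. The function name local_fit, the fixed Gaussian bandwidth h, and the heavy-tailed reweighting rule are illustrative assumptions; the paper's actual algorithm additionally estimates the kernel shape itself through variational Bayesian updates rather than keeping it fixed.

```python
import numpy as np

def local_fit(X, y, xq, h, n_iter=30):
    """Local linear fit at query point xq (Gaussian kernel, bandwidth h).

    EM-style loop (illustrative, not the paper's exact variational updates):
      E-step: compute robustness weights that down-weight large-residual
              points, giving automatic outlier rejection.
      M-step: refit the local tangent plane by weighted least squares and
              re-estimate the output-noise variance.
    """
    n, d = X.shape
    Phi = np.hstack([X - xq, np.ones((n, 1))])                 # centered linear features
    k = np.exp(-0.5 * np.sum((X - xq) ** 2, axis=1) / h ** 2)  # distance kernel
    r = np.ones(n)                                             # robustness weights
    for _ in range(n_iter):
        w = k * r
        # M-step: weighted least squares for the local slope and offset
        A = Phi.T @ (w[:, None] * Phi) + 1e-8 * np.eye(d + 1)
        beta = np.linalg.solve(A, Phi.T @ (w * y))
        resid = y - Phi @ beta
        sigma2 = np.sum(w * resid ** 2) / max(np.sum(w), 1e-12)
        # E-step: heavy-tailed reweighting rejects outliers automatically
        r = 1.0 / (1.0 + resid ** 2 / (9.0 * sigma2 + 1e-12))
    return beta[:d], beta[-1]   # local tangent plane (slope) and value at xq

# Example: recover the local slope of a noisy sine at the origin.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(4.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)
slope, value = local_fit(X, y, xq=np.zeros(1), h=0.2)
print(slope, value)   # slope should be close to 4.0, value close to 0.0
```

The returned slope is the local tangent-plane estimate that the abstract highlights as essential for adaptive control; in the sketch the bandwidth h is fixed for brevity, whereas the paper's central contribution is to estimate the kernel shape within the same loop.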
Original language: English
Title of host publication: Proc. Advances in Neural Information Processing Systems (NIPS '08)
Number of pages: 8
Publication status: Published - 2008
