Geodesic Gaussian kernels for value function approximation

Masashi Sugiyama, Hirotaka Hachiya, Christopher Towell, Sethu Vijayakumar

Research output: Contribution to journal › Article › peer-review

Abstract

The least-squares policy iteration approach works efficiently for value function approximation, provided appropriate basis functions are chosen. Because of its smoothness, the Gaussian kernel is a popular and useful choice of basis function. However, it cannot capture the discontinuities that typically arise in real-world reinforcement learning tasks. In this paper, we propose a new basis function based on geodesic Gaussian kernels, which exploits the non-linear manifold structure induced by the Markov decision process. The usefulness of the proposed method is successfully demonstrated in simulated robot arm control and Khepera robot navigation.
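The core idea can be sketched as follows: replace the Euclidean distance inside an ordinary Gaussian kernel with the shortest-path (geodesic) distance on a graph built from the MDP's transition structure. Below is a minimal, hedged illustration of this construction, not the paper's own implementation; the function name, the SciPy shortest-path routine, and the toy chain-world example are all assumptions made for the sketch.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def geodesic_gaussian_features(adjacency, centers, sigma=1.0):
    """Basis functions phi_c(s) = exp(-d(s, c)^2 / (2 * sigma^2)), where d is
    the shortest-path (geodesic) distance on the state-transition graph.

    adjacency : (n_states, n_states) array; nonzero entries are edge costs
                between states that are directly reachable from one another.
    centers   : indices of the states chosen as kernel centres.
    sigma     : kernel width.
    """
    # All-pairs shortest paths stand in for geodesic distances on the
    # manifold induced by the MDP's transition structure.
    dist = shortest_path(adjacency, method="D", directed=False)
    d = dist[:, centers]  # distance from every state to each centre
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

# Hypothetical example: three states in a chain 0 - 1 - 2, kernel on state 0.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
phi = geodesic_gaussian_features(A, centers=[0], sigma=1.0)  # shape (3, 1)
```

Because distances are measured along the graph rather than through the ambient space, states separated by an obstacle (and hence far apart along any feasible path) receive dissimilar feature values even when they are close in Euclidean terms, which is how the discontinuities mentioned in the abstract are respected.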
Original language: English
Pages (from-to): 287-304
Number of pages: 18
Journal: Autonomous Robots
Volume: 25
Issue number: 3
DOIs
Publication status: Published - 2008

Keywords

  • Reinforcement learning
  • Value function approximation
  • Markov decision process
  • Least-squares policy iteration
  • Gaussian kernel

