Natural Actor-Critic

Jan Peters, Sethu Vijayakumar, Stefan Schaal

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


This paper investigates a novel model-free reinforcement learning architecture, the Natural Actor-Critic. The actor updates are based on stochastic policy gradients employing Amari's natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural policy gradients are particularly appealing as these are independent of the coordinate frame of the chosen policy representation, and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke's Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm.
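The core identity behind the critic described above — regressing (baseline-centred) returns onto the compatible features ∇θ log π yields the natural policy gradient directly, with no explicit Fisher-matrix inversion — can be sketched on a toy problem. The snippet below is a minimal illustration, not the paper's algorithm: the two-armed bandit, the batch sizes, and the step size are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Illustrative 2-armed bandit: arm 1 pays 1.0 on average, arm 0 pays 0.0.
arm_means = np.array([0.0, 1.0])
theta = np.zeros(2)           # softmax policy parameters (one logit per arm)
alpha = 0.5                   # step size along the natural gradient

for _ in range(200):
    pi = softmax(theta)
    feats, rewards = [], []
    for _ in range(50):
        a = rng.choice(2, p=pi)
        r = arm_means[a] + 0.1 * rng.standard_normal()
        grad_log_pi = -pi.copy()
        grad_log_pi[a] += 1.0             # ∇θ log π(a) for a softmax policy
        feats.append(grad_log_pi)
        rewards.append(r)
    Phi = np.asarray(feats)
    adv = np.asarray(rewards) - np.mean(rewards)   # crude baseline-centred "advantage"
    # Key identity: the least-squares weights of the compatible-feature fit
    # A(a) ~ w . grad_log_pi(a) are an estimate of the natural gradient.
    w, *_ = np.linalg.lstsq(Phi, adv, rcond=None)
    theta += alpha * w        # natural-gradient actor update

pi_final = softmax(theta)     # the policy should now strongly prefer arm 1
```

Because the samples are drawn from π itself, the normal equations of this regression contain the Fisher information matrix, which is why the fitted weights come out already preconditioned — the coordinate-frame independence the abstract highlights.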
Original language: English
Title of host publication: Machine Learning: ECML 2005
Subtitle of host publication: 16th European Conference on Machine Learning, Porto, Portugal, October 3-7, 2005. Proceedings
Publisher: Springer-Verlag GmbH
Number of pages: 12
ISBN (Electronic): 978-3-540-31692-3
ISBN (Print): 978-3-540-29243-2
Publication status: Published - 3 Oct 2005
Event: 16th European Conference on Machine Learning - Porto, Portugal
Duration: 3 Oct 2005 - 7 Oct 2005

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Berlin / Heidelberg
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 16th European Conference on Machine Learning
Abbreviated title: ECML 2005

