Efficient State-Space Representation by Neural Maps for Reinforcement Learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

For some reinforcement learning algorithms the optimality of the generated strategies can be proven. In practice, however, restrictions on the number of training examples and on computational resources corrupt optimality. The efficiency of these algorithms depends strikingly on the formulation of the task, including the choice of the learning parameters and the representation of the system states. We propose to improve learning efficiency by an adaptive classification of the system states which tends to group together states that are similar and acquire the same action during learning. The approach is illustrated by two simple examples; two further applications serve as a test of the proposed algorithm.
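The grouping criterion stated in the abstract — merge states that are both similar in state space and acquire the same action during learning — can be illustrated with a simple clustering pass. This is a hypothetical sketch, not the paper's actual neural-map method; the distance threshold, the scalar state representation, and the greedy-action test are all illustrative assumptions.

```python
def adaptive_state_groups(states, q_values, dist_threshold):
    """Group state indices: a state joins an existing group only if it is
    close to the group's representative (first member) AND currently shares
    that representative's greedy action. Hypothetical sketch of the
    'similar states with the same action' grouping idea."""
    def greedy_action(qs):
        # index of the action with the highest Q-value
        return max(range(len(qs)), key=qs.__getitem__)

    groups = []  # each group is a list of state indices
    for i, s in enumerate(states):
        for g in groups:
            rep = g[0]
            close = abs(s - states[rep]) <= dist_threshold
            same_action = greedy_action(q_values[i]) == greedy_action(q_values[rep])
            if close and same_action:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Two nearby states preferring action 0 merge; the distant state stays apart.
print(adaptive_state_groups([0.0, 0.1, 1.0],
                            [[1, 0], [1, 0], [0, 1]], 0.2))  # → [[0, 1], [2]]
```

Grouping states this way shrinks the effective state space, so fewer training examples are needed per (group, action) value estimate — the efficiency gain the abstract refers to.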
Original language: English
Title of host publication: Classification in the Information Age
Subtitle of host publication: Proceedings of the 22nd Annual GfKl Conference, Dresden, March 4–6, 1998
Publisher: Springer
Pages: 302-309
Number of pages: 8
ISBN (Electronic): 978-3-642-60187-3
ISBN (Print): 978-3-540-65855-9
DOIs
Publication status: Published - 1999

Publication series

Name: Studies in Classification, Data Analysis, and Knowledge Organization
Publisher: Springer Berlin Heidelberg
ISSN (Print): 1431-8814
